It’s a question many of us have pondered while scrolling through our feeds: is that hilarious video of a politician singing opera real, or was it conjured by AI? This growing blur between reality and digital creation is precisely what platforms like TikTok are grappling with, and they're starting to roll out some pretty thoughtful solutions.
TikTok, a global hub for creativity with over a billion users, finds itself in a unique position. It's not just a place where AI-generated content (AIGC) lands; it's also building tools that empower creators to make it. This dual role means TikTok is both a 'Builder' and a 'Distributor' of synthetic media, as defined by the Partnership on AI (PAI) Synthetic Media Framework. Its mission, as the company sees it, is a delicate balancing act: fostering unbridled creative expression while building guardrails to prevent harmful misinformation.
Transparency has always been a cornerstone for TikTok, evident in their existing labels for sponsored content, state-affiliated media, and verified accounts. But with AIGC, the stakes feel higher. Imagine a piece of political satire – funny and clearly fake when labeled, but potentially misleading if presented as fact without context. This is where the challenge truly lies.
Back in the autumn of 2022, before AI-generated content flooded online spaces, TikTok's Integrity and Authenticity Policy team began anticipating these issues. They recognized the potential for both good and harm. While TikTok already had rules against content that distorted real-world events, the team saw a clear need for specific guidance on synthetic media. The goal was to cultivate a culture where creators could explore AI's creative possibilities openly, but with viewers always in the loop.
To shape these policies, TikTok didn't go it alone. They consulted with experts, including members of their Safety Advisory Committee, WITNESS, and Dr. David Rand from MIT, whose research delves into how people perceive AI labels. This collaboration was instrumental in designing the AI-generated content labels that are now taking shape.
In April 2023, TikTok introduced a policy requiring creators to disclose realistic synthetic media. Initially, this disclosure could take any format the creator chose – a sticker, a caption, you name it. But to streamline things and ensure viewers consistently understood what they were seeing, TikTok's Trust & Safety Product team developed a dedicated toggle. This feature makes it much simpler for creators to disclose their AIGC.
The tricky part, of course, was deciding where to draw the line: when does AI use in user-generated content (UGC) warrant a label? TikTok wanted to avoid a system that would cause 'label fatigue' and dilute the impact of important disclosures. So minor edits made with AI tools, or simple filters that don't fundamentally alter an image, aren't mandated for labeling. 'Realistic' AI uses, by contrast, must be labeled, and for content that's wholly generated or significantly altered by AI, labeling is strongly encouraged. This thoughtful approach aims to empower creators while safeguarding the integrity of information on the platform, a crucial step as we all navigate this rapidly evolving digital landscape.
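To make the distinctions above concrete, here is a rough sketch of that disclosure logic as a decision rule. This is purely illustrative, not TikTok's actual implementation: the function name, the tiers, and the input flags are all hypothetical stand-ins for the policy categories described in the article.

```python
from enum import Enum

class LabelTier(Enum):
    NOT_MANDATED = "no label mandated"   # minor AI edits, simple filters
    ENCOURAGED = "labeling encouraged"   # wholly generated or significantly altered
    REQUIRED = "label required"          # realistic synthetic media

def disclosure_tier(is_realistic: bool, is_wholly_or_significantly_ai: bool) -> LabelTier:
    """Hypothetical sketch of the labeling tiers described in the text.

    Realistic AI uses must be labeled; wholly generated or significantly
    altered content is strongly encouraged to be labeled; minor edits and
    simple filters fall outside the mandate.
    """
    if is_realistic:
        return LabelTier.REQUIRED
    if is_wholly_or_significantly_ai:
        return LabelTier.ENCOURAGED
    return LabelTier.NOT_MANDATED
```

The ordering matters: realism is checked first, so a realistic deepfake always lands in the 'required' tier even if it was also wholly AI-generated.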
