It feels like just yesterday we were marveling at AI's ability to generate text, and now, here we are, grappling with a whole new wave of synthetic media flooding our favorite platforms. TikTok, ever at the forefront of digital trends, has been proactively shaping how we interact with this evolving content. They've been working on a thoughtful approach to AI-generated content (AIGC) labeling, aiming to strike that delicate balance between fostering creativity and preventing the spread of misleading information.
Think about it: TikTok is a global hub for over a billion users, a place where self-expression through video is paramount. This naturally includes creators using AI tools, whether they're crafting content off-platform or utilizing TikTok's own AI-powered creative features. Recognizing their role as both a builder and distributor of this kind of content, TikTok has aligned itself with frameworks like the Partnership on AI's Synthetic Media Framework. Their goal? To empower creators to explore the exciting possibilities of AI while ensuring transparency and mitigating potential harms.
This isn't a new concern for TikTok. They've long prioritized transparency with labels for sponsored content, state-affiliated media, and verified accounts. But with AIGC, the stakes feel a bit higher, especially when it comes to preventing users from being misled. The nuances are particularly tricky, like with political satire. What might be clearly humorous when labeled could easily be misinterpreted as fact if left unmarked.
The Challenge of Anticipating Harm
The real challenge for TikTok, as they've described it, was anticipating the potential downsides of AI tools while also acknowledging their positive uses. Back in the autumn of 2022, when awareness of synthetic media tools was growing but they weren't yet ubiquitous, TikTok's Integrity and Authenticity Policy team began developing their initial policies. Even though they already had rules against content that could mislead about real-world events, they foresaw potential harms – from harassment and bullying of individuals depicted in synthetic content to broader platform integrity issues if AIGC wasn't clearly understood.
They wanted to establish clear on-platform norms. This meant creating a policy that would encourage creators to be transparent about their AI use, allowing them to explore the creative potential of AI, while simultaneously protecting viewers from being deceived. To get this right, they consulted with experts, including members of their Safety Advisory Committee, WITNESS, and Dr. David Rand from MIT, who studies how people perceive AI labels. Dr. Rand's research was instrumental in shaping the design of TikTok's AI-generated content labels.
The Evolution of the Labeling Policy
In April 2023, TikTok rolled out a policy requiring creators to disclose realistic synthetic media. Initially, they asked users to disclose their use of AI in a way they chose – a sticker, a caption, whatever worked. But to make things even simpler and provide consistent visual cues for viewers, they later developed a dedicated toggle within their Trust & Safety product. This allows users to easily apply a label to their AIGC.
A significant part of this process involved deciding where to draw the line: when should AI use in user-generated content (UGC) trigger a mandatory label? This was a crucial decision, since over-labeling could overwhelm users and dilute the labels' impact. TikTok didn't want to flag every minor AI edit or filter that doesn't fundamentally alter an image. Ultimately, they settled on requiring a label for all realistic uses of AI, while strongly encouraging one for any content that is wholly generated or significantly edited by AI. Minor edits fall outside the requirement entirely. And the policy has teeth: realistic synthetic media that goes unlabeled is removed, signaling a clear commitment to authenticity and user trust heading through 2024 and into 2025.
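To make the shape of that decision rule concrete, here's a minimal sketch of the labeling logic as a small Python function. This is purely illustrative: the `AIUse` categories, the function name, and the returned outcome strings are my assumptions for clarity, not TikTok's actual enforcement code.

```python
from enum import Enum, auto

class AIUse(Enum):
    """Illustrative categories of AI involvement (assumed, not TikTok's taxonomy)."""
    NONE = auto()
    MINOR_EDIT = auto()        # filters, small touch-ups that don't alter meaning
    SIGNIFICANT_EDIT = auto()  # AI substantially changes the content
    WHOLLY_GENERATED = auto()  # content created entirely by AI

def label_decision(ai_use: AIUse, is_realistic: bool, has_label: bool) -> str:
    """Sketch of the policy as described: realistic AIGC must carry a label
    (unlabeled realistic content is removed); wholly generated or significantly
    edited content is encouraged to carry one; minor edits are exempt."""
    if ai_use in (AIUse.NONE, AIUse.MINOR_EDIT):
        return "allow"                             # outside the labeling requirement
    if is_realistic:
        return "allow" if has_label else "remove"  # label is mandatory here
    return "allow, label encouraged"               # encouraged but not required
```

The key design point the policy reflects is that realism, not merely AI involvement, is what triggers the mandatory label.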
