We now live in a world where seeing isn't always believing. Most of us have stumbled across a photo or video online that made us pause and wonder, "Is that real?" For many, that question became a lot more urgent in 2024 as Meta, the parent company of Facebook and Instagram, grappled with a rising tide of AI-generated content.
Remember Jake? His story is a stark reminder of the real-world consequences. He lost a staggering AUD 130,000, duped by a scam that leveraged the likeness of popular celebrities in AI-manipulated videos. It’s a chilling example of how sophisticated AI can be used to exploit trust, and it underscores why platforms like Meta are feeling the pressure to act.
Back in February 2024, Meta announced a significant shift in its approach: it would start labeling content created with AI. This wasn't a minor tweak; it was a move to bring transparency to the digital spaces where billions of us spend our time. The idea is simple: a text label, appearing above photos and videos, indicates that some or all of the content has been generated or manipulated by AI, giving users a clear signal to help them discern what's authentic from what's digitally crafted.
But as is often the case with rapidly evolving technology, Meta's policy wasn't a static document. Throughout 2024, the company continued to refine its approach: a comprehensive blog post in April detailed its evolving strategy and acknowledged the complexities of labeling AI-generated content and manipulated media. It's a continuous process, a dance between innovation and safeguarding users.
By May, the commitment to labeling became more concrete. Meta began applying "Made with AI" labels to videos, images, and audio across its platforms. This expansion was a direct response to growing concerns, both from users and governments, about the potential risks of deepfakes and other AI-driven manipulations. Monika Bickert, Vice President of Content Policy, highlighted this move, explaining that the policy was broadening beyond just a narrow category of doctored videos.
What's particularly interesting is Meta's tiered approach. While a standard "Made with AI" label serves as a general indicator, they also introduced more prominent labels for content deemed to pose a "particularly high risk of materially deceiving the public on a matter of importance." This means that if an AI-generated piece of content could significantly mislead people on a crucial topic, it gets a more attention-grabbing flag. This nuanced approach acknowledges that not all AI content carries the same level of risk.
This new policy represents a significant departure from Meta's previous stance. Before, its 'manipulated media' policy focused primarily on videos in which AI made someone appear to say something they didn't, and content that violated the rule was often removed. The updated policy is more permissive: rather than removing such content, it leaves it online with a clear label. It also extends to videos showing someone doing something they didn't do, as well as to photos and audio. This broader scope reflects rapid advances in AI, which can now produce highly realistic images and audio, not just videos.
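To make the tiered approach concrete, here is a minimal sketch of how such a labeling decision could be modeled in Python. It is illustrative only: the field names, risk threshold, and `choose_label` function are assumptions invented for this example, not Meta's published implementation or its actual detection signals.

```python
from dataclasses import dataclass
from enum import Enum


class Label(Enum):
    NONE = "no label"
    MADE_WITH_AI = "Made with AI"
    HIGH_RISK = "prominent high-risk label"


@dataclass
class ContentSignals:
    """Hypothetical signals a platform might attach to a piece of content."""
    ai_generated: bool          # detected or self-disclosed AI involvement
    deception_risk: float       # estimated risk of materially deceiving viewers, 0 to 1
    matter_of_importance: bool  # e.g. elections, public health, public safety


def choose_label(signals: ContentSignals, high_risk_threshold: float = 0.8) -> Label:
    """Pick a label tier roughly mirroring the policy described above:
    ordinary AI content gets the standard label, while content that could
    materially deceive the public on a matter of importance gets a more
    prominent one. The threshold value is an arbitrary stand-in."""
    if not signals.ai_generated:
        return Label.NONE
    if signals.matter_of_importance and signals.deception_risk >= high_risk_threshold:
        return Label.HIGH_RISK
    return Label.MADE_WITH_AI


if __name__ == "__main__":
    benign = ContentSignals(ai_generated=True, deception_risk=0.1, matter_of_importance=False)
    risky = ContentSignals(ai_generated=True, deception_risk=0.9, matter_of_importance=True)
    print(choose_label(benign).value)  # -> "Made with AI"
    print(choose_label(risky).value)   # -> "prominent high-risk label"
```

The point of the sketch is the ordering of the checks: detection first, then an escalation test, so the rare high-risk case is never hidden behind the default "Made with AI" label.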
The company itself noted that its original policy, written in 2020, assumed a landscape in which realistic AI-generated content was rare and the main concern was video manipulation. The last few years, and especially the last year, have seen an explosion in AI capabilities, creating the need for a more robust and adaptable framework.
We're seeing this play out globally. In the US, regulators have already had to address AI-generated 'robocalls' that mimicked political figures. The White House has also pledged to tackle the issue of non-consensual deepfake pornography, especially after disturbing fake nude images of a pop star circulated online. Even former political figures have voiced concerns about AI being used to alter their public image.
Meta isn't alone in this endeavor. Other major tech players are also implementing similar strategies. TikTok, for instance, has been asking users to label their own AI-generated content for a while, and YouTube has introduced an honor-based system. It seems the industry is collectively realizing that transparency through labeling is a crucial step in building trust in an increasingly AI-influenced digital world. As we move through 2025, expect these policies to continue evolving, mirroring the relentless pace of AI development.
