It’s a bit like the Wild West out there with AI-generated content, isn’t it? One minute you’re marveling at a hyper-realistic AI-generated image, and the next you’re hearing about actors’ likenesses being used in scams so convincing that even their families can’t tell the difference. This stark reality, highlighted by incidents like actor Wang Jinsong’s likeness being deepfaked to front a wealth scam, has put the spotlight on the darker side of rapidly advancing technology.
This isn't just a fringe issue; it’s something that’s been on the minds of lawmakers. During China’s Two Sessions, the idea of putting “safety guardrails” around AI became a hot topic. For instance, National People's Congress representative Liu Xiaojing proposed a mandatory labeling system for AI-generated content, suggesting that videos and audio created by AI should have an indelible digital watermark. This idea certainly struck a chord with many online.
And it’s not just talk. China has already taken steps, with the "Measures for the Identification of AI-Generated Content" officially taking effect on September 1, 2025. This regulation mandates that AI-generated content must be labeled. The approach is twofold: explicit labels for content that could confuse or mislead the public, and implicit, embedded metadata for service providers to ensure traceability and accountability. This marks a significant move towards a more regulated era for AI content, a sort of “certified to work” status for AI creations.
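To make the “implicit label” idea a little more concrete: at its simplest, it means the generating service embeds machine-readable provenance data inside the file itself, alongside any visible marking. The sketch below is purely illustrative (the key name “AIGC:Label” and the JSON fields are my own assumptions, not the regulation’s actual format), showing how such metadata could be written into and read back from a PNG with Python’s Pillow library.

```python
# Illustrative sketch of an "implicit label": provenance metadata embedded in
# a PNG text chunk. The key "AIGC:Label" and the JSON fields are assumptions
# for demonstration, not the format mandated by the regulation.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_implicit_label(src_path, dst_path, provider, content_id):
    """Write a machine-readable provenance label into the image file."""
    label = json.dumps({
        "generated_by_ai": True,
        "provider": provider,      # which service generated the content
        "content_id": content_id,  # ID that lets regulators trace it back
    })
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIGC:Label", label)
    img.save(dst_path, pnginfo=meta)

def read_implicit_label(path):
    """Return the embedded label as a dict, or None if it is missing."""
    raw = Image.open(path).info.get("AIGC:Label")
    return json.loads(raw) if raw else None
```

Of course, plain metadata like this can be stripped with a single re-save, which is exactly why the evasion business described below is viable, and why tamper-resistant labeling matters.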
However, even with these rules in place, the reality on the ground is proving to be a bit more complex. While many platforms have introduced labeling features, a considerable amount of AI-generated content continues to spread “under the radar” across short videos, image posts, and even live streams. What’s more concerning is that some illicit activities have gone underground, with deepfake technology intertwining with black-market operations to form new illicit industry chains.
Just how serious is this? In February, cyberspace administrations dealt with over ten thousand violating accounts and cleared more than half a million pieces of illegal information. These numbers paint a picture of a constant, intense “cat-and-mouse game” between regulators and those trying to circumvent the rules.
So, why do these issues persist even with regulations? The core problem seems to be a lag in the governance chain. At the source, while the law draws clear lines, some bad actors are actively running an “anti-labeling” business. New tricks and services to defeat detection emerge constantly, with evasion guides even being offered at set prices, keeping the evaders one step ahead.
During dissemination, platform reviews often struggle to keep pace with the speed of fabrication. Faced with a massive volume of content, current AI detection tools can falter in complex situations. Malicious actors can easily bypass checks by using locally run open-source models, leaving platforms in a constant state of vulnerability. And when it comes to penalties, as Wang Jinsong himself pointed out, the cost of violating the law is too low, diminishing its deterrent effect and encouraging some to take risks.
To break this deadlock, simply “sticking labels” isn’t enough. We need to build a more robust, multi-layered defense system that ensures content is “identifiable, traceable, and accountable.”
Firstly, the technological defenses must be upgraded. This means pushing for standardization in AI labeling technology to eliminate loopholes between different platforms. It also means accelerating the development of tamper-proof implicit labeling technologies, using AI to counter AI, thereby improving detection accuracy and making it impossible for “invisible” fake content to hide.
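What might “tamper-proof” mean in practice? One common building block, sketched below only as an illustration (the function names and key handling are assumptions, and a real scheme would pair this with robust watermarking that survives re-encoding), is to bind the label to the content cryptographically, so that deleting or editing either one becomes detectable.

```python
# Illustrative sketch: cryptographically binding a label to the content so
# that stripping or altering either one can be detected. A production scheme
# would add robust watermarking that survives compression and cropping; the
# secret key here is assumed to be held by the provider or a trusted authority.
import hmac
import hashlib

SECRET_KEY = b"provider-held-secret"  # assumption for demonstration only

def sign_label(content: bytes, label: str) -> str:
    """Return a tag binding the content bytes and the label text together."""
    msg = hashlib.sha256(content).digest() + label.encode("utf-8")
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def verify_label(content: bytes, label: str, tag: str) -> bool:
    """True only if neither the content nor its label has been tampered with."""
    return hmac.compare_digest(sign_label(content, label), tag)
```

A platform receiving content, label, and tag together could then reject anything whose tag fails verification, turning a deleted or altered label from an invisible act into a detectable one.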
Secondly, the boundaries of rights and responsibilities need to be clearly defined: whoever generates, whoever disseminates, and whoever uses AI content must each bear unambiguous responsibility. For those who maliciously strip labels or provide evasion tools, stricter penalties are necessary. The goal is to create a system where the convenience of AI doesn’t come at the expense of truth and trust.
