It’s a question that’s becoming increasingly common, isn't it? "Is that Instagram photo AI?" The rapid advancement of artificial intelligence has blurred the lines between what's real and what's digitally crafted, and Meta, the parent company of Facebook and Instagram, has been grappling with how to address this head-on.
Back in February 2024, Meta announced a significant shift in its approach to labeling AI-generated content. This wasn't just a minor tweak; it was a response to a growing concern, a concern highlighted by stories like that of 'Jake,' who tragically lost AUD 130,000 to a scam that leveraged AI to impersonate popular figures. While this change came too late for him, it signaled Meta's commitment to preventing future AI-driven fraud.
The initial AI labeling feature, rolled out in May 2024, was designed to place a text label above photos and videos that contained AI-generated or manipulated elements. It was a broad stroke, aiming to cover everything from entirely AI-created images to those that had been partially altered. But as we’ve seen, the digital world moves at lightning speed, and Meta’s policy has been anything but static. Throughout 2024, the company continued to refine its approach, publishing detailed explanations like the April blog post, "Our Approach to Labeling AI-Generated Content and Manipulated Media."
This evolution wasn't just about transparency; it was also about managing risk. Monika Bickert, Meta's Vice President of Content Policy, explained that the company would apply "Made with AI" labels to AI-generated videos, images, and audio. Crucially, they also planned to introduce more prominent labels for digitally altered media that posed a "particularly high risk of materially deceiving the public on a matter of importance." This distinction is vital, acknowledging that not all AI content carries the same potential for harm.
Beyond immediate concerns of misinformation and scams, Meta has also been exploring more futuristic, and perhaps more ethically complex, applications of AI. A patent granted in December 2025, initially filed in 2023, reveals a fascinating, albeit controversial, concept: using large language models to let a deceased user "continue living" on social platforms in AI form. The idea is to create a "digital doppelganger" by analyzing a user's past posts, comments, and even private messages to simulate their online presence. This could range from liking friends' photos to, in some scenarios, simulating video calls with the user's voice and likeness.
Meta's stated motivation for this patent is to mitigate the "more serious and lasting" impact of user loss on the platform's ecosystem, essentially viewing user death as a "systemic risk" to engagement. However, the company has clarified that they "currently have no plans to advance this specific example," and that applying for defensive patents is standard practice. Still, the mere existence of such a patent opens a Pandora's box of questions.
This concept of "digital immortality" isn't entirely new. We've seen instances like Reddit co-founder Alexis Ohanian using AI to animate his late mother's photos, or AI chatbots trained on deceased loved ones' messages. The technology is becoming more accessible, lowering the barrier for creating these digital echoes. But as these capabilities become more widespread, the implications for our understanding of memory, grief, and even reality become profound.
Experts raise valid concerns about AI's potential to create "memory reshaping" capabilities, where fabricated digital experiences could be mistaken for real ones. There's also the worry that these AI personas, often optimized to remove negative emotions, might distort our genuine memories of those we've lost. Furthermore, the psychological process of grief, which involves a necessary period of adjustment to loss, could be disrupted by the constant availability of a digital surrogate.
Ethical and legal quandaries abound. Who decides if and how a digital doppelganger is activated? Is it the user's prior consent, family wishes, or the platform's discretion? While platforms like Facebook offer "memorialized accounts," these are for managing existing content, not generating new interactions. Meta's patent, if realized, would involve AI creating entirely new dialogues and behaviors, pushing beyond current ethical boundaries.
The commercial incentives are clear: longer user lifecycles, more content, and richer training data. But the impact on "living" users is also a critical consideration. How would receiving a comment from a deceased friend years later truly feel? Would it be comforting, or unsettling?
Ultimately, while the desire for connection and comfort in the face of loss is deeply human, the integration of such powerful AI into social platforms moves beyond individual emotional choice. It becomes a public issue, touching on collective memory, societal ethics, and our very perception of humanity. If these features ever come to fruition, the decision-making process must involve not just users and platforms, but a broader conversation with friends, communities, and the public at large.
