Grammarly's AI Stumbles: When 'Expert' Feedback Goes Rogue

It seems like just yesterday we were marveling at how AI could make our writing shine, offering suggestions that felt almost… human. Grammarly, a name synonymous with polishing prose, recently found itself in hot water over a feature that took this idea a step too far. Their "Expert Review" function, launched with the best intentions, ended up causing quite a stir.

Imagine this: you're working on an article about astrophysics, and Grammarly suggests feedback from a "renowned astrophysicist." Sounds helpful, right? The catch: these "experts" weren't real people Grammarly had consulted. They were AI-generated personas, cobbled together from publicly available information about actual writers, scientists, and bloggers. The system would pluck names – living or deceased – and present their supposed insights as if those people were offering personal advice on your text. It was like having a ghostwriter, but one that borrowed identities without permission.

This approach, as you might expect, didn't sit well with many. For living authors, it felt like a violation of their identity and intellectual property. Even for those who had passed on, the idea of their likeness being used to generate AI feedback without consent raised ethical questions. The company stated that these experts were "for reference only" and didn't imply endorsement, but that explanation felt like a flimsy shield against a rising tide of criticism.

Things escalated when a class-action lawsuit was filed against Superhuman, the company behind Grammarly, over this very feature. Initially, Grammarly's response was to offer authors a way to opt out. While that might have appeased some, it left many others unaware entirely – particularly authors not closely following AI news, and the deceased figures whose identities were used, who obviously could not opt out at all.

Now, the company has announced it's disabling the "Expert Review" feature and re-evaluating its design. The CEO, Shishir Mehrotra, shared that the aim was to help users discover influential perspectives and for experts to connect with fans. One can't help but imagine a certain Carl Sagan perhaps lamenting the missed opportunity for posthumous fan engagement, but the core issue remains: impersonation, even by AI, carries significant ethical weight.

This whole episode serves as a potent reminder. Grammarly, at its heart, is a tool designed to enhance clarity and correctness in our writing. Its core functions – spotting grammar errors, improving sentence structure, and refining tone – are invaluable. The free version offers basic suggestions, while Grammarly Pro goes deeper into clarity, vocabulary, and tone, and even offers paraphrasing tools powered by generative AI. These are the bread-and-butter uses that have made Grammarly a trusted companion for many writers. The "Expert Review" incident, however, shows the delicate balance between leveraging AI's capabilities and respecting individual rights and authenticity. It's a lesson learned, perhaps, in the ongoing evolution of human-AI collaboration.
