Grammarly's new "Expert Review" feature is generating serious backlash from writers, journalists, and academics who say the AI tool is impersonating them without consent. The feature, which launched last year, uses AI to simulate feedback from real experts—including tech journalists from major publications and deceased scholars—based on their publicly available work.
The controversy centers on Grammarly's use of persona prompting, in which the AI generates feedback "inspired by" specific individuals' writing styles without their involvement or permission. Users select experts from a dynamically generated list, and the AI produces critiques mimicking how those individuals might respond to the user's writing.
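Persona prompting is a common pattern in LLM applications: the system instructs the model to respond "as" a named person before passing along the user's text. The sketch below is purely illustrative of that general pattern, not Grammarly's actual implementation; the function name, message format, and fields are assumptions modeled on typical chat-completion APIs.

```python
# Illustrative sketch of persona prompting (hypothetical; NOT Grammarly's code).
# It only assembles the messages a chat-completion API would receive --
# no model is called here.

def build_persona_prompt(expert_name: str, expert_bio: str, draft: str) -> list[dict]:
    """Build a system/user message pair asking a model to critique a draft
    'in the style of' a named expert -- the pattern at the heart of the dispute."""
    system_msg = (
        f"You are simulating writing feedback inspired by {expert_name}, "
        f"{expert_bio}. Critique the user's draft as that person might, "
        f"drawing on their publicly known style and views."
    )
    user_msg = f"Please review the following draft:\n\n{draft}"
    return [
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ]

messages = build_persona_prompt(
    "Jane Doe",                      # hypothetical expert name
    "a technology journalist",       # hypothetical one-line bio
    "AI tools are reshaping editing workflows.",
)
```

Note that nothing in this pattern requires the named person's knowledge or consent; the "expert" exists only as text in the system instruction, which is precisely what critics object to.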
Critics have labeled the practice "identity theft" and "necromrimination." Academics have been particularly vocal: historians accuse Grammarly of "resurrecting" deceased scholars by synthesizing their works into AI personas. Historian Dr. Verena Krebs and Vanessa Heggie, a professor specializing in the history of science, called the practice "obscene" and likened it to "creating little LLMs based on their scraped work."
Tech journalists from outlets including The Verge, Wired, and Bloomberg are reportedly included in the expert pool, though none were consulted and none consented to participate. Grammarly's disclaimer states that references to experts "do not indicate any affiliation with Grammarly or endorsement."
The feature remains available despite the criticism. Grammarly positions Expert Review as a way to get domain-specific feedback, but historians and journalists argue the name is misleading—"these are not expert reviews, because there are no 'experts' involved," according to historian C.E. Aubin.
This controversy highlights broader questions about AI ethics in writing tools, particularly around the use of personas derived from real people's work without consent or compensation.