Grammarly's new AI-powered "expert review" feature, designed to provide writing feedback in the style of notable figures, has sparked a significant controversy by using the names and personas of living journalists and editors without their consent. This incident highlights the escalating ethical and legal challenges AI companies face regarding training data, personality rights, and user transparency as they race to deploy increasingly sophisticated generative features.
Key Takeaways
- Grammarly's "Expert Review" feature, launched in August 2025, offers AI-generated writing advice "inspired by" subject matter experts, including deceased professors and, controversially, living journalists.
- The Verge discovered the feature was presenting feedback attributed to its own staff, including Editor-in-Chief Nilay Patel and Editor-at-Large David Pierce, none of whom granted permission for their names to be used.
- The AI-generated comments mimic the supposed expert's style, creating a direct and unauthorized implication of endorsement or participation.
- This follows a similar pattern seen with other AI tools, raising urgent questions about consent, personality rights, and the sourcing of training data for these commercial features.
The Unauthorized "Expert" Controversy
Grammarly's Expert Review feature, launched in August 2025, is marketed as a tool that allows users to receive writing feedback "inspired by" a library of subject matter experts. As reported by Wired, this library includes figures like recently deceased professors. However, when tested by The Verge, the feature presented a more immediate ethical breach: it offered comments ostensibly from the publication's own high-profile editorial staff.
The AI-generated feedback included annotations and suggestions attributed to The Verge's Editor-in-Chief Nilay Patel, Editor-at-Large David Pierce, and senior editors Sean Hollister and Tom Warren. The feature's interface presents these comments with the expert's name and a small avatar, creating a strong visual and contextual implication that the feedback originates from or is endorsed by that individual. A Verge spokesperson confirmed that none of these journalists gave Grammarly permission to include them in the "expert reviews," making this a clear case of unauthorized use of their names and professional identities.
Industry Context & Analysis
This incident is not an isolated case but part of a troubling pattern in the aggressive deployment of generative AI features. It mirrors controversies faced by other industry leaders, where the line between training on public data and appropriating personal identity becomes dangerously blurred. Unlike OpenAI's approach with ChatGPT Voice, which involved licensing agreements with voice actors for the "Sky" persona, Grammarly appears to have bypassed any formal consent or licensing process for the living individuals in its "expert" library. This creates significant legal exposure under personality rights and publicity rights laws, which vary by state but generally protect against the unauthorized commercial use of an individual's name, likeness, or identity.
The technical issue is a significant one: these systems are typically trained on vast corpora of publicly available text, including articles, social media posts, and interviews. But even if that data is publicly accessible, using it to simulate a specific living person's style and feedback in a commercial product is a distinct and contentious application. It moves beyond analyzing language patterns to actively constructing a digital persona that can mislead users about endorsement and participation. This practice stands in stark contrast to the approach of companies like Hugging Face, whose data governance initiatives emphasize dataset transparency and provenance, allowing for greater scrutiny of training data sources.
Furthermore, this occurs in a highly competitive market. Grammarly, valued at over $13 billion in its last funding round, competes directly with AI writing assistants from tech giants like Microsoft (Copilot in Word), Google (Help Me Write in Docs), and startups like Jasper and Writer.com. In this race for differentiation, "expert" features are a potential selling point. However, this controversy reveals the reputational and legal risks of such strategies. For context, the AI writing assistant market is projected to grow significantly, but user trust remains a critical barrier; a 2024 survey by the Reuters Institute found that only 38% of people trust news mostly created by AI, highlighting the sensitivity around authentic versus synthetic authorship.
What This Means Going Forward
This controversy will force a reckoning for Grammarly and the broader AI-assisted writing industry. In the immediate term, Grammarly will likely need to remove unauthorized individuals from its expert library swiftly, and it may face legal challenges or demands for compensation from those whose identities were used. The company's response will be a key test of its commitment to ethical AI development and could influence user and investor confidence in a company that boasts over 100 million daily active users.
Going forward, the industry must develop clearer standards and practices. This likely means a shift toward explicit consent and licensing agreements for any AI feature that simulates a specific living person's style or feedback, similar to standard practice in traditional endorsements. We may also see increased regulatory scrutiny; the EU AI Act, for instance, imposes transparency requirements for AI systems interacting with humans, which could be interpreted to cover such synthetic "expert" interactions.
For users, this serves as a critical reminder to maintain a healthy skepticism toward AI-generated content and its attributions. The veneer of authority presented by a named "expert" can be entirely synthetic. The key trend to watch will be whether other companies preemptively audit their own features for similar issues or wait for their own "Grammarly moment." This incident underscores that in the AI era, an individual's digital identity and creative output have become a new frontier for commercial exploitation, demanding new protections and ethical frameworks.