In a development that has sent ripples across the media industry, The New York Times has severed ties with freelance journalist and author Alex Preston after it was revealed that he used artificial intelligence tools to produce a book review. The controversy emerged when a reader identified notable similarities between Preston’s article and a previously published review in The Guardian.
The incident has sparked a broader conversation about the ethical use of AI in journalism, particularly in forms of writing that rely heavily on individual voice and critical interpretation.
Discovery of Similarities
The issue first came to light when a reader compared Preston’s review in The New York Times with another review that had appeared earlier in The Guardian. The similarities were not limited to thematic overlap but extended to phrasing, structure, and argumentation, raising questions about originality.
Following the reader’s observation, editors at The New York Times initiated a review of the piece. It was during this process that Preston acknowledged using AI tools in drafting the article. While AI assistance in journalism is not universally prohibited, its undisclosed use—especially in a published review—was deemed a violation of editorial standards.

Immediate Editorial Response
The New York Times responded swiftly, confirming that it would no longer work with Preston. The publication emphasized that its contributors are required to maintain strict standards of originality and transparency. Any use of external tools, including AI, must not compromise the integrity of the work or mislead readers about its authorship.
In a public statement, Preston admitted fault, saying he had “made a serious mistake.” He expressed regret over his actions and acknowledged that his decision to rely on AI had undermined the trust placed in him as a contributor.
The decisive response from the newspaper underscores the seriousness with which established media institutions are approaching the integration of AI into their workflows.
AI and the Nature of Criticism
The controversy is particularly significant because it involves a book review—a genre that depends on subjective analysis, literary sensitivity, and the reviewer’s unique perspective. Unlike straightforward reporting, reviews are expected to reflect the critic’s personal engagement with a text, making authenticity central to their value.
The use of AI in such a context raises difficult questions. Can a machine-assisted review truly represent an individual’s critical voice? And if AI-generated text inadvertently mirrors existing work, where does one draw the line between coincidence and plagiarism?
Experts note that generative AI systems are trained on vast amounts of written material, which can sometimes lead to outputs that resemble existing texts. While this is not always intentional, it creates risks for writers who rely on such tools without thorough verification and rewriting.
Industry-Wide Implications
This incident comes at a time when news organizations around the world are grappling with how to regulate the use of artificial intelligence. Many publications have begun to draft or implement guidelines that permit limited AI use for tasks such as research, transcription, or data analysis, while strictly prohibiting its use in generating publishable content without disclosure.
The case involving Preston may accelerate these efforts, prompting stricter enforcement and clearer policies. It also highlights the importance of transparency—not only in sourcing information but also in the tools used to produce it.
For freelance writers, who often operate without the same level of oversight as staff journalists, the episode serves as a reminder of the professional risks associated with cutting corners. The ease and speed offered by AI tools can be tempting, but they come with ethical and reputational costs if misused.
The Role of Readers in Accountability
One notable aspect of the controversy is that it was a reader—not an editor or an internal system—who first identified the similarities between the two reviews. This underscores the continuing importance of audience vigilance in the digital age.
As journalism becomes more accessible and widely distributed online, readers play an increasingly active role in scrutinizing content. Their ability to cross-reference and question published material adds another layer of accountability for media organizations.

A Turning Point for AI in Media
The fallout from this incident is likely to influence how both journalists and publishers approach AI moving forward. While the technology offers undeniable advantages in efficiency and scale, its use must be carefully balanced against the core principles of journalism: accuracy, originality, and trust.
For The New York Times, the decision to part ways with Preston sends a clear message about its priorities. For the wider industry, it highlights the urgent need to establish norms and safeguards as AI becomes more deeply embedded in the writing process.
As the boundaries between human and machine-generated content continue to blur, maintaining clarity about authorship will be essential. In the end, the value of journalism lies not just in the information it conveys, but in the credibility of the voices behind it—a standard that remains as important as ever in an age of technological transformation.