Fantasy and romance author Lena McDonald is under fire after readers discovered an AI-generated prompt left in the final version of her novel, Darkhollow Academy: Year 2. The incident has stirred a broader conversation about the use of artificial intelligence in self-publishing—and raised concerns about authenticity, ethics, and the future of marginalized authors in an increasingly saturated digital marketplace.
Readers Spot AI Prompt in Published Book
The controversy erupted when fans noticed a glaring error in the third chapter of McDonald’s reverse harem romance novel. A line that clearly belonged to a generative AI prompt had made it into the published edition:
“I’ve rewritten the passage to align more with J. Bree’s style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements.”
The passage was not only out of place—it openly admitted to mimicking the voice of bestselling author J. Bree, whose gritty, emotional style has become a benchmark in supernatural romance. The oversight quickly went viral as readers posted screenshots on Reddit, Goodreads, and other platforms. McDonald quietly updated the book on Amazon, removing the offending line, but the damage was done.
Online Backlash and Reader Reactions
Social media users were quick to express frustration and disappointment. Many called the error “embarrassing” and questioned the integrity of authors who rely on AI. One user wrote, “What is the point of writing books if you aren’t going to write them? Don’t people enjoy writing?” Others worried the use of AI could signal a wider trend, with another commenting, “I don’t think she’s the only author doing it… so many books lately have been changing author voice midway through.”
The Goodreads rating for Darkhollow Academy: Year 2 plummeted to 1.6 stars, reflecting a collective sense of betrayal from readers who expected genuine storytelling.
Lena McDonald Responds with Apology
Following the backlash, McDonald issued a public apology on her Amazon author page—though the statement has since disappeared. She confirmed the use of AI during the editing phase, citing time and financial constraints as a full-time teacher and mother:
“I used AI to help edit and shape parts of the book… My goal was always to entertain, not to mislead.”
McDonald emphasized that she never intended to deceive readers and took full responsibility for the inclusion of the prompt. She pledged to review the book, make necessary corrections, and be more transparent about her writing process going forward.
However, critics argued that her apology failed to address a more troubling aspect: the deliberate use of AI to replicate another author’s style. In their view, this crosses the line from editing assistance into unethical mimicry and undermines the creative authenticity readers expect from fiction.

A Broader Pattern: AI Prompts in Published Books
McDonald’s case isn’t isolated. Earlier this year, romance author K.C. Crowne was called out for a similar blunder. In her book Dark Obsession, readers found the following AI-generated suggestion embedded in the final text:
“Certainly! Here’s an enhanced version of your passage, making Elena more relatable and injecting additional humor while providing a brief, sexy description of Grigori.”
Crowne also addressed the issue publicly, maintaining that AI was used only for “minor edits”—an explanation McDonald would later echo. Nonetheless, the presence of AI-generated content in published books has left readers questioning how widespread the practice truly is.
Self-Publishing’s AI Problem
The rapid rise of generative AI tools like ChatGPT has made it easier than ever for authors to create and publish content at lightning speed. While some use AI responsibly for outlining or editing, the lack of industry oversight—especially on platforms like Amazon’s Kindle Direct Publishing (KDP)—has allowed more egregious uses to slip through.
In 2023, Amazon introduced a requirement for authors to disclose AI usage during book uploads, along with a publishing limit of three titles per day. However, these measures rely heavily on self-reporting and do little to curb the misuse of AI in self-publishing.
BookBub’s recent survey of 1,200 authors revealed that 45% are using generative AI in some capacity. While most report using it for research, a noticeable portion also use it for writing and editing.
Consequences for Marginalized Authors
One of the most concerning effects of AI proliferation in publishing is its impact on marginalized voices. For many LGBTQ and BIPOC authors, self-publishing has offered a pathway around the historically exclusive gates of traditional publishing. But with AI-generated books flooding the market, these authors may find it increasingly difficult to gain visibility.
If AI continues to crowd digital shelves with mass-produced content, discoverability will plummet for genuine authors—especially those who rely on the self-publishing ecosystem to tell stories from underrepresented perspectives.
A Question of Ethics and Authenticity
At its core, the AI-in-publishing debate raises important ethical questions: What constitutes authorship? Where do we draw the line between assistance and automation? And how do we ensure readers are getting honest, human-crafted stories?
Lena McDonald’s case is just one example in a growing trend that shows no signs of slowing down. As AI becomes more sophisticated, the publishing industry—particularly self-publishing—must grapple with how to adapt without sacrificing integrity, creativity, and fairness.