At first, she didn’t think much of it; she reads and responds to writers daily as part of her job, receiving anywhere from 700 to 750 stories a month. But when another story, also titled “The Last Hope,” came in a couple of weeks later by a writer with a different name, Williams became suspicious. By the time yet another “The Last Hope” came a few days later, Williams knew immediately she had a problem on her hands.
Since that first submission, Williams has received more than 20 stories titled "The Last Hope," each from a different author and a different email address. Williams believes all of them were created with AI tools, along with hundreds of similar submissions that have overwhelmed small publishers in recent months.
In January, Asimov's Science Fiction received about 900 stories for consideration, and it is on track to receive 1,000 this month. Sheila Williams says nearly all of that growth has come from submissions that appear to be AI-generated, and she has read so many that she can now often tell within the first few words whether something might not have been written by a human.
Beyond the repeated titles, certain character names tend to show up again and again, Williams says. Sometimes the name on the manuscript doesn't match the one entered in the online submission form. In the optional cover letters, some authors include instructions for transferring payment for a story that hasn't even been accepted.
At the same time, Asimov's Science Fiction keeps receiving stories with dozens of similar titles: "The Last Echo," "The Last Message," "The Last Day of Autumn," and "The Last Voyager."
Williams and her team have learned to recognize AI-generated work, but the flow of submissions is still dispiriting. Outlets like Asimov's are getting overwhelmed by AI chum, which eats up editors' and readers' time and can crowd out genuine submissions from newer writers. And the problem could get worse, as the wider availability of writing bots fuels a new genre of get-rich-quick schemes.
“I just basically go through them as quickly as I can,” Williams says of the pieces she suspects are AI-generated. “It takes the same amount of time to download a submission, open it, and look at it. And I’d rather be spending that time on the legitimate submissions.”
For some editors, the influx of AI-generated work has forced them to stop accepting new submissions altogether.
Last week, the popular science fiction magazine Clarkesworld announced it would temporarily close submissions due to a flood of AI-generated work. In an earlier blog post, editor Neil Clarke noted that the magazine had been forced to ban a skyrocketing number of authors for submitting stories generated with automated tools. According to Clarke, in February alone Clarkesworld received 700 human-written submissions and 500 machine-generated ones.
Clarke believes the spam comes from people chasing quick money who found Clarkesworld and other publications online. One website, for example, is loaded with SEO-bait articles built around marketing, writing, and business keywords, and promises to help readers make money fast. An article on the site lists almost two dozen literary magazines and websites, including Clarkesworld and Asimov's, as well as larger outlets such as the BBC, along with their pay rates and submission details. The article encourages readers to use AI tools and contains affiliate marketing links to Jasper, an AI writing tool.
Most publications pay a modest per-word rate, around 8 to 10 cents, while others pay flat fees of up to several hundred dollars for accepted pieces. In his blog, Clarke wrote that a "high percentage of fraudulent submissions" were coming from certain regions but declined to name them, concerned that doing so could unfairly paint writers from those countries as scammers.
In some cases, Clarke corresponded with people who had been banned for submitting AI-generated work, and they said they simply needed the money.
Clarke, who built the submission system his magazine uses, described the AI story spammers’ efforts as “inelegant” — by comparing notes with other editors, Clarke was able to see that the same work was being submitted from the same IP address to multiple publications just minutes apart, often in the order that magazines appear on the lists.
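The pattern Clarke describes could, in principle, be flagged automatically once editors pool their logs. A minimal sketch of that idea follows; the field names (`ip`, `story_hash`, `venue`, `timestamp`) and the ten-minute window are illustrative assumptions, not details of any real submission system.

```python
# Hypothetical sketch: flag the pattern Clarke describes, i.e. the same
# story submitted from one IP address to several magazines within minutes.
# All field names and the time window are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

def flag_cross_submissions(submissions, window_minutes=10):
    """Group pooled submission logs by (IP, story fingerprint) and return
    the groups that reached multiple venues inside the time window."""
    groups = defaultdict(list)
    for sub in submissions:
        # A content hash stands in for whatever dedup key editors might share.
        groups[(sub["ip"], sub["story_hash"])].append(sub)

    window = timedelta(minutes=window_minutes)
    flagged = []
    for key, subs in groups.items():
        subs.sort(key=lambda s: s["timestamp"])
        venues = {s["venue"] for s in subs}
        span = subs[-1]["timestamp"] - subs[0]["timestamp"]
        if len(venues) > 1 and span <= window:
            flagged.append(key)
    return flagged

# Example: one story hits two magazines three minutes apart.
logs = [
    {"ip": "203.0.113.7", "story_hash": "abc", "venue": "Clarkesworld",
     "timestamp": datetime(2023, 2, 1, 9, 0)},
    {"ip": "203.0.113.7", "story_hash": "abc", "venue": "Asimov's",
     "timestamp": datetime(2023, 2, 1, 9, 3)},
]
print(flag_cross_submissions(logs))  # [('203.0.113.7', 'abc')]
```

Grouping on a content fingerprint rather than the author's name matters here, since the spam Clarke saw arrived under many different names and email addresses.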
“If this were people from inside the [science fiction and fantasy] community, they would know it wouldn’t work. It would be immediately obvious to them that they couldn’t do this and expect it to work,” says Clarke.
The problem extends beyond science fiction and fantasy. Flash Fiction Online (FFO) accepts work in a range of genres, including horror and fantasy. On February 14, the publication added a notice to its submission form:
“We are committed to publishing stories written and edited by humans. We reserve the right to reject any submission that we suspect to be primarily generated or created by language modeling software, ChatGPT, chat bots, or any other AI apps, bots, or software.”
The updated terms were added around the time FFO received more than 30 submissions from a single source within a few days, says Anna Yeatts. Each story contained clichés that Yeatts had come to associate with AI-generated work, and each had a unique cover letter, structured and written unlike those the publication usually sees. But Yeatts and her colleagues had suspected since January that some of the work being sent to them was created with AI tools.
Yeatts began experimenting with ChatGPT in December, prompting the tool to write stories in particular genres. The system could reproduce technical elements, including defining the main characters, establishing a setting, and introducing a conflict, but it couldn't create any real "deep point of view": the endings were too neat and tidy, and the emotion often tipped into melodrama. Everyone has "piercing green eyes," and stories frequently open with characters sitting down. Yeatts estimates that of the roughly thousand submissions FFO has received this year, about 5 percent were probably AI-generated.
In the past, FFO has published mainstream work with a more traditional writing style, accessible to a broad readership. According to Yeatts, stories created with AI tools can clear those basic bars.
“It does have all the parts of the story that you try to look for. It has a beginning, middle, and end. It has a resolution, characters. The grammar is good,” says Yeatts.
The FFO team is now training its first readers to look for certain story elements when they initially screen submissions.
Yeatts worries that a growing wave of AI-generated work could literally displace real people's work. The outlet uses Submittable, a popular submission service, and FFO's plan includes a monthly cap on submissions, after which the portal closes. If hundreds of people send in ineligible AI-generated work, human authors could be locked out of submitting their stories.
Others in the community are watching the problem inundate publishers and weighing responses before it spreads further. Matthew Kressel, a science fiction writer and creator of Moksha, an online submission system used by dozens of publications, says he's started hearing from clients who have received spammy submissions that appear to be written using AI tools.
Kressel says Moksha wants to remain "agnostic" about the value of material created with chatbots. According to Kressel, publishers can already add a checkbox asking authors to confirm that their work doesn't use AI systems, and he is considering an option that would let publications block or partially limit submissions made with AI tools.