ChatGPT is listed as the author or co-author of at least 200 books in Amazon's Kindle Store, Reuters reported. The actual number of bot-written books on the platform is likely much higher, since Amazon's policies do not explicitly require authors to disclose their use of artificial intelligence.
People are building writing careers on neural networks, said seller Brett Schickler, who published a children's book on Kindle. His work, a 30-page tale called The Wise Little Squirrel: A Tale of Saving and Investing, was both written and illustrated by AI.
The digital version of the book costs $2.99 and the print version $9.99. Since the tale was published in January, Schickler said, it has earned him less than $100, though he spent only a matter of hours creating it.
Also in the Kindle store are the AI-written children's tale The Power of Homework, the poetry collection Echoes of the Universe and the sci-fi epic about the interstellar brothel Galactic Pimp: Vol. 1.
Such books will flood the market and put many writers out of work, said Mary Rasenberger, executive director of the Authors Guild. She believes the process of creating works must be transparent; otherwise, the market will be swamped with low-quality books.
Meanwhile, the sci-fi magazine Clarkesworld has temporarily suspended submissions after receiving a flood of short stories written with AI. Neil Clarke, the magazine's editor, noted that such works follow "some very obvious patterns." He added that these submissions are growing at a rate that will force the magazine to change how it operates. The technology will only improve, Clarke stressed, so spotting the telltale irregularities will become harder.
Clarkesworld has begun banning AI-written submissions; in February alone, the magazine blocked more than 500 users for submitting content suspected of being generated by neural networks. The publication pays $0.12 per word.
In addition, ChatGPT, Microsoft Bing AI and Google Bard are prone to producing misinformation, and they are trained on human-created content without the authors' knowledge.
Last year, the tech publication CNET began using its own AI model to write at least 73 financial explainer articles. Although the AI's authorship was disclosed in the byline, the articles contained numerous factual errors and wording nearly identical to other published content. CNET has since suspended use of the tool.