
Update (Nov. 18, 2024): HarperCollins has announced an AI licensing deal for nonfiction backlist titles; authors can opt in or opt out and are reportedly being paid $2,500. Learn more.
The train has left the station, the ship has sailed, pick your preferred metaphor.
In July 2024, the Copyright Clearance Center (CCC) announced the ability for publishers and other rights holders to include AI training rights as part of licensing arrangements. An article in Publishers Weekly notes this licensing would be limited to internal use for licensees, which means it would not extend to public-facing models such as ChatGPT. However, it’s easy to see how such licensing could eventually turn into something more extensive.
What is the CCC? It is a for-profit company that manages collective copyright licensing for corporate and academic publishers. Generally, its mission is to help publishers earn money off copyright and expand copyright protections for rights holders.
In the CCC’s announcement, the president of the Association of American Publishers says, “Voluntary licensing solutions are a win-win for everybody in the value chain, including AI developers who want to do the right thing. I am grateful to organizations like CCC, as they are helping the next generation marketplace to evolve robustly and in forward-thinking fashion.”
A handful of book publishers have already struck deals with the AI companies directly.
Wiley, a major academic publisher that is also known for the Dummies series, announced two deals in June, to the tune of $44 million. Many major media organizations, like News Corp and The Atlantic, have also struck deals. (Here is a running list. And here is another.)
I think it’s fair to say that, before long, every major publisher will be earning money through AI training, whether it’s through the CCC, another collective licensing agency, or directly with tech companies, if they are big or desirable enough (as Wiley is).
How do writers protect themselves?
I’m asked this question a lot, and often I say things like “Join the Authors Guild,” since they’re deeply involved in the issue of compensation for authors and advocate for their rights.
But increasingly, I’m also pushing back on the question: What do you need protecting from? While the AI companies will always carry the original sin of training on copyrighted work without permission or licensing, they’re now going through appropriate channels to obtain training material. Yes, there are lawsuits underway (by the New York Times and the Authors Guild, among others) that have to play out and may settle out of court. But even if the rights holders win in the end, the models will not shut down. The AI companies will not go out of business. Instead, remedies will be found for rights holders and business will continue as usual.
Recently, Mary Rasenberger at the Authors Guild told Publishers Marketplace (sub required) that they see AI licensing as a good source of income for writers down the road and that they’ve been talking to publishers for months about who owns AI training rights and how to work the revenue splits. She said, “I am completely optimistic there will be joint agreements between publishers and authors on this. It is not the hardest problem in the world.” Fortunately, she says publishers so far agree they need permission from authors to license books for AI.
Theoretically, authors could object and withhold their material from training, but that would be turning down free money. The average author’s concerns about AI training or ingestion often betray a misunderstanding about what today’s large language models are intended to do. They are not databases where you retrieve information. They are not machines that intend to steal, plagiarize, or regurgitate. (If and when they do, the developers consider that a flaw to be worked out.) Benedict Evans has expressed this eloquently: “OpenAI hasn’t ‘pirated’ your book or your story in the sense that we normally use that word, and it isn’t handing it out for free. Indeed, it doesn’t need that one novel in particular at all. In Tim O’Reilly’s great phrase, data isn’t oil; data is sand. It’s only valuable in the aggregate of billions, and your novel or song or article is just one grain of dust in the Great Pyramid.”
That said, authors might certainly object to the AI companies themselves, how they are run, the ethics of the people behind them, the future implications of AI use, etc—and avoid involvement for that reason. But refusing to engage at all with the technology may end up penalizing yourself more than them—not because there’s going to be some incredible revolution (I don’t buy into most of the hype surrounding AI), but because you’ll end up working harder or spending more money than everyone else who is using these tools. The technology is destined to be integrated into daily life, for better and for worse.
Authors and publishers are using AI to write and publish—today.
And it plays a role at all stages of the writing and publishing process that many professionals would find acceptable and ethical. While it may be unethical for someone to use AI to generate 5,000 spammy reviews, in other cases people prefer AI content, like when it’s used to improve summaries of scientific articles.
Publishers are beginning to differentiate between two types of AI use in the writing and publishing process. During a Book Industry Study Group panel about AI use, Gregory M. Britton, editorial director at Johns Hopkins University Press, discussed these two types. One is content creation, which publishers have legal concerns about; the other is content management, or editorial tools, which JHU encourages. “I think it would be foolish for an author to submit a manuscript without running spell check on it before they turn it in,” he said, and he sees AI editing tools as analogous.
One of JHU’s authors, José Antonio Bowen, used AI to find all the places where he may have been repetitive in the manuscript, and he also used AI successfully to help him with fact-checking and citations. He disclosed all of this use to his editors. Some may be surprised that AI can find factual errors in a manuscript, given the problematic results it can generate, but much depends on the tool, the user, and the prompt. Which brings us to the next important point.
Authors are responsible for the quality and correctness of their work, whether they use AI or not. Even if the use of AI in content creation blurs the lines of intellectual property and originality, authors remain accountable for the quality of their work. That means you can’t blame the AI for getting something wrong; you remain responsible for vetting what the AI does.
Even those who question the ethics of generative AI believe that writers and students today should (or must) learn to use it. “What faculty and teachers call cheating, business calls progress,” Bowen said during the panel. “If you say you can’t use a tool or refuse to use it, your colleagues who use the tool will complete their work faster and better.” In other words, AI is raising the average. However, Bowen said, “AI is better than 80 percent of humans at a lot of things, but it’s not better than the experts. … The best writers, the best experts are better than AI.”
AI is being used to fuel translated works.
Machine translation has been around for a long time, but advances in generative AI are leading to a new renaissance in book translation. Once again, a Book Industry Study Group panel examined how AI is being used right now to translate and to assist human translators; panelists included Robert Casten Carlberg, the CEO and co-founder of Nuanxed, a translation agency.
Because AI-assisted translation is dramatically cheaper and faster, it has the potential to grow the market for translations and lead to new jobs in the management of translations. Founded in 2021, translation firm Nuanxed works mostly on translating commercial fiction between European languages, using a hybrid process that includes AI tools before, during, and after translation. They pass savings on to publishers while still paying a good market rate to human translators. Carlberg said, “Most publishers we start working with are very skeptical to the way we are working but realize once they’ve tried it, the quality is good, and the readers really like it.” And the authors also like it, he added.
Carlberg’s firm is growing fast, and he’s hearing from more translators who want to work with Nuanxed. He says their big value add is that they pass every translation through the appropriate “cultural lens” and make sure the work is coherent throughout.
Yes, there are still problems and valid fears.
Some writers fear that AI use will pollute the market (as it’s doing now) and lead to various types of AI fraud—the kind of thing that happened to me. Some form of this fraud has existed for as long as Amazon KDP or digital publishing has existed, only it’s more prevalent now and easy to execute with AI tools. I sometimes get upset about the pollution as well and what it might mean for writers and publishers over the long term. But I’m hoping we’ll also gain methods of filtering the garbage just as we have in the past.
The other concern is that AI-generated work will be less creative and interesting in the long run, since it tends to generate what’s rather average or what’s already dominant in the culture. For example, a recent study showed that AI could boost creativity individually, but it lowers creativity collectively. (A friend of mine who reads a lot of genre fiction that’s heavily AI-assisted or AI-generated said she’s read five novels recently all featuring a main character named “Jaxon.”) That’s what AI does: revert to the mean, or to what’s most predictable. I expect more progress and more tools that modify these predictable outcomes when they’re not desirable for the user or the output.
I’ll close with the words of The Atlantic’s CEO Nicholas Thompson:
AI is this rainstorm, or it’s this hurricane, and it’s coming towards our industry, right? It’s tempting to just go out and be like, “Oh my God, there’s a hurricane that’s coming,” and I’m angry about that. But what you really want to do is, it’s a rainstorm, you want to put on a raincoat and put on an umbrella. If you’re a farmer, you want to figure out what new crops to plant. You want to prepare and deal with it.
And so my job is to try to separate the fear of what might happen and work as hard as I can for the best possible outcome, knowing that because I have done a deal with an AI company, people will be angry because AI could be a very bad thing, and so there’s this association. But regardless, I have to try to do what is best for The Atlantic and for the industry.

Jane Friedman has spent her entire career working in the publishing industry, with a focus on business reporting and author education. Established in 2015, her newsletter The Bottom Line provides nuanced market intelligence to thousands of authors and industry professionals; in 2023, she was named Publishing Commentator of the Year by Digital Book World.
Jane is regularly featured in major media outlets such as The New York Times, The Atlantic, NPR, The Today Show, Wired, The Guardian, Fox News, and BBC. Her book, The Business of Being a Writer, Second Edition (The University of Chicago Press), is used as a classroom text by many writing and publishing degree programs. She reaches thousands through speaking engagements and workshops at diverse venues worldwide, including NYU’s Advanced Publishing Institute, Frankfurt Book Fair, and numerous MFA programs.




Jane:
Excellent summation of current events regarding AI and writing/publishing. Many writers are nervous about AI, and I am one of them. But many are willing to incorporate AI as a tool to do better work, and I am one of them too.
Two questions:
1. Do you have a recommendation for translation services specifically for authors? I clicked on Nuanxed, but are there others that might help us poor writers out?
2. Have you considered the impact of AI on voiceovers and audiobook recordings? I may have missed your comments on that.
Great article!
=rds
Hi Ronald: Right now, AI narration is being used selectively by publishers for backlist, academic/scholarly titles, some nonfiction, and other instances where the economics do not work for human narration. Self-publishing authors are also using it for all types of work, especially romance. If you’re interested in synthetic (AI) audio narration, I believe you can find options via Google, Apple, and Amazon KDP (Amazon only by invite at the moment or the last time I checked). You could also look at DeepZen, which has a partnership with Ingram. I don’t think AI narration will replace human narration for big traditional publishing books, but it will certainly become prevalent for smaller publishers and smaller titles, especially in the nonfiction realm.
I haven’t seen AI translation services targeted to authors yet, unless you consider working through LeanPub. The founder of that company, Len Epp, was on the panel with the Nuanxed founder if you’re interested in hearing more about the quality of that option. https://leanpub.com/global_author/buy
Jane:
When publishers use AI to assist human translators (who receive a fair wage), do they simply list the translator’s name on the copyright page? Or do they have to say that they used AI?
This is the credit I’ve seen on books that Nuanxed handles:
Translators: Nuanxed / Nicola West
I’m surprised nobody has commented on the last sentence of the caption under the AI image. I took that exact prompt and entered it in Photoshop, and the first of three selections was of two non-white people, the second image was of two bearded white men in suits, and the third was more cartoonish and looked like two Asian men. Make of it what you will. I thought the comment was unnecessary to the discussion of the use of AI.
The point I’m trying to make: Public-facing AI models generally deliver what’s dominant in culture and society. What they deliver will depend on how the model is trained, what it is trained on, and of course the user prompt. Photoshop has obviously amplified the “diversity” factor in its model if it’s giving you diverse people. ChatGPT, which I used, is not.
This is quite relevant to discussions about AI training and its effect on creative work.
Thanks for this piece that lays out the issues clearly. One point I am confused about. If I publish a book, article or scholarly work, my understanding is that I am the copyright holder unless it was specifically work made for hire. The rights to uses beyond ordinary publication are mine to sell, not my publisher’s. I’m sure no contract before, say, 5 minutes ago, mentioned anything about AI training use rights one way or another, which to me suggests they are retained by the creator, not the publisher. It is, I suppose, better that AI companies are negotiating and paying for this content instead of just stealing it outright as they’ve been doing, but if they are negotiating with the wrong party, they are still in receipt of stolen goods.
While Wiley has not said anything specific about their deal, my theory right now is they struck a deal only for books where they hold the copyright or the contract is written in such a way that grants them leeway to do this. The Dummies series, I believe, is done as work for hire. And academic/scholarly publishers are known for hanging on to copyright as well. So my guess is they didn’t have to seek permission from authors to strike their deal because no authors’ rights were involved. If they had been, I assume we would have heard a lot of noise from either the Authors Guild or the AALA (agent association). Or the authors themselves.
As the article indicates, the Guild and agents are proceeding as if these rights reside with the author, but in conversations I’ve seen/heard, I don’t hear a lot of certainty on this, just assumptions.
Jane, this is intriguing. Thanks for helping calm our fears. Here’s an AI topic I haven’t seen addressed anywhere: Will writers seeking traditional publishing be able to use AI to produce irresistible query letters to send agents? What will that do to workshops and experts who teach us to do it ourselves? And, how will agents deal with a flood of perfect queries to find worthwhile manuscripts?
Yes, writers are now using AI to produce queries and synopses, at least first drafts. I haven’t seen AI produce a pitch-perfect query, but it can provide something to work from. So far, I don’t foresee expert advice and guidance going away, because experts remain better than AI at doing these tasks and at addressing questions, outlying cases, differences in pitching different types of work, etc.
Agents/publishers are already flooded with bad/mediocre queries. I don’t think they’ll get more queries than they receive now; the quality will just be better, and I expect that will be welcome.
Thanks so much for this helpful article on the use of AI in the publishing industry. I look at it as a tool for some tasks, but it doesn’t replace the creative writing needed for good stories. I have a job as a writer on contract for a web marketing firm where I write articles for attorney websites. AI can help with those articles, though I still must heavily edit them. I’m finding it’s easier and more enjoyable sometimes to just write the article myself. But I agree with you that AI isn’t going away, so we’ll need to deal with many issues as writers.
On the one hand, voluntary licensing is a great step forward. On the other hand, it does nothing to correct the historic wrong of using copyrighted material without consent. It shouldn’t be an excuse for the techbros to put it behind them like nothing happened. As the Scarlett Johansson voice debacle demonstrates, they are still a bunch of entitled assholes who think they have a right to do whatever they want without facing any consequences. Give them an inch today and they’ll make you pay for a mile tomorrow.
I don’t buy the “one grain of sand” excuse because it’s not single grains, it’s thousands or millions of grains, and if you take enough, the pyramid still falls down. And that pyramid is copyright, the edifice which sustains our industry, and which the techbros would tear down in a heartbeat to win that next round of VC funding or sell their startup to a bigger fish.
It’s also an argument which serves the interests of powerful financial interests who see individual creatives as nothing but production units. Grains of sand is all we are to the pharaohs.
As for that image, as Jane points out, it says so much about AI. Not only does the central character bizarrely have one woman’s shoe, and the woman next to him only one distorted foot, they’re exactly the self-satisfied spawn of Gordon Gekko, corporate jerks who have never read a book since high school that wasn’t about self-enrichment or self-grooming. I’d bet that those shelves are like the Great Gatsby’s library, full of unbent spines and uncut leaves.
You might be interested in Cory Doctorow’s take on all this. He argues that stronger copyright laws play into the hands of big corporations rather than protecting creators. Here’s a summary from one of his recent talks about AI and copyright: https://mailchi.mp/hotsheetpub/rwa-bankruptcy#mctoc4
Two questions: Do these agreements with publishers cover both fiction and non-fiction?
What about independent publishers? I publish my own books on Amazon–both fiction and non. Are the AI companies interested in such books?
The companies that have struck deals for AI training so far publish nonfiction. Currently, AI companies are primarily pursuing deals with large corporations, not individuals or small players. If individual authors are able to sell training rights in the future, it would likely be done through collective licensing agencies such as the CCC.
Perhaps a little late to the party, but thank you for this great article. I’m wondering if you have any advice for querying writers navigating the AI question on submission forms. Some agents on QueryTracker ask for a simple yes/no, and even when they don’t, I’m sure the topic will come up eventually.
I want to learn the tool and stay competitive, but I also don’t want to jeopardize my chances. Does checking “yes” risk an auto-reject? Does checking “no” imply I’ve never used AI in any form, even when it’s built into search engines and spellcheckers? And where is the line, and how could an author even prove they haven’t crossed it? I tried running my pages through an AI detector the other day and it flagged my original work as AI-generated simply because the language was “too sophisticated” lol
Hi Demi: Checking “yes” at QueryTracker on the AI question can lead to an auto-reject. It depends on what rules the agent has set up for the form, but I know that some will auto-reject for sure. However, I would interpret this AI question as meaning “Did you use AI to write any part of this work?” That’s the biggest issue for them.
If you used AI to assist you with research or used an editing tool that has AI features, like Grammarly, that’s of less concern, and I wouldn’t find it necessary to disclose that kind of use at query stage. But later on, I’d be honest about any tools you used or will use. Likely they will ask you as well.
Thank you so much for taking the time to answer. Incredibly helpful!