
If you primarily learn about generative AI and large language models from social media, or from the opinions of average people, then it’s likely you are misinformed about it. The majority of what I encounter on a daily basis in online forums is outright false. Even agents and editors get basic things wrong about the law surrounding AI.
This FAQ is drawn from the reporting and analysis I’ve written and published over the last couple years. It is intended as a judgment-free zone: no opinions, no advocacy, just the clearest available statement of what the law and courts currently say about AI, primarily in the United States.
It covers these topic areas:
- copyrighting AI-assisted work
- AI training and fair use
- AI licensing
- AI and traditional book publishers
- disclosure of AI use & AI detection
- proving or certifying human authorship
I will keep this guide updated as new cases, rulings, and policies come to light. If you have a question about the facts or the law surrounding AI, please leave a comment. Comments about AI and its ethicality or morality will be deleted; that is not the purpose of this resource.
The usual disclaimer: This article does not constitute legal advice. Again, I must emphasize that laws and court rulings in this area are evolving rapidly.
Copyright for AI-assisted and AI-generated works
Can I copyright a book or other written work if I used AI to help create it?
The short answer: it depends on how much AI was involved and what role you played in manipulating the output. The US Copyright Office (USCO) confirmed in its Part 2 report on AI that existing copyright law is adequate to handle AI-assisted works, and that no new legislation is needed. The USCO states: “Whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.”
Human authors are entitled to copyright for:
- The creative selection, coordination, or arrangement of AI-generated material in a work
- Creative modifications they make to AI outputs
- Any expression in the work that is genuinely their own
Wholly AI-generated text without modification is not copyrightable. If you use AI to write entire paragraphs or sections, you cannot claim copyright over those specific passages.
Side note: There are no universally agreed-upon labels for AI-assisted versus AI-generated work. My colleague Dave Malone discusses what such labels could look like, but be aware this remains an area of debate; in the writing world, few people agree on what “AI generated” and “AI assisted” even mean.
Example: Elisa Shupe, a retired US Army veteran, wrote and self-published a novel with extensive assistance from ChatGPT. In 2024, the US Copyright Office granted her limited copyright protection—but not for the full text of the work. The Copyright Office considered Shupe the author of the “selection, coordination, and arrangement of text generated by artificial intelligence.” That means:
- No one can copy the book as a whole without permission
- Specific AI-generated sentences or paragraphs are not individually protected under copyright
This decision is consistent with prior Copyright Office positions on AI-assisted works. Learn more.
Is prompting an AI enough to claim copyright over its output?
No. Both the US Copyright Office and the Authors Guild have stated that prompting alone—even with complex or multiple prompts—is not sufficient to claim copyright protection over AI-generated output. The USCO’s position is that prompting does not constitute authorship in the legal sense.
What if I use AI only for brainstorming, research, outlining, or editing—does that affect my copyright?
No. Using AI to research, brainstorm, generate outlines, or edit your own work does not affect your copyright, and does not need to be disclosed or disclaimed in any legal sense. This includes using common tools with AI-powered features, such as Grammarly and Microsoft Word. Likewise, asking AI to revise your own work does not affect the underlying copyright. While any expression the AI adds in the process of editing would not itself be copyrightable, the copyright in the original human-written work remains intact.
Can I copyright an AI-generated translation of my book?
Under current US law, a wholly AI-generated translation is not copyrightable. However, the underlying original book remains protected, which means it would be copyright infringement for anyone else to copy or publish an AI translation of your work without permission—but you yourself cannot claim copyright in the AI-generated translation text.
The legal situation differs in other countries:
- In the UK, the Copyright, Designs and Patents Act 1988 does recognize copyright in purely computer-generated works. The author is considered to be the person who made the arrangements necessary for the creation of the work.
- German law does not currently have a specific statute granting copyright to fully AI-generated works.
Amazon’s KDP now offers an AI translation service (Kindle Translate, in beta). Authors who use it should be aware that the resulting translation will not be protected under US copyright law. The Authors Guild has also warned that authors and publishers should be on the lookout for pirate translations as AI translation tools become more widely available.
Can I use AI-generated or AI-assisted artwork on my cover?
Yes. Many published books, from both traditional publishers and self-publishers, carry covers that are wholly AI-generated or AI-assisted. The same copyright principles apply to art as to text, meaning a wholly AI-generated cover is not protected under copyright law.
Most major design software today (e.g., Adobe Photoshop and Canva) incorporates AI tools. Their terms and conditions may shield you from legal liability should a lawsuit arise from your specific use of their AI features.
Be aware that some book prizes/awards will consider your work ineligible for entry if you use AI in any way, including the cover. Some readers and online communities will also exclude your work from consideration if there is AI use, known or detected.
Will these copyright questions ever be fully resolved?
Not quickly. Publishing consultant Bill Rosenblatt has noted that because copyright cases involving AI will be decided on a case-by-case basis, “We’re in for a good solid decade of uncertainty, at a minimum.” The US Copyright Office has recommended that Congress not pursue legislation in this area, leaving it to courts to decide. That means a body of case law will build up over time—similar to fair use, which is notoriously fuzzy.
AI training on copyrighted works & fair use
Is it legal for AI companies to train their models on copyrighted books without permission?
This is the central legal question in many ongoing lawsuits, and courts have begun to weigh in—but the issue is far from settled. Two US federal court rulings in 2025 represent the first decisions specifically on this question. In both cases (involving Anthropic and Meta), judges found in favor of the AI companies on training, but for different reasons and with important differences.
The current short answer: Training on legally acquired books has been found to be fair use in at least one US federal court. Other courts may rule differently.
What exactly happened in Bartz v. Anthropic?
Authors sued Anthropic, the maker of the AI chatbot Claude, for training on their books without permission. In June 2025, Judge Alsup of the Northern District of California issued a split ruling:
- Training on copyrighted books: ruled fair use. The judge found this use “spectacularly” transformative. He also found it fair use for Anthropic to digitize print books it had legally purchased, treating that as a format change.
- Retaining a library of pirated books: ruled copyright infringement. The judge found that for Anthropic to knowingly download and copy pirated books for its library (from which it made subset copies to train its AI) was not fair use.
The Authors Guild has full commentary on the decision and disagrees with aspects of it.
The case did not go to trial. Instead, the parties settled out of court for $1.5 billion. That settlement:
- Does not create any ongoing licensing scheme for future AI training
- Does not cover claims based on AI outputs that might infringe copyright
- Does not affect Anthropic’s ability to train on lawfully acquired materials going forward
The settlement money goes to rights holders, which includes publishers in many cases, not just authors.
What happened in the Meta case (Kadrey v. Meta)?
In Kadrey v. Meta (also Northern District of California), the judge ruled for Meta—but not because he agreed with Meta’s fair use arguments. As the Authors Guild explained, the ruling was “only on technical grounds, a matter of procedure, not on the merits of the law.”
The plaintiffs failed to provide sufficient evidence of market harm, so the judge could not rule in their favor even though he found Meta’s fair use arguments unpersuasive. Notably, the Meta judge:
- Rejected the Anthropic judge’s analogy between AI learning and human learning
- Acknowledged the potential for AI to be “really devastating to the writing profession”
- Explicitly invited someone to bring a better-evidenced case in the future
- Dismissed the harm from lost licensing income, but on different grounds than the Anthropic judge
The two rulings are in direct conflict on the question of market harm analysis.
Are these rulings the final word on AI training and copyright?
No. These are early-stage rulings from a single district court.
What are the other major AI lawsuits writers should know about?
There are dozens of AI-related lawsuits underway. The cases most directly relevant to writers include:
- Authors Guild v. OpenAI: A class action in the Second Circuit focusing specifically on copyright infringement in the training of AI on fiction. Representative plaintiffs include John Grisham, George R.R. Martin, Jodi Picoult, and others. This case is widely considered the most significant for fiction writers. Unlike other cases, it does not sue over AI outputs, only over training ingestion.
- New York Times v. OpenAI and Microsoft: Consolidated with other cases in the Second Circuit. Discovery is ongoing and has been contentious.
Learn more from The Authors Guild.
If people use AI models to write, aren’t they engaging in plagiarism?
It is exceedingly common for authors and publishing professionals to describe generative AI as “plagiarism,” but that is either a misunderstanding or a deliberate mischaracterization of the technology. Large language models generate new content based on patterns learned from training on vast amounts of data. LLMs do not work like databases; they do not retrieve passages from a handful of sources and stitch them together into an answer. However, if you ask a model a very specific question for which only a handful of sources exist, plagiarism becomes more possible if the model lacks preventive measures (“guardrails”). Most commercial models cite and quote their sources rather than plagiarize.
AI licensing
Do publishers have the right to license my book for AI training?
Under most book publishing agreements, publishers do not own AI training rights—those rights remain with the author. The Authors Guild published a formal statement to this effect, and the major publishers (including Big Five publishers) have so far proceeded as though they agree: they are seeking author permission before licensing books for AI training.
That means publishers cannot sell your book for AI training without your consent under existing standard contracts. However, as AI licensing becomes more common, new contract clauses specifically addressing AI training rights are increasingly important to negotiate.
If a publisher wants to license my book for AI training and asks my permission, what’s a fair split?
The Authors Guild recommends a 75 to 85 percent split in favor of the author for AI training license revenue. In practice, publishers are offering 50-50 splits.
Are publishers already licensing books for AI training, and what has happened so far?
Yes, many deals have been struck, particularly among professional and academic publishers. For example:
- Wiley: The major academic publisher announced two AI licensing deals in June 2024 totaling $44 million.
- HarperCollins: In November 2024, it became the first Big Five publisher to strike an AI licensing deal, covering nonfiction backlist titles. Authors could opt in or out and were reportedly paid $2,500 per title.
- Other media organizations: News Corp, The Atlantic, and others have also struck deals directly with AI companies.
The Authors Guild has stated it sees AI licensing as a potentially good source of income for writers and has been in discussion with publishers about revenue splits.
There are various types of AI licensing, so not all licensing agreements are for the same thing. The two most common types of AI licenses are for AI model training (what most people are familiar with) and RAG (retrieval-augmented generation), which involves teaching AI systems to fetch relevant data from external sources. The latter type is prevalent among nonfiction, academic, and scholarly publishers, as well as news publishers.
Can I opt out of having my books used to train AI?
Practically speaking, the options are limited. If your books are published under a traditional contract, AI training rights remain with you under standard agreements, so you have the right to refuse. However, AI companies have already trained on vast amounts of content, including pirated copies of books, and models cannot “unlearn” ingested data. The Anthropic case also opens the door for any company to legally acquire books (e.g., simply purchase them from a wholesaler, distributor, retailer, or publisher) and use them for AI training, assuming Judge Alsup’s ruling holds. That has led Ingram to introduce mechanisms for client publishers to opt out of selling books to AI companies for training purposes, but Ingram is not the only source from which to acquire books, and it cannot know the intentions of every buyer.
Adding a note to your book’s copyright page prohibiting AI training will have little or no effect if US law considers AI training on texts to be fair use.
Can AI companies create a model that imitates my style or offers expert advice with my name on it without permission or licensing?
Style isn’t protected under copyright, so it’s possible that an AI model could be trained to output in a specific style without permission or licensing. (Here’s a legal discussion.) However, it’s generally not permissible under some states’ right of publicity laws to use someone’s name and likeness for commercial purposes without their permission. Either way, this is an area involving active litigation across creative industries. Some authors are deciding to trademark their name for protection.
AI and traditional book publishers
What AI-related clauses should writers look for—or push for—in publishing contracts?
The Authors Guild has introduced model contract clauses addressing AI. Writers should consider pushing for all of them when negotiating publishing agreements:
- No AI training use: Prohibits the publisher from using or sublicensing the book to train generative AI technologies without the author’s express written consent.
- Audiobook narration: Prohibits the publisher from using AI narration for audiobooks without the author’s prior and express written consent.
- Translation: Requires author consent before any AI translation of the work.
- Cover design: Prohibits fully AI-generated cover art without the author’s prior express approval.
How do traditional publishers use AI so far?
This varies by publisher. Some publishers, such as Microcosm, have essentially declared themselves to be anti-AI. (Read their policy.)
However, all of the Big Five publishers have some amount of internal AI use, formal or informal, and most mid-size, commercial publishers are, at minimum, using AI for business administration and back-office functions, marketing and promotion tasks, metadata and backlist management, and more. Recent publishing contracts ask authors to agree to some use of AI by their publisher for “business operations”; such language exists in the boilerplate contract for Penguin Random House, among others. It’s helpful to take a look at Veristage and Alighieria for examples of AI-powered systems now being marketed to and adopted by publishers. Virtually all publishers are being approached or pitched by AI companies of all kinds, but very few publishers are making public announcements, signing agreements, or making their intentions known.
Some publishers, especially scholarly and professional ones, are using AI to release audiobook editions (ElevenLabs is a leading service for this), as well as to produce translations (see GlobeScribe).
In spring 2026, HarperCollins announced partnerships with AI firms, such as Toonstar, to adapt specific works into short-form videos and animations.
What responsibility do editors have if they use AI on a manuscript without the author’s knowledge?
This is an emerging concern with no clear legal standard yet established. A few things are known:
- Professional editorial norms hold that editors are not supposed to substantially rewrite an author’s manuscript. Their role is to improve, not generate.
- Regardless of what changes an editor makes, the author must review and approve all edits. An author who approves edits is considered responsible for what ends up in the final manuscript.
- If an editor uses AI without the author’s consent, and the author unknowingly approves the resulting text, the copyright and disclosure implications could fall on the author.
Writers should raise the subject of AI use directly with their editor, and make their expectations explicit in writing before work begins. Do not assume your editor’s policy—ask.
What’s to stop publishers, literary agents, or others inside the publishing industry from feeding my manuscript/book into an AI tool for the purposes of summary or analysis?
In spring 2026, The Bookseller reported, based on conversations at the 2026 London Book Fair, that some editors are using AI to generate summaries of manuscripts. The main concern seems to relate to assessment of manuscripts being considered by editors prior to contract.
The situation raises numerous questions: Are editors and others using AI systems that are private and secure to generate their summaries? Do they have permission to do so? Do they have permission explicitly from authors and/or agents? What are the legal repercussions if they do this without express permission?
I have heard anecdotally of AI summarization among scouts, especially to support rights sales in foreign languages; that use may be looked upon more favorably as part of business operations. Regardless, big publishers have already been deploying their own internal and/or proprietary AI models, available only to staff.
Disclosure of AI use & AI detection
Why do some agents ask whether AI was used in a submission or to prepare submission materials?
Agents who ask about authors’ AI use are typically concerned about copyright protection for the work (a practical prerequisite for selling any work to a publisher), or they do not want to work with writers who use AI in any way, shape, or form, or both. However, given the complicated nature of AI and copyright discussed earlier, it is not true that using AI for certain tasks (brainstorming, outlining, editing) forfeits copyright protection. Some agents and publishers prefer to be cautious or to take it off the table as a concern, assuming writers disclose and are honest about their AI use. Today that is a big assumption.
Can an agent or publisher reliably detect AI use in a manuscript?
AI detection software exists and is being used by some publishers (mainly academic, scholarly, and professional publishers) on contracted work, but it operates in probabilities, not certainties. Also, not all AI detection software is created equal; some tools are much better than others. The current industry leader is Pangram. Some AI detection sites exist mainly to scare people into purchasing “humanizing” services, which is one reliable way to identify the less credible operators.
As of early 2026, no Big Five or major commercial publisher is known to make widespread, institutional use of AI detection software to screen manuscripts at any stage of the editorial process. However, some editors at nonfiction houses in particular will use it selectively, as a starting point for conversations with authors who they suspect may have leaned too heavily on AI to produce their manuscript.
AI models trained on my work. Doesn’t that mean AI detection tools will falsely flag my writing as AI-generated?
A common misconception among authors is that because large language models (LLMs) have been trained on their work, AI detection tools will incorrectly flag their own writing as AI-generated. That is not how AI models or AI detection works. Models are trained on vast datasets and no single author has produced enough work to influence outputs in a recognizable, signature manner, unless the model is specifically prompted to imitate a particular style. (That is a separate issue.) The “tells” that AI detection tools look for relate to the curious characteristics of AI-generated writing, not to any individual author’s presence in the training data.
When you see screenshots claiming that the US Constitution or Mary Shelley’s Frankenstein is “100% AI generated,” you are seeing (1) a poor AI detection product and (2) the effect of public domain works appearing so frequently in AI training data.
What are publishers’ policies about AI use in manuscripts?
Publishers are still developing policies but generally expect authors to submit original work that is eligible for copyright protection. Major publishers do not typically consider AI assistance (brainstorming, outlining, research) to be a breach of contract. If unsure, ask.
What can a publisher legally do if they suspect AI use in a contracted manuscript?
This is a challenging issue, and publishers are still working through it. Because AI detection software is not considered legally reliable, publishers cannot simply terminate a contract by saying “our tool says this is AI.”
What publishers can do instead:
- Find the manuscript unacceptable: Publishers may find it more defensible to terminate on the grounds that the manuscript does not meet contractual quality standards, rather than making a direct AI accusation they cannot prove.
- Bring up hard evidence: If concrete evidence of AI use exists—for example, an AI prompt accidentally left in the manuscript—that provides a more straightforward basis for action.
- Worst case: AI chat records (from tools like Claude, ChatGPT, etc.) are potentially discoverable in litigation. If a lawsuit develops over allegations of AI use, a publisher could attempt to compel disclosure of the author’s AI tool history through legal discovery.
What happened in the Hachette/Shy Girl case, and why does it matter?
In early 2026, Hachette Book Group canceled a forthcoming horror novel, Shy Girl by Mia Ballard, due to suspected AI use. The case is significant because it is one of the first instances of a major traditional publisher publicly pulling a contracted title over AI concerns.
Key facts of the case:
- Shy Girl was self-published in early 2025; it was then picked up by Hachette UK. Hachette’s UK edition had been on the market since November 2025.
- Reader communities identified suspected AI use and the discussion spread widely online even before Hachette acquired the book.
- Readers say the author acknowledged AI involvement prior to the New York Times article, attributing it to an acquaintance who edited the work.
The case raises a number of unresolved questions about publisher responsibility, the reliability of AI detection, and the legal risks all parties face when AI use is alleged without definitive proof.
Can my AI chat records be used against me in a legal dispute about my writing?
Potentially, yes. If a lawsuit develops over alleged breach of contract, fraud, or copyright issues related to AI use, the other party could seek access to your AI tool history through legal discovery.
When you use tools like Claude (Anthropic) or ChatGPT (OpenAI), you typically agree to a privacy policy that grants the company the right to disclose your data to a third party in connection with litigation. Writers who use AI tools should be aware that their sessions are not necessarily private in a legal context.
Proving or certifying human authorship
How can authors prove human authorship?
There is not yet any agreed-upon publishing industry standard for providing evidence or proof of authorship. Some writers are creating extensive documentation of their writing and editing process in the event they feel proof is needed or they face spurious accusations. But thus far no publisher or agent is seeking or reviewing such documentation.
What is the Authors Guild Human Authored Certification, and what does it actually certify?
The Authors Guild launched its Human Authored Certification program for members in January 2025, then expanded it to all authors (non-members included) in early 2026 in partnership with the UK’s Society of Authors. The program allows authors to apply a registered certification mark to their books indicating that the text was human written.
Key facts about the program:
- Cost: $10 per book for non-members, free for members
- Permitted AI use: A small amount of AI-generated text is allowed for grammar and spell-check applications. AI use for research or brainstorming does not disqualify a work, as long as the text itself is human written.
- How it’s verified: Currently on the honor system—the Authors Guild does not analyze the manuscript for AI use before issuing certification. Authors represent and warrant that the work is human authored.
The certification is pending registration with the US Patent and Trademark Office. A public searchable database of registered titles allows anyone to verify a book’s certification.
Is this certification meaningful?
The Authors Guild acknowledges that no AI detection tool currently exists that is sufficiently accurate for certification purposes. If and when such a tool becomes available, they intend to use it. Other certification programs exist that do attempt to measure the amount of human-generated material, such as Verify My Writing.
Whether the certification carries meaningful market value for readers and publishers remains an open question. No studies currently demonstrate increased sales or commercial benefit from human authorship certification.
Questions that remain open
Much of the legal landscape surrounding AI and writing remains unresolved. Among the open questions:
- How much human involvement is “sufficient” to claim copyright over AI-assisted content? There is no defined threshold.
- Will AI training on lawfully acquired books ultimately be found to be fair use by the courts? The early rulings are not final.
- How will courts define market harm from AI outputs that dilute or compete with human-authored works?
- What are authors’ rights when AI companies purchase copies of their books for training without any license or permission?
- How will publishing contracts evolve to address AI rights as the technology matures?
Further reading
I maintain an AI reading list that is useful if you wish to explore this issue from many different angles, including applied use of AI. Also, the following articles are available at this site with a subscription to my paid newsletter, The Bottom Line.
Court Rulings & Lawsuits
- Two Court Rulings on AI: Neither Gives Authors What They Want, But There’s Still a Long Road Ahead
- Federal Judge Rules That AI Model Training Is Fair Use, With One Big Caveat
Copyrighting AI-Assisted Work
Licensing & Publishing Contracts
- AI Licensing for Authors: Who Owns the Rights, and What’s the Split?
- Like It or Not, Publishers Are Licensing Books for AI Training—And Using AI Themselves
AI Disclosure & Submissions
- Agents and Publishers Confront AI Use in Submissions
- Hachette Pulls Novel Due to AI Use. What Does This Mean for Publishers and Authors?
Human Authorship Certification
- Free: My Concerns About the Authors Guild Human Authored Certification—and Their Comprehensive Response

Jane Friedman has spent her entire career working in the publishing industry, with a focus on business reporting and author education. Established in 2015, her newsletter The Bottom Line provides nuanced market intelligence to thousands of authors and industry professionals; in 2023, she was named Publishing Commentator of the Year by Digital Book World.
Jane’s expertise is regularly featured in major media outlets such as The New York Times, The Atlantic, NPR, The Today Show, Wired, The Guardian, Fox News, and BBC. Her book, The Business of Being a Writer, Second Edition (The University of Chicago Press), is used as a classroom text by many writing and publishing degree programs. She reaches thousands through speaking engagements and workshops at diverse venues worldwide, including NYU’s Advanced Publishing Institute, Frankfurt Book Fair, and numerous MFA programs.

What a minefield! And an excellent resource.
Hi Jane
While I read every post you share, this one is particularly timely.
Last year I self-published a small book (21,334 words) principally about Noting & Sharing Personal Achievement On-The-Job. BORING topic, yes, but valuable content. This is a good book BUT we all know that doesn’t get a book read.
In an attempt to make the word count even smaller while maintaining the informational integrity of the book, I am preparing to run each chapter through ChatGPT, asking it to retain the message while reducing word count further.
I’m not sure how this will turn out but happy to keep you in the loop if interested.
Best!
Sure, let me know how it goes.
Thank you for this. During a time when the lines are blurred, seeing some guideline in print makes it more tangible.
Super helpful to see this all in one place, Jane. Many thanks.
The discoverability in litigation of an author’s ChatGPT (for example) prompt transcripts may act as a deterrent to overusing the technology – at least for those who are aware of that vulnerability. What do you think?
Possibly! I find it hard to predict.
Thank you.
Thank you, Jane!
You are a treasure, Jane. Thank you for your meticulous research on this important subject.
“Using AI to research, brainstorm, generate outlines, or edit your own work does not affect your copyright, and does not need to be disclosed or disclaimed in any legal sense.”
I meant to comment about this in your last post.
If I understand the above quote correctly, I’d be wary of a nonfiction book on health, baby care, self-help, diet, nutrition, etc. that contained undisclosed passages of research derived from an AI source. When we read nonfiction, we want to trust that the author is personally, experientially familiar with the subject…not that he/she has inserted research gleaned from an AI source and with which the author has no experiential familiarity.
There are dangers presented, certainly, if authors are using AI to research and not confirming/verifying the results. But honestly? Authors have gotten things wrong without AI for as long as books have been published. Publishers don’t fact-check books; they rely on the author to get things right. A recent book, THE EMPIRE OF AI, got a major fact wrong, and it wasn’t due to AI but to author error. https://blog.andymasley.com/p/empire-of-ai-is-wildly-misleading
So AI is just another tool for research and it can be used well, or it can be used poorly, just like any other research method.
Thank you so much, this is really timely and such an important discussion. My current ‘day job’ is in the UK National Health Service where AI is being wholeheartedly embraced to deliver efficiencies for free healthcare. We are working hard to make safe policies and processes – humans need to check AI work, basically.
So I’ve had a ton of training and I’m on various AI networks to keep abreast of things. I have had a good look at using AI for all the steps of creation, for spreadsheets, business cases, reports, etc. For creative writing, I also looked at what AI can do. It’s bonkers! At the start of this year, I made the decision to never use AI for my own creative work. I don’t really need it, and having that policy means I can avoid worrying about where and how I’ve used it.
I think all authors, artists and creatives should have an AI policy statement to say where and how they generally use AI, for the sake of transparency.
I agree with you, Nan, about transparency. Unfortunately, as long as authors morally/ethically condemn the AI use of other authors, or engage in public shaming (which is highly prevalent right now), I find that many folks are unlikely to be transparent about it.
Thanks for the work you do on this, Jane – it’s a great reference.