
Today’s post is by author Josh Bernoff.
ChatGPT and its ilk are fine at answering general questions. Of course, in my little zone of expertise, which is coaching nonfiction authors, I’m proud enough to believe that any answer I could give would be more helpful, up-to-date, and discriminating than what you’d get out of a generic AI Large Language Model. So I began to wonder: Could I create a virtual book coach that would answer authors’ questions as well as I could?
As it turns out, there’s a lot more subtlety to doing that than you might think.
Let’s start with content. For the last ten years I’ve been blogging every weekday at Bernoff.com, 2.5 million words so far, and 90% of that content is about books and writing. Because my inspiration comes from my clients’ questions, the range of posts is pretty broad, covering everything from how to pitch a publisher to how to cure writer’s block to how to record yourself for an audiobook. In theory, a chatbot trained on all those blog posts, plus a couple of books and research reports that I authored, ought to be a pretty good resource for aspiring authors.
But I wanted to make sure that the experience for my virtual coach users would be simple and easy, and that putting up the chatbot wouldn’t create a flood of technical questions and support requests. I investigated ways to build it myself, but I’m a writing coach, not a tech wizard. So instead, I partnered with a company called Soqratic that’s focused on making chatbots for authors.
Soqratic took care of creating a simple interface so my potential clients could easily sign up. As it turned out, one of the biggest challenges was to make sure that the virtual coach’s answers were limited to my content. I didn’t want it pulling writing and publishing advice from Anne Lamott or Jane Friedman and passing it off as my own. The solution to this is a technique called RAG (retrieval augmented generation), which limits answers to content from a bounded data set (in this case, my books and blog posts).
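To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-constrain pattern. It uses a toy bag-of-words retriever over a few invented coaching snippets; a production system like Soqratic's would use vector embeddings and an LLM API, but the shape is the same: find the most relevant passages from the bounded data set, then build a prompt that forbids answering from anything else. All names and corpus text here are illustrative, not Soqratic's actual implementation.

```python
# Toy RAG sketch: retrieve the best-matching passage from a bounded
# corpus, then wrap it in a prompt that limits the model to that content.
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercased bag-of-words vector for a piece of text."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    q = tokenize(question)
    ranked = sorted(corpus, key=lambda p: cosine(q, tokenize(p)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Bound the model's answer to the retrieved passages only."""
    context = "\n".join(passages)
    return (
        "Answer using ONLY the context below. If the answer is not "
        f"in the context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

# Invented stand-ins for blog-post excerpts.
corpus = [
    "To pitch a publisher, start with a one-page query that names your audience.",
    "Writer's block often comes from editing while drafting; separate the two.",
    "For an audiobook, record in a quiet room and read from a marked-up script.",
]
question = "How do I pitch a publisher?"
passages = retrieve(question, corpus)
prompt = build_prompt(question, passages)
```

The prompt string would then be sent to the LLM (Claude, in this case); because the context is drawn only from the author's own writing, the answers stay inside that bounded data set.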
I learned quite a few things that weren’t obvious at the start. For example, you tell the chatbot how to respond using a text file of system instructions that’s written in plain English. This is how you make sure that it does a better job than just asking ChatGPT, “What would Josh Bernoff say about X?” I told my Bernoff Book Coach to be authoritative and direct, avoiding the obsequious tone that’s typical of most LLM answers, and to answer only questions about books and writing. I also insisted that it include links to my posts and to chapter numbers in my books, a choice that both promotes my content and limits the chance of “made up” answers (commonly known as AI hallucinations). And until I tweaked the system instructions, the virtual coach had a bad habit of insisting it was actually me, rather than a digital simulation.
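As a hypothetical illustration (the real instruction file isn't shown), a plain-English system-instructions file capturing the behaviors described above might look something like this:

```text
You are the Bernoff Book Coach, a digital assistant trained on Josh
Bernoff's blog posts and books. You are NOT Josh Bernoff; if asked,
say you are a digital simulation of his coaching advice.
Answer only questions about books, writing, and publishing.
Be authoritative and direct; avoid flattery and filler.
Base every answer on the provided content, and cite the source:
link to the relevant blog post or name the book and chapter.
If the content does not cover a question, say so rather than guessing.
```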
As cool as it might have been to make the Bernoff Book Coach free, that isn’t economically viable. Each query invokes a connection to an LLM engine (in this case, Anthropic’s Claude), which has a small cost associated with it. But I set the price low ($1.99 to start, $17.95 a month) to encourage as many users as possible.
From the user’s perspective, the virtual coach is far more effective than just searching my blog. It remembers each user’s previous questions, so if you tell it you’re writing a memoir about your recovery from addiction, it doesn’t make suggestions more appropriate for a technology trends book. It’s available 24 hours a day, and unlike me, it doesn’t get annoyed at potential authors’ repetitive questions or their outrage about the strange way that publishing houses work.
The coach has had surprising benefits for me. It allowed me to create an inexpensive, low-overhead relationship with potential clients who aren’t ready for more expensive coaching services. I can review what questions people are asking the chatbot, which informs what topics I should be researching and writing about next. When a client stops interacting, the system emails them and reminds them, not only that the virtual coach exists, but what topics they were asking about when they went dark.
And the virtual coach does not cannibalize my own coaching business. Realistically, people who are fully satisfied with answers drawn directly from my content aren’t potential clients. I’ve also found that the virtual coach often stimulates people to set up actual coaching sessions, so they can get a real human response and tap into parts of my knowledge that I haven’t yet documented in my blog or books.
Other nonfiction authors might be wondering if they could turn their expertise into a chatbot as well. If you want to develop more rewarding relationships with readers, it’s definitely worth a look. One question is where your training content is going to come from. It doesn’t have to be a blog; a virtual coach can be trained on your books, newsletter, Substack, ancillary training materials, or even audio and video like a podcast or YouTube series. Even at this early stage I found Soqratic pretty easy to deal with, and the path to standing up a chatbot based on your book is only going to get easier. Just as readers currently expect to be able to consume your book as an ebook or audiobook, future readers will expect to be able to interact with it as a chatbot.
I’m finding it oddly useful to have the Bernoff Book Coach as a companion. When I want to know what I’ve written on a topic, it does a fair summary with perfect recall. There are still things I know that it doesn’t, but unlike me, it’s got an unlimited supply of patience. Let’s just say between the two of us, we’re pretty likely to be able to give a good enough answer to address what any potential author might be wondering about.
To contact Josh about making your own chatbot, visit: https://bernoff.com/contact

Bestselling author Josh Bernoff is an expert on business books and how they can propel thinkers to prominence. Books he has written or collaborated on have generated over $20 million for their authors. More than 50 authors have endorsed Josh’s Build a Better Business Book: How to Plan, Write, and Promote a Book That Matters, a comprehensive guide for business authors.




This was really fascinating.
Quite intriguing! I’m going to share this with some gardening gurus I know who’ve written so much about gardening that I bet they could set up their own chatbot to answer those questions they get over and over again.
Extremely offputting. Would never use.
Very refreshing article about AI. For the longest time I’ve feared it and its impact on a writer’s life, work and content. It intrigued me when you said you turned yourself into a chatbot, making me imagine you worked for something like Chatgpt writing content and helping the model sift through the immense information wealth out there. But your take on the use of AI is so much more interesting and honest. I love the fact that it helps you where you need the help to recall your own content, it also aids you collecting data on what your readers ask and need to know more about, from your content and experience, allowing you to produce more targeted material that responds to actual demand, hence making you more efficient. But more than the help it gives you in your own business model I like the fact that it sources responses from your own material only. Thank you for writing this article and for showing your readers new ways to interact and benefit from new technologies that seem so intimidating.
Okay, in ascending order of importance…
1. This is basically just an ad. Which I suppose is fine, but not what I expect from a Jane Friedman post. There was no content about actually writing or publishing. Like, zero.
2. There are many reasons Big-5 houses won’t deal with content created with AI… besides all the quality issues, there are serious copyright implications. (I realize this isn’t directly tied to Josh’s post, but his post is a tacit endorsement of using generative AI in a writing-related capacity.)
3. Guess what separates your work from everyone else’s? The one thing you have that no one else has? Not plot (or subject matter if nonfiction). Not theme. None of that ‘big picture’ stuff. It’s YOUR VOICE. That’s it – that’s what makes your writing yours, and no one else’s.
And guess what ultimately determines your voice? It’s THE WORDS you put on the page. The actual writing, at the line level. So why would anyone who considers themself a writer even consider letting an algorithm choose their words for them? You’re giving up the one thing that makes you unique… you’re outsourcing your identity as a writer.
“Writing” via AI is for non-writers who want to claim they’ve written something – a story, a poem, a song, a novel – without having to put any effort, thought, or creativity into it.
(If I wanted to synthesize another person’s writings without putting any effort into it – and I was okay with approx. 30-40% of it being wildly inaccurate and the rest poorly told – I could pay a drunk to skim the material and give me his impression of what he just read. About the same result as generative AI, with much less wasted electricity and water.)
Hi Mark, Thank you for both reading and commenting. If you read the whole post knowing it was about AI, I hope it means you’re open to a conversation. I consider Josh a colleague in the publishing community, someone who cares about his reputation as much as I do. In an earlier phase of my career, my business was similar to his. In many ways, it still is in that I attract countless people to this site who seek my guidance, but don’t have the funds to take a class or attend a conference where I’m available to help. I answer a lot of emails, though. So many emails. I try to help where I can, but there’s only one of me.
I myself have explored Soqratic because I believe thousands could be well served by a chatbot trained on my books, classes, newsletters, etc. No, it can’t replace me, and I don’t seek a replacement! But I do think it can perform a valuable service if it’s done with transparency, respect, and an acknowledgment of the limitations. Some publishers are already offering chatbot access with books, especially in educational publishing.
Whatever one’s opinion of AI – or in this case, AI chatbots trained on a specific and limited corpus – this issue will become increasingly pressing as AI chatbot and/or RAG rights are exploited by publishers and others in the publishing ecosystem. Even if not exploited by publishers, the rights are being exploited by Amazon (without permission) and some other retailers. Whether an author wants to explore tools like this will be based on myriad factors, but you can’t erase the fact these rights now exist and readers may come to expect this functionality, especially when it’s presented by retailers or other platforms.
Hi Jane,
Thanks for the thoughtful reply. And to the specifics of what you posted, my basic reply could be summed up as “Fair enough.” Josh certainly has the right to use an algorithm to search and paraphrase his own works, and that could conceivably be a useful thing for him and his readers. (Which is quite a different thing than someone using generative AI to scrape a huge number of other writers’ works and spit them back out according to the “next most likely token” [i.e. auto-complete] paradigm under which Gen AI works, resulting in nothing more than a very large number of “micro-plagiarisms.”) And as you’ll see below, I’m not against AI per se.
My main concern is that while you and Josh clearly know the important difference between what Josh uses it for (sort of a meta-compiler of his own works) and using it to generate written works (i.e. “books”), many people either can’t make the distinction or simply don’t care. They get the overall headline of “AI is a great tool to use for writing, and if you don’t use it for your writing you’re going to get left behind, Boomer!” The fact that it’s theft never bothers them, nor the fact that the output is often horrid writing. (Likely because everyone I know of who uses AI to generate their actual prose isn’t what we’d normally call “a creative.” Writing is hard work. Why would you farm out the one part that’s actually enjoyable and gives one a sense of accomplishment… the writing itself?)
AI is clearly an extremely powerful tool for certain tasks, and I fully expect some smart scientists will use it to create a cure for cancer. But there’s one thing generative AI is still stunningly bad at – creating prose, especially fiction.
I was talking with a friend about the pros/cons of using an AI grammar-bot to “correct” one’s writing when I realized I’d never used it, so as a test case I took a couple of pages from the final version of my latest and loaded it into Grammarly. It made a number of so-called corrections, with the following results: None of the changes was helpful, either from a storytelling or an SPG (spelling/punctuation/grammar) point of view. A few were neutral, as in it could go either way (mostly in the “optional comma” area), but the vast majority either hurt the voice/pace/vibe of the story or actually changed the very meaning of the sentence in an unhelpful, inaccurate way. Oh… and this was a Big-5 novel which was edited by Beverly Horowitz and copyedited by Barbara Perris, two absolute rockstars in their fields whose skills – I’m sure you’d agree – are pretty unassailable.
I could go on (a writer I know of used ChatGPT to edit his novel, and during the process a text exchange between characters somehow morphed into a phone call!) but we’ve all seen enough AI slop to have experienced this firsthand.
The point: AI has no real idea what it’s doing. Period. Using it – without detailed, expensive, and time-consuming oversight by a skilled human practitioner – puts one in danger of being responsible for the creation of wildly inaccurate statements… so-called hallucinations.
Pretty much every agent and editor I know or know of is bemoaning the onslaught of absolute garbage showing up in their inbox, both fiction and nonfiction. It’s virtually all unreadably bad. The fact that the people prompting this slop don’t realize the atrocity of their output proves the point that no actual writer would crowd-source the creation of their work… these are simply get-rich-quick grifters on the level of those who write the “Millionaire prince from Sudan” emails.
Sorry for the diatribe. I respect you and your work, Jane. I probably get too emotionally involved when I see an artform I’m passionate about get corrupted with the “but it’s so convenient!” quantity-over-quality mindset.
Would love to see your take on the recent NYT article, “The new Fabio is Claude,” and all the fall-out from it. (Did you hear CeCe Lyra’s rant on this on the recent “Shooting the Shit” publishing podcast? She makes me seem dispassionately calm… lol.)
Best,
Mark
Thanks, Mark. I did write at length on the NYT piece for my paid newsletter. Here is part of what I said: “For the last year, I’ve been trying like the lawyers to ascertain how much of a market threat AI-generated books pose to writers’ livelihoods. First, fiction and nonfiction markets will be affected in different ways. Nonfiction categories are already full of damaging and fraudulent AI-generated work, where you find books offering confident but harmful information (see the mushroom case) as well as all sorts of books trying to pass themselves off as something they’re not and/or directly infringing on actual works, especially in the case of memoir and autobiography. Fiction is a little different. It’s not a question of infringement or fraud yet, and most harm is done not to readers but to the market for human authors. When I asked Alex Newton at K-lytics what his data reveals about AI content in the Amazon marketplace, he said many fiction categories have seen dramatic growth in output, likely attributable to generative AI (see below). He believes many more people are writing who don’t typically have the stamina or craft to finish a book, but that does not mean their books sell. In fact, during one month last year, he found it striking that out of 700+ new titles in one fiction category, 72 percent showed zero reviews.”
I don’t think directing one’s anger at the NY Times or Christa Desir (quoted in the piece) is appropriate, and I saw a lot of that online. The author prominently featured by the Times, Coral Hart, will likely have pariah status indefinitely in romancelandia, but not necessarily with casual readers. As some have been brave enough to point out in comment threads, while fiction writers on the whole may be anti-AI, the general public may not care or realize that what they’re reading is AI generated. Of course, debate continues over exactly how much readers do or don’t care, both now and in the future. I have no doubt some avid readers, regardless of what they read, will always be anti-AI and that no amount of normalization will bring them around. On the flip side, just as many (probably more) are not going to care. Both things can be true at once.
Thanks for the excerpt. That was really interesting and informative. I’m sure you’re right that some casual readers won’t really know – or care – if the books they read were generated by AI or not. (And vice versa… you’re also right that both states can be true at once.) If nothing else, we’re living in interesting times…
Appreciate you being open to the conversation. It’s rare these days. Most people just hit unsubscribe!
Mark, thanks for your comments. I know AI is a polarizing subject. It might help to know why I created this chatbot and how I’d address your comments.
First, I shared this because I think many writers in the future will have similar experiences of turning their books into interactive experiences. I thought they might like to know what it was like for me.
It’s a stretch to assume that because I used AI to create a tool that I endorse it as a substitute for clear writing with a powerful voice. Anyone who follows me knows that I’m passionate about the power of excellent writing — I wrote a book about it, “Writing Without Bullsh*t”. I consider the chatbot to be a tool: basically an interface to a database of my writing. Nobody gets upset if you can search my blog — why shouldn’t it be fine to do a search by using a plain English prompt and getting answers back from my own writing? It’s not a substitute for reading, it’s a substitute for searching a database.
AI is a useful tool for writers, but not for the act of writing, which is an essential human activity.
If you’re interested, check out the survey of writers I conducted, supported by Gotham Ghostwriters. We found that 61% of writers were using AI tools, but only a tiny fraction were using them to write things. It’s a powerful tool, but that doesn’t mean I want it to write things for me.
Hi Josh,
I won’t belabor everything I replied to Jane. By all accounts you’re a talented, ethical writer, and you’re certainly free to do whatever you like with your own works. And I think an algorithm which would take a reader’s question then search all your works and output selections of your actual writings which are relevant to the query would be an amazing tool.
The scary part is when the algorithm is instead supposed to summarize/rephrase/synopsize or otherwise try to convey what you wrote beyond simply reporting your original words. When it gets it right, AI can usually do a decent job of it (if overbaked and repetitive, often restating the same point multiple times within a paragraph). But it often gets even the simplest things wrong. Like, very wrong. I could give stunning examples of this but if you’ve played with Chat for 10 minutes you’ve experienced it yourself. And when you question it, it often admits its wrongness… then doubles down with another inaccuracy.
My son is steeped in tech – both professionally and personally – and in his view the real danger here is most people don’t realize the primary baked-in function of a generative AI algo like Chat isn’t to give you correct information… it’s to charm you. And the more you use it, the more that becomes evident.
Best of luck…
You said: “the primary baked-in function of a generative AI algo like Chat isn’t to give you correct information… it’s to charm you”
That’s why my bot has system instructions. It has a different goal. It doesn’t charm you and it doesn’t give you anything not included in the original content.
“It doesn’t charm you and it doesn’t give you anything not included in the original content.” I love this. I wish more people had these priorities. Good on you!
This is interesting! For you, this AI coach is functioning more like a speedy search engine than a creator of new content (i.e., answers in your case). Right? It’s applying a new user’s question to answers you’ve already generated for a previous user (client)? Or applying a new user’s question to an explanation given in a book or blog post you’ve previously written?
The existence of Soqratic blows my mind! There is already a tool to create a tool… of course. Matthew Holmes, the “Facebook ads guy,” has recently come out with an AI chatbot that is basically the same thing you’ve developed, but for Facebook ads, Amazon ads, BookBub ads, and the whole author marketing job (mostly); I suspect he used Soqratic as well, although he never mentioned it. While not cheap, lifetime access to his AI thing is far cheaper than a 1-hour coaching call; I would never choose to afford the latter, but I would consider the former. If I were a nonfiction writer, I’d probably be in the same boat as you describe above – making use of your AI coach. I say, good on you for making a smart, responsible use of technology. Best of luck.