AI and Libraries: Why Librarians May Become Arbiters of Reality

Image: at a library in Ciudad de México, the floor is so highly polished that the book stacks and windows are reflected, creating a somewhat surreal and confusing scene. A single visitor is seated at a table in the center.
Photo by Julio Lopez

I’ve written a lot for my newsletter The Bottom Line about AI and publishing—the copyright lawsuits, the controversy over AI detection and disclosure, the new AI-based editorial services—but a recent Book Industry Study Group webinar offered specific insights I hadn’t heard before: librarians as the publishing industry’s early warning system. Librarians are not working in the realm of “what if” when it comes to AI; they’re managing the real-world effects right now.

BISG executive director Brian O’Leary presented survey data about libraries and AI, followed by a conversation with R. David Lankes, a library scholar who has conducted research in the field.

BISG’s survey on AI and what it says about librarians

BISG conducted a survey on AI use across the publishing supply chain, drawing responses from publishers, librarians, and industry partners. The full dataset was presented in an earlier webinar; this session focused specifically on the library segment.

The library respondents skewed toward experienced professionals at larger institutions (100+ employees). More than half reported 11+ years working in the field, with the largest single group falling in the 15 to 25 year range. The headline finding: librarians have a notably higher rate of resistance to AI. About a third reported not using AI and having no plans to do so. (Across the full dataset, it was 20%.)

Less than half of library respondents said their institution was actively using AI. Thirty-one percent said they were unsure, a figure O’Leary said may reflect the reality of working in larger institutions where not everything happening organizationally is visible to individual staff. The most common response to a question about institutional AI policy wasn’t encouragement or discouragement but no policy at all.

Librarian objections to AI included catalogs increasingly filled with low-quality AI-generated work, staff time consumed by identifying and removing “slop,” and the burden of countering false or misleading information—that is, dealing with work that runs directly counter to librarians’ core mission.

About 34% of librarians described themselves as ethically opposed to AI use, a striking share when you consider who is responding: experienced information professionals at sizable institutions.

Librarians resist AI not out of ignorance but skepticism

The BISG session highlighted distinct conversations about AI playing out simultaneously across publishing and libraries alike, conversations in which people are often talking past one another.

One conversation relates to AI literacy. Some argue that AI is a tool and information professionals should understand tools, therefore librarians should learn to use AI and help patrons navigate it. Versions of this argument come along with every major information technology shift, and it carries an implicit assumption: that resistance is unfamiliarity, and education will resolve it. It’s how the field responded to the internet, to mobile, to social media.

But Lankes argued this model doesn't fit what's happening in the library community so far. The resistance isn't coming from people who haven't engaged with the technology. It's coming disproportionately from people who have. When he conducted focus groups with librarians in the "never AI" camp, he found people who could explain large language models, discuss retrieval-augmented generation, and articulate technically why they considered the tools unreliable. They've concluded that a library's adoption of AI would send the wrong signal about what a library fundamentally is.

Nonetheless, AI can obviously aid in libraries’ mission

Lankes offered an example that shows, despite skepticism, there is a clear role for AI to support libraries’ work. A collection of music materials at the University of Texas—thousands of albums and recordings—was effectively invisible to users because it had never been fully cataloged. The resources to do so through traditional means simply weren’t there. So a team developed a solution: use AI to create stub records, each tagged with a confidence factor indicating how certain the AI was about its own identification. The least-certain records could be flagged for future human review and high-use materials prioritized for fully human-generated cataloging.
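The triage logic behind that approach can be sketched in a few lines. Everything here is illustrative: the record fields, the confidence threshold, and the helper names are my own assumptions, not details from the University of Texas project.

```python
from dataclasses import dataclass

@dataclass
class StubRecord:
    """A minimal AI-generated catalog stub (hypothetical schema)."""
    item_id: str
    ai_title: str
    confidence: float  # model's self-reported certainty, 0.0 to 1.0

def triage(records, review_threshold=0.6, usage_counts=None):
    """Split stub records into (a) low-confidence records flagged for
    human review and (b) a queue ordered by circulation, so high-use
    materials get fully human-generated cataloging first."""
    usage_counts = usage_counts or {}
    needs_review = [r for r in records if r.confidence < review_threshold]
    recatalog_queue = sorted(
        records,
        key=lambda r: usage_counts.get(r.item_id, 0),
        reverse=True,  # most-circulated items first
    )
    return needs_review, recatalog_queue
```

The design point is the confidence factor: rather than presenting AI output as settled cataloging, each stub carries a signal about how much human attention it still needs.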

This is not an AI literacy question but a mission question. Can AI help librarians do what they exist to do? There are, of course, parallels to publishing. As I've reported in conversations around accessibility, without AI tools certain backlist titles might never receive the alt text, metadata, and formatting updates needed to serve readers with disabilities. The choice isn't necessarily between AI and a better alternative, but between AI and nothing.

AI and the scalability problem

One of the more fascinating parts of the conversation came near the end, in relation to peer review. Peer review is foundational to academic publishing and, more broadly, to the mechanisms by which society establishes what is known and credible. Until now, that system has functioned because the rate of human knowledge production and the rate of human capacity to review it have been roughly matched. AI is breaking that equilibrium.

As Lankes put it, peer review is simply not scalable given the volume AI can produce. We can’t use the same AI tools that are accelerating content creation to evaluate that content’s credibility, because the trust isn’t there. The result is a genuine threat not just to publishing workflows but to the broader infrastructure by which society determines what is true.

I find this problem is often overlooked in discussions about AI: the question isn't only what AI does to writing and publishing, but what it does to the information environment our industry depends on. You can see it clearly right now in how much authors worry that declining page reads in Kindle Unlimited may be tied to a growing glut of titles, as AI-assisted and AI-generated works enter the Amazon marketplace. If readers have less trust that unfamiliar books will meet their needs or be trustworthy, they're more likely to stick with what they know and not take a chance on something new.

Libraries as arbiters of reality

Lankes suggested that libraries may be moving toward a new function that I would describe as "arbiters of reality." He described librarians as serving as "trusted humans" in the loop: people who can vouch for a source because they have it in their collection, know where it came from, and can go look at it. The trust that librarians have maintained over decades, even as institutional trust has declined in our society, gives them something AI systems don't have: credibility with the people they serve. When a student gets an answer from AI, Lankes noted, they've been trained to be skeptical. When it comes from the library, they tend to believe it.

I imagine that’s why 34% of library respondents are ethically opposed to AI. They’re trying to protect something real. Lankes suggested that AI literacy—teaching people to be appropriately skeptical of AI-generated content—may be hitting a psychological ceiling. If people are required to question everything they encounter, the cognitive burden becomes unsustainable. He suggests that what’s needed isn’t more instruction but something closer to a wellness intervention: relief from constant epistemic vigilance.

Feedback

After I published this piece, I heard from Evan Ottenfeld, who wrote, "I wanted to share with you a project my organization, the Urban Libraries Council, has been working on with the University of Albany, exploring four of our library members and how they're utilizing AI: internally, helping patrons navigate it, and even helping their city governments in some cases. The goal is to explore how libraries can empower communities to better understand AI and ensure its ethical design and application. I know your piece focuses mostly on how some libraries are critical, but this looks at how libraries are helping their communities navigate the emerging technology (often, by request from patrons, and often critically). Currently, we have published three case studies, looking at Queens, Palo Alto, and Schaumburg in suburban Illinois, and a final one from Frisco, Texas, is on the way. You can find the project here."

Comments
Linnea R Michel

I was a library manager back in the dawn of the Internet, working for a forward-thinking library director, at a time when reference staff were anticipating the arrival of Google, seeing it as a potentially important tool for reference work. Our library director was actively positioning librarians as guides and navigators of the internet. She asserted that librarians are uniquely qualified to be arbiters of information and trained her staff to help patrons evaluate the authority of websites and the reliability of information found on the internet, and we developed a healthy skepticism. It comes as no surprise that librarians are deeply skeptical of AI. But I expect they will keep their eyes on it, evaluating, as it develops.

Taylor Eckstrom

Good point! I remember as a kid we learned how to use Wikipedia and never used it as a source. The thing about Wikipedia is it is at least edited by humans. GenAI spits out an answer with confidence, essentially saying “I’m right”, but its sources are often questionable and it pulls random things out of context. If it’s a quick “Hey Google, when were traffic lights invented?” that’s a starting point for research, not the research itself and I think that’s the key difference. It scares me that people just get all their answers and “research” from ChatGPT.

Sally M. Chetwynd

And Wikipedia, although edited by humans, is often fallible, because no one is monitoring what humans add to it. I once came across an entry in Wikipedia that stated that Mary Todd Lincoln was married to Edgar Allen Poe. Who believed that and entered it? Who didn’t fact-check it?

I use Wikipedia for general information. When I need more depth, I use other online sites, and (most often) print books on my subject of interest.

Another, related concern I have is the shift away from print books to electronic books. Hackers have and use their illegal skills to break into electronic files and rewrite the content to suit their agendas, with no compunctions about honoring author copyright. Therefore, I will always produce my books as print copies first, so that there is dated proof of what I said, how I said it, and when I said it. Any hacked electronic copies can then be proven as adulterated.

Renée Silvus

I appreciate how you’re monitoring this topic and have now included librarians. They’ve always been our arbiters of sound information and direction. Now what? It’s messy developing AI slop radar while making the occasional slip, and we’re overwhelmed enough, to your point about “constant epistemic vigilance.” Thank you for some studied forewarning as we navigate the churn.

Louis Shalako

“If people are required to question everything they encounter, the cognitive burden becomes unsustainable.” When I was a little kid, it struck me that my world was encompassed in a radius of five miles and after that, I was taking my parents’ word for it. The cognitive burden has been sustainable so far.

Dave Malone

Thank you for this illuminating post, Jane. I found this particularly compelling: “As Lankes put it, peer review is simply not scalable given the volume AI can produce. We can’t use the same AI tools that are accelerating content creation to evaluate that content’s credibility, because the trust isn’t there. The result is a genuine threat not just to publishing workflows but to the broader infrastructure by which society determines what is true.”

Ellen Foster

Thank you for a very tight debrief on valuable research! This was a very helpful read. My background is insight development & covers all kinds of data + AI tools, with a couple of years in an ad agency library. HT to other commenters – I prefer guide/expert/navigator/professional type terminology to skeptic which carries that touch of fear/luddite. Are there other studies you recommend for further reading? Thank you again.

Taylor Eckstrom

This article is insightful Jane, thank you for this information.

I am an archives/history grad student so thinking about AI in libraries is incredibly disheartening. It seems like all of my classes discuss AI use, and it’s important I guess to know how to deal with it, but our conversations pretty much boil down to 1. It won’t save us time and 2. It’s unethical. Catalogers have to go back and correct errors made by AI all the time when they could have just done it themselves correctly to begin with. Faster is not necessarily better. Of course we want to make things accessible to users, but there are better ways to do that than “let the robot do it”. I’ve read about so many archival cataloging projects that justify using AI because of staffing issues, etc. Meanwhile, my friends and I are applying to any internship we possibly can with no response. I have offered to help with cataloging projects (for free) for the experience, and was turned away only to learn that AI is doing it instead. I specialize in music history/archives and I could go on and on about the issues with relying on AI to identify and catalog music.

Are we going to have AI do description? Because there are *so* many ethical issues with that as well. As an archivist my job is to document, preserve, and make accessible human activity and cultural/collective memory. Having a robot do that is so ironic and depressing.

Laura Sanders

As a librarian in my forties, I am in favour of judicious use of AI in libraries. Early in my career, I witnessed many “old guard” librarians resist online research, e-book implementation, graphic novels, etc. and insist on 20th century ways of doing things. (When I started my current job in 2015, there was still a card catalogue!) I feel strongly that this was to the detriment of the profession, because it contributed to the common misperception of libraries as outdated and irrelevant. To be frank, it’s taken my generation of librarians decades to undo the damage our tech-averse predecessors caused and demonstrate that libraries still play a vital role in society.
So I don’t want to be responsible for that happening again with AI. I have embraced AI and tried to use it in ways that improve my workflow rather than replace it. For example, I conducted a collection analysis with it, which gave me useful insights into my patrons’ borrowing habits that I would not have noticed if it was just me analyzing the data. I use it to correct professional communications in my second language. But I apply critical thinking in my use. I would not use it for cataloguing, for example, as it requires precision and nuanced decision-making that AI is not capable of.
I would need to look at the BISG’s survey to see if it addressed this, but there is another issue around the idea of librarians as “arbiters of reality.” Unfortunately, due to the continued uptick in censorship efforts across North America, public trust in librarians has been steadily eroding. We are believed to be pushing a “woke agenda” when we are simply trying to ensure fair access to resources. This concerning trend may be contributing to the psychological ceiling Lankes mentioned. Not because of anything to do with AI, but because library workers’ professional qualifications are increasingly dismissed and minimized.

Sarah Eiseman

Hear! Hear!

Linnea R Michel

Until I worked in a library, I never appreciated the role of libraries in protecting intellectual freedom, nor the critical role of intellectual freedom in a democracy. It’s critical—and we’re living with proof of that.