Artificial Intelligence in Museums: Six Ethical Questions Cultural Institutions Must Address
This article examines six critical ethical questions museums must address when implementing AI systems: accuracy and AI hallucinations, including the use of Retrieval-Augmented Generation to ground AI in institutional knowledge; cloud infrastructure and where institutional knowledge is stored; intellectual property and ownership rights; the role of AI in extending, not replacing, human interpretation and in representing marginalized voices authentically; visitor data privacy; and environmental responsibility. The article argues that AI systems should function as interfaces to curated museum knowledge rather than autonomous authorities, emphasizing that technology must serve the mission and values of cultural institutions. WonderWay is presented as an example of how AI can be built responsibly, accessing museum-approved knowledge libraries rather than the open internet.
When we began building AI systems for museums, the first questions that occupied our team weren't technical ones. They were ethical. How do we prevent misinformation? How do we protect the intellectual labor of curators and writers? How do we preserve institutional ownership of knowledge? And how do we build technology that strengthens cultural institutions instead of weakening them?
Museums occupy a unique place in society. They are not simply producers of content. They are institutions of memory and scholarship. Visitors trust that what they encounter has been researched, debated, and reviewed by experts.
Artificial intelligence has the potential to expand access to this knowledge dramatically. But if it's implemented carelessly, it could also undermine the credibility museums have spent centuries building.
Recognizing this challenge, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence in 2021, the first global framework addressing how AI should be designed and governed. The recommendation emphasizes transparency, human oversight, cultural diversity, and accountability. For cultural institutions, these principles are especially relevant. Museums are not simply adopting a new technology. They are shaping how knowledge, history, and cultural narratives will be accessed in the future.
Accuracy and the Problem of AI Hallucinations
Accuracy is foundational in cultural institutions. A mistake in a museum label does more than misinform a visitor. It can distort historical understanding or misrepresent a culture.
General AI systems struggle with this problem because they generate responses based on statistical probability rather than verified evidence. A peer-reviewed study examining AI-generated academic references found hallucination rates of 39.6% for GPT-3.5 and 28.6% for GPT-4, with other systems producing incorrect citations more than 90% of the time (Chelli et al., 2024).
These errors are not random. They reflect the way large language models work. They predict plausible sentences rather than verifying facts (Liang et al., 2022). For cultural institutions this behavior is unacceptable. Systems deployed in museums must rely on verified knowledge rather than probabilistic guesses.
Retrieval-Augmented Generation: AI as a Reader of the Museum's Library
One way to address the accuracy problem is through Retrieval-Augmented Generation, often called RAG.
Instead of generating answers based solely on training data, a RAG system first retrieves information from curated knowledge sources before responding. In practice this means the system searches the museum's own research archives, collection databases, and educational materials before generating an answer. The system is not inventing facts. It's reading from the institution's own library.
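The retrieve-then-generate loop described above can be sketched in a few lines of Python. This is an illustrative toy, not WonderWay's implementation: the knowledge library, source names, and word-overlap scoring are all hypothetical stand-ins, and a production system would use vector embeddings and a language model API rather than keyword matching. The point is the shape of the pipeline: search the museum's own materials first, then answer only from what was found.

```python
# Illustrative sketch of the retrieval step in a RAG pipeline.
# The "museum knowledge library" below is invented for this example;
# real systems retrieve with embeddings, not simple word overlap.

MUSEUM_LIBRARY = [
    {"source": "Ancient Egypt Gallery label",
     "text": "The limestone sarcophagus displayed in Gallery 3 dates to the Ptolemaic period."},
    {"source": "Collection database record 1042",
     "text": "Bronze astrolabe, Islamic world, 14th century, acquired in 1963."},
    {"source": "Education guide: Dinosaurs",
     "text": "The Tyrannosaurus rex skeleton was excavated in Montana in 1902."},
]

def _tokens(text):
    """Lowercase words with trailing punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, library, top_k=2):
    """Return the top_k documents that share the most words with the query."""
    query_words = _tokens(query)
    scored = [(len(query_words & _tokens(doc["text"])), doc) for doc in library]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(query, docs):
    """Compose a prompt that restricts the model to the retrieved sources."""
    sources = "\n".join(f"[{d['source']}] {d['text']}" for d in docs)
    return ("Answer using ONLY the sources below, and cite the source name.\n"
            f"Sources:\n{sources}\n\nQuestion: {query}")
```

A question like "What period does the sarcophagus date to?" would retrieve the gallery label first, and the generated answer would be constrained to that institution-approved text, with the source name available for citation.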
Researchers who introduced the method describe RAG as combining language generation with external knowledge retrieval so that responses remain grounded in verifiable sources (Lewis et al., 2020). At WonderWay this principle guides the architecture we build. The goal is not to generate knowledge but to make institutional knowledge accessible dynamically while remaining anchored in the authority of the museum.
As digital strategist Nick Hodder has written, "Digital transformation isn't just about tools and technology—it's about aligning those tools with what your museum wants to achieve." AI systems must serve the mission of cultural institutions rather than distract from it.
Where the Knowledge Lives: Cloud Infrastructure
Another question concerns where museum knowledge is stored.
Much of the digital infrastructure used by museums already resides in cloud systems operated by companies such as Amazon, Google, or Microsoft. Many institutions rely on platforms such as Google Workspace or Microsoft 365 for email, collaborative documents, and storage. Industry analyses estimate that more than 60% of enterprise data worldwide is now stored in cloud environments rather than local servers (IDC, 2023).
Even institutions that believe their information remains internal often rely on global cloud infrastructure.
AI systems built for museums can access institutional knowledge through several architectures. The retrieval layer can run entirely on servers managed by the museum itself, or through a trusted platform such as WonderWay that connects to the museum's knowledge libraries without absorbing them into external training models. Each approach has tradeoffs. Internal hosting offers maximum institutional control but requires technical resources and may introduce latency depending on server quality and location. External infrastructure may simplify deployment, but it must be operated by an enterprise the institution trusts.
As Loïc Tallon, former Chief Digital Officer at the Metropolitan Museum of Art, has noted, "Digital practices in museums must serve a sustainable and mission-driven direction." Understanding where institutional knowledge lives is part of that responsibility.
Intellectual Property and Ownership
Copyright and ownership are another critical concern.
In many museums, exhibition texts and interpretive writing are created by staff members as part of their employment. Under U.S. copyright law these works are typically classified as "works made for hire," meaning the institution holds the copyright (U.S. Copyright Office, Circular 30). However, when content is created by external scholars, guest curators, or researchers, rights may remain with the author unless formal agreements transfer them.
In discussions we've had with museums while developing WonderWay, we've consistently reached the same conclusion. AI knowledge libraries should include only materials clearly owned or licensed by the institution. Older catalogues, external publications, and third-party texts cannot simply be uploaded without verifying rights permissions.
Organizations such as the World Intellectual Property Organization, Creative Commons, and the Authors Guild are currently exploring how copyright law should evolve in the age of generative AI. The legal framework is still developing.
AI and the Role of Human Guides
Museums hold extraordinary collections, but visitors encounter only a small portion of them during a typical visit.
Visitor research shows that the average museum visit lasts roughly 90 minutes, yet visitors engage meaningfully with only a fraction of the objects displayed (Falk & Dierking, 2013). Studies of exhibition behavior reveal that most visitors read fewer than 20 percent of labels and spend only seconds scanning them (Serrell, 1996). The challenge is not lack of curiosity. It's the sheer scale of information presented.
Human guides help bridge this gap. A skilled guide can answer questions, adapt explanations, and connect visitors to objects in meaningful ways. But the scale of museum audiences makes it impossible to provide that level of personalized interpretation to everyone.
Can institutions realistically offer guides in every language spoken by visitors? Can they tailor explanations to every age, interest, and learning style? In most cases the answer is no.
AI systems can help extend interpretation into these gaps. They can provide multilingual access and adapt explanations to individual curiosity. But they rely entirely on the expertise created by curators, educators, and scholars.
As UNESCO cultural leaders recently emphasized, "Technology should support cultural interpretation rather than replace it, reaffirming the human essence at the heart of culture." Systems like WonderWay are designed to amplify human interpretation, not replace it.
Cultural Narratives and Multiple Voices
Museums have spent decades confronting another challenge: how to represent cultures and histories that were previously marginalized or misinterpreted.
Many exhibitions created decades ago reflect perspectives institutions are now revisiting. Updating them requires funding, scholarship, and time. Digital interpretation layers may help expand narratives while institutions work toward larger updates.
AI systems could incorporate perspectives from scholars, community representatives, and cultural experts who were not part of the original exhibition. But this raises important questions. Who decides which narratives enter these systems? How should institutions ensure communities are represented authentically?
Recent debates over the removal of historical narratives about slavery in some American museums show how sensitive these questions remain. AI platforms may offer space for multiple perspectives, but how those perspectives are curated remains an open question.
Visitor Data and Privacy
Another ethical question is whether museums should collect visitor data at all.
Museums have historically known very little about how visitors experience exhibitions. Surveys and observational studies provide some insight, but much of the visitor experience remains invisible. At the same time, modern digital life is built on data collection.
Smartphones record location and usage patterns. Browsers track browsing behavior. Email systems analyze communication patterns. Voice assistants process spoken requests. According to the Pew Research Center, 81% of Americans believe the risks of companies collecting personal data outweigh the benefits, yet most people rely daily on services built on that infrastructure (Pew Research Center, 2019).
When handled responsibly, anonymized and aggregated data can help museums better understand their audiences. It can reveal which stories resonate, which topics spark curiosity, and which collections remain overlooked. Such insights may help institutions design more inclusive exhibitions and demonstrate their educational impact.
At the same time, museums operate within a public trust. Visitors should know what information is collected, why it's collected, and how it will be used. This conversation is only beginning.
Environmental Responsibility
Artificial intelligence also raises environmental questions.
Data centers that power digital infrastructure currently consume roughly 415 terawatt-hours of electricity annually, representing about 1 to 1.5 percent of global electricity demand (International Energy Agency, 2024). In the United States alone, data centers account for roughly 4 percent of national electricity consumption, and demand is expected to grow significantly in the coming decade (Pew Research Center, 2025).
Museums themselves also consume significant energy through climate control systems required to protect collections. The question therefore is not whether AI consumes energy. Many infrastructures do. The question is how responsibly systems are designed.
The Responsibility Ahead
Artificial intelligence will increasingly shape how people encounter knowledge.
Cultural institutions cannot ignore this transformation. But they must approach it carefully. Museums have spent centuries preserving knowledge and presenting it responsibly to the public. AI systems should not become autonomous authorities. They should function as interfaces to curated institutional knowledge.
As Australian Museum director Kim McKay has observed, "Museums have to evolve... we have to utilise new technology as it evolves." The challenge is not simply adopting new technology. It's ensuring that the values of scholarship, care, and cultural responsibility remain at the center of how that technology is used.
Bibliography
Chelli, M. et al. 2024. "Hallucination Rates and Reference Accuracy in Large Language Models." Journal of Medical Internet Research.
Falk, John H., and Lynn Dierking. 2013. The Museum Experience Revisited. Routledge.
International Data Corporation. 2023. Worldwide Global DataSphere Forecast.
International Energy Agency. 2024. Electricity 2024: Analysis and Forecast to 2026.
Lewis, Patrick et al. 2020. "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks." Advances in Neural Information Processing Systems.
Liang, Percy et al. 2022. Holistic Evaluation of Language Models. Stanford Center for Research on Foundation Models.
Pew Research Center. 2019. Americans and Privacy.
Serrell, Beverly. 1996. Exhibit Labels: An Interpretive Approach. AltaMira Press.
UNESCO. 2021. Recommendation on the Ethics of Artificial Intelligence.
World Intellectual Property Organization. 2023. Artificial Intelligence and Intellectual Property Policy Discussions.
Creative Commons. 2023. Copyright and Artificial Intelligence Training Data.
Author: Hélène Alonso
Hélène Alonso is founder of WonderWay and a professor at New York University. She is a museum technology leader with over two decades of experience at institutions including the American Museum of Natural History, Liberty Science Center, and the Wildlife Conservation Society. Her work focuses on artificial intelligence infrastructure for museums, institutional knowledge systems, and the future of cultural interpretation.
Key Concepts Covered in This Article
Artificial intelligence in museums
Ethics of AI in cultural institutions
Accuracy and misinformation in AI systems
Retrieval-Augmented Generation (RAG) explained
Museum knowledge libraries and AI
Ownership and intellectual property in AI systems
Scholarly attribution and citation in AI
AI hallucinations and misinformation prevention
Environmental impact of artificial intelligence
AI infrastructure for cultural heritage institutions
Frequently Asked Questions
What are the ethical concerns about AI in museums?
The main concerns include misinformation, AI hallucinations, lack of source attribution, unauthorized data scraping, intellectual property protection, and the environmental cost of large AI systems.
How can museums prevent AI hallucinations?
Museums can prevent hallucinations by using Retrieval-Augmented Generation (RAG), which allows AI systems to retrieve verified information from curated institutional knowledge libraries instead of generating answers from general internet training.
Does AI scrape museum content from the internet?
Responsible AI systems designed for cultural institutions do not scrape museum content from the open internet. Instead, they access curated institutional knowledge through controlled retrieval systems while preserving ownership of the data.
Can museums maintain ownership of their intellectual property when using AI?
Yes. Using architectures like Retrieval-Augmented Generation, museum data remains stored within institutional systems. The AI accesses it temporarily to answer questions but does not absorb it into external training models.
How can AI credit museum scholars and writers?
Responsible AI systems can include citation layers that reference curatorial research, exhibition texts, and scholarly publications. Source transparency ensures that intellectual labor is recognized rather than erased.
What is Retrieval-Augmented Generation (RAG)?
RAG is an AI architecture in which a system retrieves information from a trusted knowledge base before generating a response. This allows answers to be grounded in verified sources such as museum archives, collection databases, and curatorial research.