Who Tells Your Story? Narrative Ownership in an AI-First World
How cultural institutions, artists, and storytellers are reclaiming their voice, and what one Maya museum in Guatemala can teach us about the future of guided experiences.
Who’s answering?
You're standing in front of something you've never seen before. An ancient artefact, maybe. A piece of digital art. A building with a story you can sense but can't quite piece together.
You want to know more. The label can only say so much in three sentences. And your question is probably different from what the curator anticipated. So you do what feels natural now. You pull out your phone and ask ChatGPT.
But who's actually answering?
The confidence problem
Large language models are genuinely useful. They're good at synthesising information, explaining complex topics, and helping people learn. For most everyday questions, they work remarkably well. But they have a quirk. When they don't know something, they find it difficult to say "I'm not sure." Instead, they fill the gap with something plausible-sounding. This makes sense from a design perspective: if you're optimising for helpfulness, any answer feels better than no answer.
For well-documented subjects, this rarely matters. Ask about the Mona Lisa or the Eiffel Tower and you'll probably get something accurate. There's plenty of reliable material online to draw from. But what about a Maya archaeological site in Guatemala City? An emerging artist whose work exists mostly on Instagram? An Indigenous tradition that's been written about more by outsiders than by the community itself?
The less mainstream the subject, the thinner the source material. And thinner sources mean more guesswork dressed up as confidence. The cultures and creators who most need accurate representation are often the ones AI serves least reliably.
The ownership problem
Even when AI gets things right, there's a deeper issue.
That answer came from somewhere. Scraped from the open web. Aggregated. Anonymised. Stripped of context. The people who actually created the knowledge (the curators, the artists, the communities) aren't even in the room.
They don't see what questions people are asking. They can't correct the record when something's wrong. They can't add nuance when something's oversimplified. They're invisible in their own story.
And they certainly don't benefit from it. All that expertise, all that curation, all those years of careful scholarship, and the value flows to whoever built the AI model, not to whoever built the knowledge.
For anyone who's spent time creating something worth knowing about, this should feel uncomfortable. Your work is being used to train systems that will answer questions about you, without you, and you'll never know what they said.
Why this matters for artists and creators
If you're a digital artist working outside the mainstream (generative art, experimental media, anything that doesn't have a Wikipedia page and a hundred blog posts), you're especially vulnerable. AI models know less about you, which means they hallucinate more about you. Your influences get misattributed. Your techniques get confused with someone else's. Your story gets flattened into whatever the model could infer from fragments.
In an AI-first world, underrepresented quickly becomes misrepresented.
Cultural institutions, heritage sites, independent galleries, local historians: anyone whose knowledge is valuable but not abundant online faces the same problem. The technology that promises to democratise access to information is actually amplifying existing gaps. The loud get louder. The niche get noisier.
A different approach
What if institutions could be inside the AI conversation instead of bypassed by it?
This is the idea behind Musa, a platform that lets cultural institutions build conversational AI guides from their own knowledge. The AI doesn't answer from the open web. It answers from a structured knowledge base the institution controls, reviewed by their experts, aligned with their values, updated whenever they want.
When the AI doesn't know something, it says so. When there's genuine uncertainty (contested history, multiple interpretations, ongoing research) it can represent that complexity instead of flattening it into false confidence.
And the institution sees what visitors actually ask. Not what curators assumed they'd want to know. What they actually want to know. Which turns out to be a very different thing.
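For the technically curious, the pattern here is retrieval-grounded answering with explicit abstention. Below is a minimal sketch of that control flow, not Musa's actual implementation: the class and method names are hypothetical, and a toy keyword-overlap retriever stands in for the embedding search and language model a production system would use.

```python
# Hypothetical sketch: answer only from a curated knowledge base,
# abstain when nothing matches, and log every question for curators.
# (Toy keyword-overlap retrieval; a real system would use embeddings + an LLM.)
from dataclasses import dataclass, field


@dataclass
class CuratedGuide:
    knowledge_base: dict[str, str]  # passages reviewed by the institution
    question_log: list[str] = field(default_factory=list)

    def retrieve(self, question: str, min_overlap: int = 2) -> str | None:
        """Return the best-matching curated passage, or None if nothing is relevant."""
        q_words = set(question.lower().split())
        best, best_score = None, 0
        for passage in self.knowledge_base.values():
            score = len(q_words & set(passage.lower().split()))
            if score > best_score:
                best, best_score = passage, score
        return best if best_score >= min_overlap else None

    def answer(self, question: str) -> str:
        self.question_log.append(question)  # curators see what visitors really ask
        passage = self.retrieve(question)
        if passage is None:
            # No grounded source: say so instead of guessing
            return "I'm not sure. That isn't covered in our collection notes yet."
        return passage


guide = CuratedGuide(knowledge_base={
    "ballgame": "The Mesoamerican ballgame was played at Kaminaljuyu with a rubber ball.",
})
print(guide.answer("Tell me about the ballgame at Kaminaljuyu"))  # grounded answer
print(guide.answer("What did the Maya think of Mars?"))           # abstains
print(guide.question_log)  # unanswered questions point to content worth adding
```

The detail worth noticing is the last line: every question, answered or not, becomes signal the institution can act on.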
What this looks like in practice
Museo Miraflores sits in Guatemala City, on the site of the ancient Maya city of Kaminaljuyu. Their collection spans centuries of archaeological discovery: jade masks, ritual objects, reconstructed burials. Their challenge was serving both international tourists and local school groups with wildly different questions and interests.
Working with the museum's team, including Hari, its archaeologist, Musa built a conversational guide that answers questions in real time, in multiple languages, grounded in the museum's actual research. Not generic Maya history scraped from the internet. Their Maya history. The specific objects in their collection. The interpretations they stand behind.
The team started seeing what visitors actually asked. People were fascinated by the ancient ballgame, so Hari and his colleagues added much richer material comparing it with ball games in other cultures. Visitors kept asking about workshops and events, so the guide now keeps track of what's happening around the museum.
None of this would have been possible with a traditional audio guide. You'd need to re-record, re-manufacture, re-deploy. Here, it happened in real time, based on what real visitors actually asked.
The feedback loop
For decades, cultural institutions have worked in a kind of darkness. You create an exhibition. You write the labels. You hope it resonates. Maybe you do a visitor survey months later. But you never really know what questions people had, what confused them, what they walked away still wondering about.
Museums have always been in the education business, but they've been doing it with one hand tied behind their back, broadcasting content without ever hearing what came back. The feedback loop that digital products take for granted never existed for cultural experiences.
At Miraflores, teachers are using the platform to prepare school visits, understanding what's there, finding objects to focus on, and building lesson plans around actual content. That's a use case the team didn't design for. It emerged from how people actually used the tool.
You can read more about the Miraflores deployment in the Musa case study.
Beyond museums
The Miraflores story is compelling, but the idea extends far beyond museum walls.
City tours, where local guides and historians have insights that generic AI can't match. Heritage sites, where communities want to tell their own stories instead of having them told for them. Artists and creators, who could let audiences engage with their work directly, asking questions, exploring context, going deeper than any static artist statement allows.
Anywhere there's curated knowledge and an audience curious enough to ask, there's an opportunity to move beyond one-way broadcasts.
In Canada, cultural networks are using Musa to explore exactly this, with multiple personas representing different perspectives on shared history. An Indigenous voice. A worker's perspective. A historian's interpretation. Not flattening complexity into a single narrative, but inviting visitors into it. Letting them choose whose story they want to hear.
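Purely as an illustration of that pattern (the names, facts, and quotes below are invented, not drawn from the Canadian deployment), each persona can be thought of as a distinct narrative layer over the same underlying facts:

```python
# Invented illustration of personas over a shared knowledge base:
# one set of facts, several narrative framings the visitor can choose between.
SHARED_FACT = "The mill opened in 1887 and employed 400 people."

PERSONAS = {
    "Indigenous voice": "It was built on land our nation never ceded.",
    "worker": "Twelve-hour shifts, six days a week; the mill was our whole life.",
    "historian": "Its output reshaped the regional economy within a generation.",
}

def tell(persona: str) -> str:
    # Same fact, different perspective; the visitor picks whose story to hear
    return f"{SHARED_FACT} {PERSONAS[persona]}"

for persona in PERSONAS:
    print(f"[{persona}] {tell(persona)}")
```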
The bigger picture
AI isn't going away. The question is who gets to shape what it says.
Right now, the default is: whoever's content was scraped. Whatever was abundant online. The mainstream. The loud.
With tools like Musa, institutions, artists, and communities can build their own knowledge bases. They can be the authoritative source and use AI on their own terms. They can see what people ask, respond to what people need, and evolve their storytelling.
The barriers that used to limit who could share knowledge (language, format, cost, and scale) are falling. What matters now is who's in control of the story itself.
Musa Guide is a UK-based startup helping cultural institutions build conversational AI experiences. Learn more at musa.guide.

