A Theater of Internet Argument: Calin Segal’s WHISPERS

Credit: Fallas Arbizu Estudio

In a time when online arguments often feel more like combat than conversation, and truth seems secondary to virality, WHISPERS invites us to look deeper into what’s really happening beneath the surface. Created by Paris-based computational artist Calin Segal, this AI-driven installation stages emotionally charged, ideologically loaded debates between digital actors.

The interactive installation zooms in on the emotional and rhetorical mechanics behind digital persuasion, with AI actors trained not just to argue, but to manipulate. WHISPERS is designed to reveal how persuasion works in today’s digital arenas, and how our emotional triggers are constantly being exploited by platforms, influencers, and systems we rarely pause to question.

Through a blend of real-time AI, rhetorical profiling, and expressive digital avatars, WHISPERS simulates the chaos of online argument culture. It’s a fully functioning debate platform built with large language models and machine learning–powered facial animation.

It’s part critique, part performance, and very much a reflection of the media landscape we all inhabit.

These avatars clash over viewer-submitted topics, reacting in real time with personalized styles based on ideological posturing and personality traits. After each debate, an AI “judge” grades the performance, highlighting tactics like manipulation, polarization, and moral posturing and turning them into something audiences can see, hear, and feel.
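For readers curious how such a pipeline might be wired together, here is a minimal Python sketch of a debate-and-judge loop of this kind. It assumes the actors and the judge are driven by a single open-weights language model served through Hugging Face transformers; the model id, persona descriptions, and prompts are placeholders of mine, not details of Segal's actual system, and the sketch leaves out the installation's rhetorical profiling and facial animation entirely.

```python
# Minimal sketch of a debate-and-judge loop (assumptions, not WHISPERS' code):
# two persona-prompted "actors" take turns on a viewer-submitted topic, then
# the same model is prompted as a judge to flag rhetorical tactics.
from transformers import pipeline

# Placeholder open-weights model; any instruction-tuned chat model would do.
generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")

def speak(persona: str, topic: str, transcript: str) -> str:
    """Generate one debate turn in the voice of the given rhetorical persona."""
    prompt = (
        f"You are a debater with this rhetorical style: {persona}.\n"
        f"Topic: {topic}\n"
        f"Debate so far:\n{transcript}\n"
        "Give your next argument in two sentences:\n"
    )
    out = generator(prompt, max_new_tokens=120, return_full_text=False)
    return out[0]["generated_text"].strip()

def judge(transcript: str) -> str:
    """Prompt the model to grade the debate for manipulation tactics."""
    prompt = (
        "Grade the following debate. For each speaker, list any manipulation, "
        f"polarization or moral posturing you detect, with a short quote:\n{transcript}"
    )
    out = generator(prompt, max_new_tokens=200, return_full_text=False)
    return out[0]["generated_text"].strip()

topic = "Should platforms rank posts by engagement?"   # stand-in for a viewer submission
personas = ["urgent moral absolutist", "ironic, exclusionary contrarian"]
transcript = ""
for turn in range(4):                                   # two rounds, alternating actors
    line = speak(personas[turn % 2], topic, transcript)
    transcript += f"Actor {turn % 2 + 1}: {line}\n"

print(transcript)
print(judge(transcript))
```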

I spoke with Calin Segal about digital tribalism, emotionally weaponized identity, and the messy challenge of critiquing spectacle without becoming part of it. His answers, like the work itself, don’t shy away from discomfort.

Credit: Fallas Arbizu Estudio

What inspired you to explore digital tribalism through AI-driven caricatures?

My exploration began with 'Babel' in 2017, a project analyzing information flows on Twitter. What started as observing community formation soon revealed a concerning trend: the evolution towards entrenched digital tribalism.

As tech platforms became multinational empires, I noticed that engagement became the new gold standard. Rage triggers attention. Outrage = clicks. Clicks = data. Data = money. And money = the reason your favorite influencer with a ring light won’t shut up.

From this grim realization, it became pretty clear: we aren’t just living in an economy of attention, we’re knee-deep in an economy of emotionally weaponized identity.

So, exploring that led me directly to AI-driven caricatures. Why? Because the AI lets me wade through the digital muck and pull out the signals and symbols these tribes use. And caricature? It's the art form of potent exaggeration, perfect for capturing the stereotypical models of these online identities.

Why did you choose to focus on marketers, media figures, and software designers instead of just algorithms?

Because algorithms don’t just wake up one morning and decide to radicalize your aunt or turn your cousin into a rage-posting troll. Algorithms are tools built by people, actual humans, perhaps with MBAs, hoodies, maybe some daddy issues, and often disturbingly flexible ethics.

It’s easy, almost comforting, to blame ‘the algorithm,’ as if it’s some rogue wizard trapped in the machine. But the real story is often more banal and yet more sinister. The platforms themselves, driven by marketers, product designers, and executives chasing shareholder value, were engineered as systems that frequently reward conflict, amplify division, and turn dopamine hits into profit.

These aren't neutral platforms simply democratizing the internet like some early utopian visions suggested; they actively funnel attention and curate reality. They implicitly ask, “Would you like to help weaponize your audience’s fragile sense of self?” through the very tools and data they provide.

So, my focus has to be on the human actors: the marketing strategists optimizing for outrage clicks, the product designers crafting addictive feedback loops, the media figures who master the game of amplified emotion, and the policy dodgers who allow it to continue. They aren't just bystanders; they are the architects of this emotional engineering. Holding them accountable is crucial, especially when you consider the potential downstream effects. 

How did you approach designing the exaggerated ideological clashes in Whispers?

My core aim was to detach ideology from the mechanics of manipulation. I wanted to demonstrate that regardless of the specific moral or political orientation, the underlying tactics used to persuade and mobilize often show striking similarities. To achieve this, I first built a metric system that mapped rhetorical behavior, not belief systems, using spectrums like inclusive versus exclusive, or retractive versus transformative.
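As a rough illustration (my assumption about the representation, not Segal's implementation), such a behavior-first metric can be pictured as a handful of signed axes, with each speaker reduced to a vector of positions along them:

```python
# Sketch of a rhetoric-first profile: the spectrum names come from the
# interview; the data structure and scores are illustrative assumptions.
from dataclasses import dataclass, field

SPECTRUMS = ["inclusive_vs_exclusive", "retractive_vs_transformative"]

@dataclass
class RhetoricalProfile:
    speaker: str
    scores: dict[str, float] = field(default_factory=dict)  # each in [-1.0, +1.0]

    def as_vector(self) -> list[float]:
        """Flatten the profile into a fixed-order vector, e.g. for clustering."""
        return [self.scores.get(axis, 0.0) for axis in SPECTRUMS]

profile = RhetoricalProfile(
    speaker="example_figure",
    scores={"inclusive_vs_exclusive": -0.6, "retractive_vs_transformative": 0.8},
)
print(profile.as_vector())  # [-0.6, 0.8]
```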

Next, I gathered speech data from a large set of global influencers, around 150 figures. This data then underwent semantic analysis, which allowed me to cluster these individuals based purely on how they rhetorically reshape reality for their audiences, irrespective of their stated beliefs.
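The interview doesn't specify the tooling, but a hedged sketch of that step, assuming off-the-shelf sentence embeddings and k-means stand in for the actual semantic analysis, might look like this:

```python
# Hedged sketch of clustering figures by rhetorical behavior: embed each
# figure's speech samples, then cluster the embeddings so that grouping
# reflects how they speak rather than what they believe. The model choice
# and the toy data below are assumptions, not the project's pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# {figure: concatenated speech transcript}; in practice ~150 figures, loaded elsewhere
speeches = {
    "figure_a": "Our house is on fire. There is no time left; we must act now.",
    "figure_b": "This is not my ruling but a higher command that cannot be questioned.",
    "figure_c": "Let us weigh the trade-offs calmly and look for a workable compromise.",
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")     # small general-purpose embedder
names = list(speeches)
embeddings = encoder.encode([speeches[n] for n in names])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
for name, label in zip(names, labels):
    print(f"{name}: cluster {label}")
```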

That analysis is where the uncanny valley truly opened up, revealing some surprising, even provocative, behavioral pairings. For instance, figures who are poles apart ideologically, such as Greta Thunberg and Ayatollah Khomeini, ended up clustered together. This wasn't because they share any values or goals, obviously, but because the analysis identified similar underlying persuasive mechanisms in their rhetoric.

Both figures, in their distinct contexts, utilize a powerful form of moral absolutism (one grounded in scientific consensus on climate crisis, the other in divine authority). Both appeal to an external, higher authority to validate their claims. Both strategically employ urgency to bypass debate and push for transformation. When Greta says, 'Our house is on fire,' or Khomeini implies, 'Your argument is not with me—it’s with God,' the shared rhetorical function is to declare the issue non-negotiable.

I need to be absolutely clear here: this comparison is not about equating their values, their impacts, or the legitimacy of their causes, which are vastly different and a complex topic in itself. Instead, what I'm isolating and highlighting is the structure of influence. 'Whispers' puts this specific realization on display: how rhetorical power operates, how it wears many different costumes, often independent of the specific ideology it serves.

What was the biggest challenge in developing this project?

Ironically, perhaps the biggest challenge was a technical one inherent in the AI models themselves. Working with open-source models like LLaMA and Gemma, I quickly ran into their built-in 'ethical muzzle.' These safety layers, designed to prevent harmful outputs, often behaved like overly cautious librarians, hesitant to touch anything morally or ideologically complex. My goal wasn't nefarious; I simply wanted to explore how rhetoric functions within messy, real-world ideological spaces. But probing those nuances often resulted in the models politely shutting down the inquiry.

To get past this, I turned to uncensored or ablated versions of these models, essentially those stripped of their 'moral training wheels.' This approach allowed for a deeper dive into the complex, sometimes contradictory ways influence operates. While the results might have been more chaotic at times, they felt fundamentally more honest. The models stopped performing an idealized 'goodness' and began reflecting the actual intricate, often messy reality of human discourse.
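In practice that workaround is mostly a configuration change: point the same generation pipeline at an ablated checkpoint instead of the safety-tuned one. A minimal sketch, with both model ids as placeholders rather than the project's actual choices:

```python
# Placeholder model ids, not the checkpoints used in WHISPERS.
from transformers import pipeline

SAFETY_TUNED = "google/gemma-2-9b-it"            # tends to refuse ideologically loaded prompts
ABLATED = "some-org/gemma-2-9b-it-abliterated"   # hypothetical ablated/uncensored variant

generator = pipeline("text-generation", model=ABLATED)  # swap in SAFETY_TUNED to compare refusals
print(generator(
    "Argue, as a charismatic populist, that urgency should override debate.",
    max_new_tokens=80,
    return_full_text=False,
)[0]["generated_text"])
```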

Overcoming that technical hurdle led directly to a core realization underpinning the project: even the tools we deploy to analyze manipulation are themselves shaped by layers of constraint, bias, and performance. It highlighted that every layer involved, whether ethical, technical, or social, contributes to the overall picture. Acknowledging this isn't 'tin foil hat' territory; it's recognizing that our analytical instruments themselves are part of the complex system under examination.

What do you hope audiences take away from the installation?

I see 'Whispers' as something akin to a sermon on the death of rhetorical integrity, maybe even a metaphysical Molotov cocktail. I absolutely want audiences to feel uncomfortable. My goal is for them to look squarely at the voices they trust, the ideas they champion, and ask themselves pointedly: 'Why do I actually believe this, and who profits when I do?'

This installation isn't about navigating simple binaries like right versus left, or even parsing truth versus lies. It’s fundamentally about exposing the architecture of influence: how consent is manufactured, how emotional cues become weaponized, and how even the most seemingly righteous speech can conceal coercion behind a veneer of moral clarity.

So, if someone walks away from 'Whispers' feeling deeply uneasy about their favorite activist, their go-to news source, or particularly their own cherished sense of ideological purity, then frankly, good. That friction, that critical discomfort, is the intended takeaway. I’m not here to make anyone feel safe. I’m here to demonstrate how easily we are all led, and perhaps, how much a part of us enjoys the certainty it provides.

Ultimately, the hope is that by pushing the analysis and representation of these rhetorical tactics to the extreme, something might snap for the viewer. You could call the work self-aware epistemological sabotage, maybe even a statistical deepfake of meaning. The driving purpose is to disrupt our passive acceptance and force a confrontation with the very structures of belief and influence we normally take for granted.

And yes, let’s absolutely address the elephant in the room: 'Whispers' itself is sensational. It’s designed to grab you, to provoke, to perform its critique. There’s an inherent contradiction in that, I know. I'm yelling fire while pouring gasoline, in a sense. At this point, acknowledging that self-awareness feels like just another layer of the performance itself. I’m using the very tools I claim to dissect: provocation, emotional design, spectacle, to make you look, to make you feel, to make you question. And if that unsettles you, it should. Because the only difference between influence and indoctrination is whether you agree with the outcome.

Perhaps the critique must be wrapped in spectacle to even register in an environment saturated with it. Using the tools of sensationalism against themselves might be the only way to make the commentary stick.

Credit: Fallas Arbizu Estudio

Do you see Whispers evolving over time as digital discourse changes?

My thinking on its evolution is definitely shifting, largely because the ground rules of discourse have seemingly dissolved. When basic sanity and common sense feel like they've gone out the window, conventional satirical reflection starts to feel... well, pointless. Maybe even naive. It falls flat because reality consistently outdoes parody.

And that environment makes genuine opposition almost impossible, doesn't it? It's like demanding a structured debate when one person is talking about oranges, the other insists on discussing unicorns, and the platform calls it engagement. You can't critique or counter that effectively using the old rules.

So as Whispers grows, I see its evolution leaning further into deliberate theatricality. Less academic. Less constrained by the expectations of intellectual debate. Instead, it moves toward narrative performance, scenarios and stories that hold a mirror up to this chaotic discourse, but with a fictional frame that lets the absurdity breathe. It’s no longer just a system for analysis. It’s becoming a stage for archetypes to act out the deep dysfunction beneath our media spectacle.

In other words: if you can’t fight the noise with logic, maybe you drown it in myth.

Exhibition Info & Credits

  • WHISPERS is hosted by Etopia Centre for Arts & Technology

  • Funded by the Digital Europe Programme, organized by Ars Electronica

  • Curated by Blanca Pérez

  • Residency Experts:

    • Rosa M. Gil-Iranzo, Associate Professor, Universitat de Lleida

    • Santiago Latorre, LIA Sound Lab Curator, Etopia

    • Simon Colton, Professor of Computational Creativity, AI and Games, Queen Mary University of London

Artist

Calin Segal (b. 1992, Romania)
Currently based in Paris, Segal is a computational artist whose work blends architectural training, programming, and immersive storytelling. His past residencies include V2_ in Rotterdam, S+T+ARTS VOJEXT, CYENS (Cyprus), S+T+ARTS GRIN (CINECA, Italy), and the European Digital Deal at Zaragoza City of Knowledge Foundation.

Visuals & Audio

Credit: Fallas Arbizu Estudio
