
The ghosts in the magic (?) machine

Essay by Roosje Klap

21 Jan 2025
Nadia Piet: Slow AI, digital collage, 2024

In the mist of a winter morning, Martine watched a flock of birds dissolve into pixels, their shapes glitching as if slipping beyond the frame of reality. The uncanny valley didn’t announce itself in a headset or VR goggles but here, on the damp edge of the everyday.

She thought, perhaps, she had stepped into the algorithm’s dream—a reality broken into data points, a hyperreal projection. Was this her reality shifting? Jean Baudrillard once described this as the moment when the map overtakes the territory, when the simulation displaces the real so completely that neither remains. His Simulacra and Simulation warned of a world swallowed by its representations. (1) Now, with tools like DALL-E and Midjourney accessible to everyone, the warning feels prophetic. These systems don’t represent reality—they fabricate plausible lies, reassembling fragments of training data into a seductive fictional game engine. They don’t mimic life; they become it. Each frame simulates a world, governed by its internal rules yet eerily familiar, reflecting back our own logic and desires. The dissolving birds in the mist uncannily show that the algorithm doesn’t just watch us; it watches for us, rewriting perception itself into something distorted.

Artist and researcher Martine Stig’s (NL, 1972) practice probes these entanglements, straddling the physical and digital, the archival and the speculative. Her collaborative project Wealgo (2020) layers an algorithmic ‘digital ball masqué’: bright dots, colorful lines, and coded distortions mapped onto human faces. These marks—legible to machines, cryptic to us—create a dual image. What the machine sees and what we see are not just different; they conflict. What does this say about photography? Once a medium of memory and truth, it now serves as bait for data extraction. Stig’s critique is sharp: photographs are no longer proofs of presence but engines of predictive power. Her experiments with ultraviolet and infrared photography and film in her series Art for Machines reveal images in which the body becomes ghostly, reduced to what sensors can register—light wavelengths invisible to the human eye. The result is unsettling: a body as the machine perceives it, spectral and otherworldly, standing at the edge of visibility and erasure.

affect lab, Prospektor, Babusi Nyoni, et al,

Creative technologist Babusi Nyoni’s (ZW, 1988) work expands this critique into the domain of representation. His 2019 research project blackgirlhair.js, which trained a machine-learning model to recognize Black hairstyles and give Black women hair tips depending on the weather forecast, highlights the internet’s structural bias—a space where Black identity is pushed to the periphery. More than a technical achievement, it is a counter-narrative, an attempt to reclaim a space where marginalized identities are typically erased. Nyoni tackled another form of erasure in the installation Ctrl.Alt.Img., a collaboration with Çiğdem Yüksel, affect lab, and Prospektor, on show in the Noorderlicht exhibition Pixel Perceptions. Yüksel had already concluded from her research that the faceless, dehumanizing portrayal of Muslim women in Dutch media was not neutral.

These images—silhouettes, backs turned, always obscured—are deliberate, reflecting societal anxieties and biases. The project reveals how these representations have seeped into AI systems. When tasked with generating images of Muslim women, the algorithms replicated the same tropes—proof that the machine doesn’t just learn, it inherits prejudice. The installation forces participants to confront this inheritance: visitors enter a photo booth where AI generates a new image of them based on biased datasets. One visitor, an 85-year-old woman, left clutching a distorted photo of herself, shaken by what the machine saw—a fractured projection of her identity. She refused to accept it. Nyoni’s work is therefore important: it exposes AI’s complicity in perpetuating racism and anti-Blackness while also questioning the white-supremacist obsession with phenotypical ‘truths’ that machines claim to reveal.

Researcher and artist Daniel Leix Palumbo (IT, 1994) shifted the focus from the visual to the sonic, exposing how power manifests in the extraction of voice. At the German Federal Office for Migration and Refugees (BAMF), asylum seekers undergo voice-biometrics tests designed to analyse their accents as a way to determine their country of origin. The process assumes that voices are fixed, like fingerprints. But voices, like identities, are fluid—shaped by migration, by our environments of socialisation, by emotional and physiological variables, by adaptation; they make us “us”. Leix Palumbo described this as a ‘post-comprehension strategy’, in which algorithms bypass narrative accounts and claim instead to decode truth from acoustic patterns. (2)

The consequences can be severe: an accent misclassified by the machine can lead to deportation. Here, the voice becomes a site of surveillance, reduced to data points, stripped of narrative and humanity. These are not neutral tools but instruments of control, part of a broader surveillance apparatus that prioritizes efficiency over empathy. How do we reclaim the voice as a site of agency rather than extraction?

Nadia Piet: Slow AI, digital collage, 2024

Designer and creative coder Nadia Piet’s (NL, 1993) response to such questions is radical: slow down, a stance possibly derived from her yoga practice. Her call for ‘slow AI’ resists the industry’s obsession with speed and scale, mirroring the ethos of slow food or slow fashion. Piet’s projects, like Esoteric AI, reframe machine learning as an intuitive, almost mystical practice—likened to ancient divination. Algorithms, like omens, are partial and interpretive, their outputs open to reimagination rather than blind acceptance. Piet described projects in which small-scale AI models were trained to preserve endangered languages or support marginalized communities. Contrast this with the current pursuit of artificial general intelligence (AGI), which promises a ‘safe’ and ‘beneficial’ system for humanity, for the measly price of seven trillion dollars, according to Sam Altman. (3) Yet AGI is an undefined construct that cannot be meaningfully tested for safety, raising the question: should it even exist? Beneath its rhetoric lies a troubling legacy rooted in Anglo-American eugenics, perpetuating racism, ableism, and inequality while centralizing power.

Timnit Gebru and Émile P. Torres argue that instead of pursuing this speculative ideal, researchers should focus on tangible, defined problems where safety and accountability can be meaningfully achieved—a new utopia, even. (4) Piet’s ideas offer a vision of AI rooted in care and reciprocity rather than extraction and exploitation. Humanizing computers as tools to counter technological anxiety and profit maximization is becoming increasingly important. Instead of Facebook’s early motto, ‘Move fast and break things,’ we should ‘slow down and heal things.’ Slowing down can be a powerful weapon against the overwhelming technological hype. By reassembling the parts, we can create new stories that allow for radical multiplicity. Finding this position within post-signature collaborations enables us to see universality not as a preexisting truth, but as a shared world built from diverse elements. By slowing down the digital flow, we create space for reflection, connection, and a touch of magic.

In the midst of these provocations, one question lingered as I drove back home in the mist: are we haunted? Haunted by colonial histories, by the exploitative logic of capitalism, by the ghosts of technological determinism? The machine, in its surveillance and synthesis, doesn’t just mirror us; it absorbs and amplifies the biases we refuse to confront. Baudrillard might argue that in this hyperreal state, there is no turning back. But Stig, Nyoni, Leix Palumbo, and Piet suggest otherwise. Their work doesn’t merely critique; it imagines new possibilities: photographs that challenge the vision of the machine, algorithms that amplify marginalized voices, and systems designed not to predict behavior but to foster care. To confront the ghosts in the machine is to confront ourselves. Only then can we begin to dismantle the hyperreal—and build something real in its place.

-

The symposium Digital Mirror: Machine Visions with Martine Stig, Babusi Nyoni, Daniel Leix Palumbo & Nadia Piet was organized by Noorderlicht in close collaboration with Lectorate Image in Context (research line Art & Technology), part of Research Centre Art & Society (Hanze) on Thursday 9 January 2025.

Footnotes:


  1. Jean Baudrillard (1994). Simulacra and Simulation (S. F. Glaser, Trans.). University of Michigan Press. (Original work published 1981)
  2. Mark Andrejevic (2013). Infoglut: How Too Much Information Is Changing the Way We Think and Know. New York: Routledge.
  3. Business Insider (2024, February). ‘How much Sam Altman wants: $7 trillion — OpenAI's plans compared to other big costs in history.’ Retrieved from https://www.businessinsider.com/how-much-sam-altman-wants-seven-trillion-openai-compare-costs-2024-2
  4. Timnit Gebru & Émile P. Torres (2023). ‘The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence.’ First Monday, 28(9). https://doi.org/10.5210/fm.v28i9.13636