Essay

AI as Alteration, Not Augmentation

On another way of working with machines

Ismaël Joffroy Chandoutis
March 2026

Everyone seems to agree on what AI does for artists: it speeds things up, it extends capacities, it removes friction. You give it a task, it gives you back a better version of what you already had in mind. This is the dominant paradigm — augmentation. More, faster, cleaner.

I have been working at the intersection of cinema, contemporary art, and artificial intelligence for several years now. And the more I work with these systems, the more I find this framework not just incomplete, but actively wrong — at least for what I'm trying to do.

What I want from AI is not amplification. I want displacement.


The Problem with "More"

Augmentation assumes you already know where you're going. You have a direction, a vision, a project — and the machine helps you get there more efficiently. This is useful. It is also, creatively speaking, a kind of dead end. If the tool only reinforces what you already think, you never leave the territory you know. You produce more work faster, but the work stays inside the boundaries of your existing imagination.

When I was making Swatted — a film built around the practice of false emergency calls, police raids streamed live on gaming platforms — I wasn't looking for a system that would help me edit more efficiently or generate better visual effects. I needed something that could introduce the unforeseen. That could surface connections between online culture, police violence, and spectatorship that I hadn't consciously mapped.

Augmentation would have given me a cleaner version of my initial ideas. What I needed was alteration — something that shifts the gesture before it's completed.


What Alteration Means in Practice

To alter is not to correct. It is not to optimize. It is to introduce a foreign element that changes the direction of what's already in motion.

In practice, this looks like: a model surfaces an unexpected formal association between two scenes I had separated for logical reasons — but whose proximity creates something stranger and more precise. Or it generates a piece of text that is technically wrong, factually unreliable, but reveals a register I hadn't considered for the narration. Or it proposes a structural reading of the archive material that contradicts my intuition — and the contradiction is the point.

In Virtual Kintsugi, I worked with AI systems on the question of damaged and reconstructed images — the idea of repair as transformation rather than restoration. The tool wasn't filling gaps; it was reinterpreting them. Each AI-generated reconstruction was not a return to an original but a proposition: here is one possible surface for what cannot be recovered. That distinction — between restoration and proposal — is where alteration lives.

This is very different from asking a system to do something and evaluating whether it succeeded. It requires a different posture: one in which the output of the machine is not a solution but a pressure. It pushes on the work. You respond to it. Sometimes you reject it entirely, but the rejection itself has clarified something.


Liquid Writing as Method

The methodology I've developed for this kind of practice is what I call liquid writing. The name tries to describe the actual texture of the process: not sequential, not layered in the conventional sense, but permeable. Phases bleed into each other. Research contaminates the edit. Writing continues during the shoot. An algorithmic accident from three months ago resurfaces and becomes a structural principle.

The idea of liquid writing is partly a resistance to the pipeline model of filmmaking, which separates research, development, writing, production, and post-production into distinct compartments. In that model, AI enters cleanly at the end — in post, as a finishing tool. Liquid writing refuses that. It means the machine is present throughout, not as a service but as a kind of ongoing pressure on the material.

Maalbeek — a film about the 2016 Brussels metro bombing, built from the testimonies of survivors and the reconstruction of an event that cannot be directly filmed — required this kind of permeability. The impossibility of access to the event itself became generative. The film was, in some sense, still being written in the edit.

AI-assisted processes work similarly. They resist the logic of specification and delivery. They are more productive when treated as a kind of unstable material — something you work with, against, through — rather than a system you instruct.


Post-documentary and the Hidden Layers of Reality

Post-documentary practice, as I understand it, proceeds from a specific observation: reality is not simply the surface of visible events. It is made of strata — data, images, signals, traces — that constitute what is actually happening at least as much as what can be directly witnessed. The task of the filmmaker is not to document the surface but to reveal the structure underneath.

This is where AI becomes particularly interesting, and particularly unstable. These systems have been trained on the accumulated image-world — on enormous archives of human representation. When they generate or analyze visual material, they are working inside a kind of compressed history of how things have been seen.

For work like The Goldberg Variations — a project I'm currently developing, tracing the trajectory of Joshua Ryne Goldberg, an internet provocateur who moved through extreme ideological spaces online — AI systems offer something specific: the ability to navigate massive archives of online trace, to detect patterns in language and behavior that operate below the threshold of conscious reading. Not to explain the subject, but to map the topology of the spaces they moved through. The machine doesn't understand; it detects. And detection, when properly framed, can be a cinematic act.


Against Fetishism, Against Nostalgia

I want to be clear about what this position is not.

It is not technophilia. I'm not interested in AI as spectacle, as demonstration of capability, as proof that a machine can do something surprising. The technological gesture for its own sake is boring and, ultimately, a kind of conservatism dressed as novelty. The question is never what the machine can do. The question is what the work needs.

And it is not nostalgia. There is a persistent strain of anxiety in cinema culture about what AI will do to craft, to authorship, to the artisanal value of the image. I don't find this anxiety particularly productive. The boundaries of what cinema is have always been disputed, always been expanded by new technical means.

What I'm proposing is simpler, and stranger: a working relationship with AI that is oriented toward productive friction. Not comfort, not efficiency, not the amplification of existing intent. A collaboration in which the machine can genuinely surprise — and in which that surprise is not an error to be corrected but a material to work with.

The work that interests me most is the work that couldn't have been made by me alone, or by the machine alone. It exists in the interval.