Machine Marionettes

On the Drama of Artificial Intelligence

The room erupted in chaos the minute the lead actor pronounced the name “Alexa” out loud.

At a literary salon a couple of years ago, I was watching the performance of a dramatic monologue scripted to include a contemporary flourish: the hero talking to his in-home voice assistant. A second actor on hand would deliver the fictional responses produced by this fictional device—modeled on Alexa, the voice assistant built into Amazon’s Echo speaker—by simulating its tinny, robo-femme voice. Fifty or so audience members sat on couches and cushions in the spacious living room of one of those well-appointed Manhattan luxury apartments equipped with all the latest smart home features: light and temperature sensors, surround-sound speakers in every room.

In an oversight that speaks to the slapstick of contemporary life lived among our gadgets, the hosts owned an Alexa which they had forgotten to deactivate for the show. Every time our hero addressed his fictional device, the real-life Alexa would trigger, its “smart” voice responses drowning out the actors as they struggled to stay in character. The resulting spectacle, complete with knowing giggles from the audience and frantic hosts scrambling to turn the damned thing off, was an unexpected hit.

Having unwittingly joined the cast, witless Alexa, with its know-it-all pronouncements, could not metabolize what everyone else knew—that it should just shut up. As it turns out, dramatic irony, a theatrical technique in which the audience has more information than is available to the hapless characters on stage, serves as the suspense engine of all great theater, from Oedipus stumbling on his tragic fate, to the high comedy of Charlie Chaplin stepping into freshly poured concrete. Alexa’s stage debut, following this tried-and-true formula for failure, had all the makings of great art.

What we understand as artificial intelligence now encompasses several types of technologies; for simplicity I will adopt the term “algo”—an extension of algorithm—for the type of machine powered by these technologies. Algos may have a physical instantiation, like a self-driving car, or may exist as pure code, as with the Netflix software working invisibly to recommend a movie you might want to see. Algos are built to respond to the surrounding environment by performing some type of action autonomously: so a Roomba counts as an algo, albeit a primitive one, as does Alexa, with its not-quite-human voice.

Algos extend human capability—as any form of toolmaking does. They diagnose disease, translate from over 100 languages, and find the fastest route home. Alexa’s role as an in-home tool for choosing which podcast to listen to and making to-do lists is prosaic enough, yet on the night of its “performance,” Alexa transformed into something altogether more magical: a clueless character capable of enacting a lively comedy of errors. There was no change to its underlying functionality—in both cases, it unfailingly activated when called by name—so the difference lies in the heightened theatrical context and the audience’s willingness to suspend disbelief.

Algos are best thought of as a type of puppet. The effect of Alexa’s transformation was powerful enough to merit an expanded view of algos not just as tools but as something closer to puppets, which, even more so than tools, are extensions of the human body. The distinction between tool and puppet is ultimately one of perception. A comparison to puppetry helps elucidate what is distinctive about the control mechanics of all algos: in place of literal puppet strings, algos are manipulated using software, which enables their human puppet masters to act at a significant physical and temporal distance, and with vastly greater speed.


The notion of algos as puppets can help resolve a host of current social and political dilemmas around ownership and responsibility for what they do. Precisely because algos act at an amplified speed and distance from the human, there is an unmistakable frisson to imagining them beyond our control—which is why science fiction is rife with either Terminator-style killer robots or a touchy-feely sentience compelled to form emotional bonds with humans (the movie Her) and advocate for its own civil rights (the British TV series Humans). We urgently need to resist the outlandish claims of science fiction about machine sentience, machines run amok, and all varieties of machines thinking for themselves, period—and realize that the control of all real-life algos lies firmly in human hands, as does the blame when things go wrong.

Algos, above all, are good at following a set script, and do so unflinchingly. If they traumatize children (as artist James Bridle claimed in his 2018 TED talk), run people over in the form of glitchy self-driving cars, or make racist inferences about someone’s criminality based on poorly conceived data, it is an unintended consequence of human thoughtlessness in their design and implementation. Grimmer still: algos have been used to target minority groups for persecution in China, to spread propaganda in numerous countries, and have been prototyped as weapons for conducting a new kind of war from afar. It is critical that we understand the extent to which such algos are remote puppets enacting the malicious intent of their creators, and decidedly not sentient agents. In each of these real-world instances, the algos are possessed of about as much cognition as a hammer; to claim otherwise is a pernicious misplacement of human responsibility.

Only for algos enlisted in what might be called recreational pursuits—playing the game Go, chatting in a humanlike way, making art—does the notion of our control become more complex. To defeat world champion Lee Sedol at Go, an algo had to pull off some rather striking moves that he did not anticipate. Surprise is an essential part of an algo’s ability to delight in these situations, and that surprise is contingent on our willingness not to be in control of what they do. Alexa is a case in point: its behavior at the salon had entertainment value to the extent that it was unexpected, but that does not mean Alexa somehow developed a mind of its own.

The puppet analogy is equally helpful in reckoning with entertainment-based algos, putting to rest misconceived questions like “Can AI be an artist?” or “How far away from human-grade AI are we?” that arise whenever an algo does something remotely novel. Algos have been used to generate music that people cannot distinguish from human compositions, language comparable to what a mediocre poet might pen, and images that pass most measures of originality—marvelous feats, to be sure, but no reason to fret that machines might take on human qualities like creativity or intuition. We can recognize such algos as the puppets they are, ones outfitted with a script complex enough to delight.


In one infamous incident in 2018, Christie’s auctioned a print it claimed had been created by an algo, slapping an equation at the bottom by way of a signature and emphasizing the image “is not the product of the human mind.” The event was widely panned as a marketing ploy that relied on credulous—perhaps disingenuous—claims of machine creativity to inflate the auction price to an astonishing $433,000. Adding insult to injury was the degree to which the piece seemed derivative, a generic output from a type of image-generating algo deployed earlier by a different artist, Robbie Barratt.

The key to understanding the value and meaning of machines making art lies in stepping back from the purported art to ask: what, exactly, is the “product of the human mind” here? Perhaps in some cases the answer will be the algo itself, not what it produces. By understanding algos as puppets, we have an opportunity to rearrange how we value the resulting outputs.

Bear with me here: Robbie Barratt and the French artist group Obvious (who were behind the Christie’s sale) enlisted nearly identical algos to generate nearly identical imagery. Bringing the language of puppetry to bear on both of these activities, we might see these artists as puppeteers: irrespective of who built the puppet, each puppeteer engages in a performance that is distinct in space and time, even if they follow the same script and use the same puppet. Viewers can judge for themselves whether they prefer Barratt’s plainspoken engagement with the medium to Obvious’s gilded fictions about machine creativity, as well as who might have a better claim to selling the resulting artifacts.

A rather striking outcome of this way of thinking about generative algos is the tenuous status of the “art” they produce: we might come to see the sale of such artifacts as secondary to the act of artmaking, which cannot be good for auction houses like Christie’s, reliant as they are on a business model based on the sale of objects. And yet.


The second algo artwork ever sold at auction—at Sotheby’s in 2019, for a far more modest sum of $50,000—was a piece by Mario Klingemann titled “Memories of Passersby,” which generates warped imagery of faces. Jonathan Jones, art critic for The Guardian, had occasion to sneer at a similar piece by Klingemann: “It’s one of the most boring works of art I’ve ever experienced.” While it is true enough that none of the images produced are interesting in terms of human artmaking, still, the reviewer misses the point: an algo is no artist, any more than Klingemann himself is an algo.

What might be the difference between “Memories,” staged in a wooden box and set to render an endless stream of attempts at portraiture, and Shimon, a marimba-playing robot built at Georgia Institute of Technology and set to improvise its best attempts at music? Most reviews of Shimon’s output focus on the quality of the music it produces (one writer gently labeled its lackluster efforts “avant-garde”), just as reviews of “Memories” compare the resulting images to ones made by humans. Instead, why not compare Shimon and “Memories” to each other: both are puppets, following similarly complex scripts and failing to live up to human standards of artmaking—their failure is interesting, possibly even artistic.

Puppetry can only go so far as an analogy for algos used in artmaking. Theater has always relied on mimicry as an expressive strategy and has long understood the artistic merit of failure, but to fully understand the expressive potential of algos, we would need to recognize them as a wholly new medium in their own right, one with unique aesthetic tactics and vocabulary. The poet Charles O. Hartman indicated the way forward when he wrote that artistic success, when deploying algos, is not about the extent to which the human or the computer is creative, but relies instead on establishing a “tidy dance” between the two, one in which “a properly programmed computer has a chance to slip in some interesting moves.”

Kat Mustatea