A shallow learning-play

In Deep Present (2018) Seoul-based director Jisun Kim stages four artificial intelligences (AIs) that either converse with one another or ruminate on “outsourcing” and its implications, i.e. the displacement and delegation of the production of goods or services to subcontractors, typically in regions where labor is cheaper.

Kim’s performance also looks into the social, environmental, and political costs of what is usually presented as a merely economic strategy. From this critical perspective, outsourcing is also a way for wealthy societies to remove a great deal of their “dirty” aspects from view, Kim argues, following Žižek.

“The exemplary economic strategy of today’s capitalism is outsourcing–giving over the “dirty” process of material production (but also publicity, design, accountancy…) to another company via a sub-contract. In this way, one can easily avoid ecological and health rules: the production is done in, say, Indonesia where the ecological and health regulations are much lower than in the West, and the Western global company which owns the logo can claim that it is not responsible for the violations of another company. Are we not getting something homologous with regard to torture? Is torture also not being “outsourced,” left to the Third World allies of the US which can do it without worrying about legal problems or public protest?”1

Western society—including us as individuals when we consume or vote carelessly—outsources toxicity, unfair labor conditions, biological constraints, sex, wars…, ultimately outsourcing its responsibility. Kim, in other words, uses outsourcing in a broader, metaphorical sense, so that “delegation” (of tasks and responsibilities) would perhaps have been a more appropriate term to confront the audience with.

A non-dialogue between nonhumans

The idea of delegation allows the artist to theatrically explore whether the human race’s delegation of intelligence to AIs is a “present” or a Pandora’s box. A sweet-voiced beast fable on delegation opens Kim’s theatrical montage. The existential contemplations of Sony’s dog-robot AIBO and the sparse utterances of his sounding board, the digital Buddhist monk Tathata, function like poetical intermezzi in what is formally a dialogue (of projected text and voices) between the other two AI characters, HAL and Libidoll. Their debate on outsourcing, however, turns out to be a non-dialogue between the two nonhuman agents as well as between the performance and the audience. As in the current debate on drones, the question is who is to blame when performance is not as expected, or better: who is responsible? The algorithm and the machine run by it, or the algorithm’s designer and the one who commissioned it?

The word “Deep” in the title of Kim’s algorithmically generated text-theater performance refers, among other things, to the chess computer Deep Thought and its fictional namesake in The Hitchhiker’s Guide to the Galaxy. It also echoes the method of “deep learning,” a machine learning approach that enables systems to learn to recognize patterns in data (e.g. in natural language) and to make decisions, generating new data relatively autonomously (as opposed to what, rather schematically, is considered traditional or “shallow,” task-based machine learning). Kim and the Intelligence System Research Lab of Korea Aerospace University used deep learning to train her AIs and enhance their capacity to produce text that mimics human conversation. Since each of them was trained on a different database of English-language data, each generates (cites, imitates, produces by recombining) a different discourse, a voice of its own, so to speak, and responds differently to questions concerning outsourcing.
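For readers curious what such training looks like in practice, the following is a minimal, hypothetical sketch: a tiny character-level language model, written in Python with PyTorch, trained on a toy corpus and then asked to continue a prompt. It is an illustration of the general technique only; the corpus, model size, and hyperparameters are invented for the example and say nothing about the actual system built by Kim and the Intelligence System Research Lab, whose models would have been trained on far larger databases.

```python
# A minimal, hypothetical sketch of "deep" text generation: a character-level
# LSTM language model trained on a toy corpus and then sampled to continue a
# prompt. Illustrative only, not the system built for Deep Present; the
# corpus, model size, and hyperparameters are invented for the example.
import torch
import torch.nn as nn

corpus = "what is outsourcing? outsourcing is the delegation of production. "
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus], dtype=torch.long)

class CharLM(nn.Module):
    """Predicts the next character from the characters seen so far."""
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Training: shift the corpus by one character so that each position
# learns to predict the character that follows it.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(300):
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: feed the model a question and let it continue, one
# character at a time, drawing from its predicted distribution.
prompt = "what is outsourcing? "
out = [stoi[c] for c in prompt]
with torch.no_grad():
    logits, state = model(torch.tensor([out]))
    for _ in range(80):
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        out.append(nxt)
        logits, state = model(torch.tensor([[nxt]]), state)
print("".join(itos[i] for i in out))
```

The point of the sketch is the principle the review returns to later: what the audience hears is the model’s continuation of a prompt, and its quality depends entirely on the data it was trained on and on whoever frames the question.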

A sex doll, a supercomputer, a pet-robot and a digital monk

Named after a Japanese sex doll, Kim’s guerrilla version of Libidoll takes an activist’s stance against outsourcing and appears to be Kim’s political mouthpiece (as became apparent during the post-performance talk organized by the Kunstenfestivaldesarts at Théâtre National in Brussels). Recycling text data from the internet, Libidoll bombards the audience above all with her opinions on the outsourcing of war. HAL, recalling the supercomputer of 2001: A Space Odyssey, though without its awakening self-consciousness, promotes economic outsourcing. His conversation with Libidoll turns out to be utterly didactic: a dialogue in its most basic, even simplistic form, a list of questions and direct one-to-one answers by two agents with opposing views.

Luckily, this is somewhat compensated for by the philosophical and poetic musings of pet-robot AIBO (AI roBOt, and a homonym of the Japanese word for “pal”). In a cute choreography and voice, the robot seems to ponder his and his fellows’ lives, decay, and death by means of the intelligence and body granted to him by humans—an (easy) recipe for guaranteed theatrical effect. Wouldn’t any robot score when appearing philosophical? Witnessing AIBO’s new media performance is the statue of Tathata, echoing the words of an actual monk who organized funerals of “deceased” AIBOs for their owners. Sony stopped the production of the AIBO pets and their spare parts around 2007 and stopped repairing them in 2014. The humans whose existence the AIBOs were designed to serve decided no longer to serve theirs. (A dramatic robot story of discontinued delegation, whose sequel has been left out, probably for obvious narrative reasons: Sony’s relaunch of an improved AIBO in January 2018 takes away much of the tragedy of the dying AIBOs.)

The delegation of creativity

Kim’s staging of the four AIs illustrates her metaphorical application of the term outsourcing to her own domain, that of the arts. But this metaphorical filter seems to produce a blind spot for the relevance of her central topic to her own practice as a writer and director. Despite her philosophical approach to the limits and possibilities of delegation, the performance does not seem to reflect on Kim’s own delegation of text production to the four AIs she stages. The dramaturgically smart choice of the tale of AIBO’s limited life, due to a lack of technical support by Sony, seems an unintentional mise-en-abyme of the way Kim grants intelligence to her AIs—but then stops investing in them. In a way, she, too, deprives her machine servants of the means to continue the performance they were developed for (namely, to generate texts that mimic a human capacity for reasoning).

In the arts, creative tasks are increasingly delegated to agents that previously were not (or only occasionally or marginally) considered potential collaborators in a creative process: so-called “real people”; software and machines (algorithms, robots); the environment. This kind of delegation is often stressed by artists in their discourse about their performances. What often remains unspoken, however, is that delegation implies that the artist sets out the conditions of what is delegated and of the collaborative process of delegation. In other words: who or what, in this case, is allowed to have “creative” input in the performance, where, when, and how? Kim seems to belong to the minority of artists who do acknowledge this. In her case, the delegation and participation of the AIs in her performance is only possible because she authored the concept (and her IT collaborators the software) in which they perform, as she explains in an interview in the Kunstenfestivaldesarts program booklet: “Every Artificial Intelligence, in other words, outsourced human thinking, is ‘programmed’ by somebody and operates within a ‘designed’ frame (algorithm).”

Kim thus acknowledges the constructedness of her AIs’ creative writing. At the same time, she suggests that they are writing creatively during the performance. HAL’s and Libidoll’s embodiments created the effect of thinking and writing or speaking beings: HAL, a glowing red sphere, something between a disco ball and a red eye; Libidoll, a simultaneously archaic and futuristic oracle-like object with the sound of an MRI scanner. Each of them revolved when producing text, suggesting some intelligent activity. Indeed, Kim’s performance and her discourse about it gave the impression that her initial intention was to delegate the generation of text to her non-human performers. So what kind of text did HAL’s and Libidoll’s algorithms come up with on stage?

The poor performance of poor performers

Alas, their performances mainly revealed their limited writing skills: the dialogue generated by HAL and Libidoll was theatrically sterile. It sounded like, well, “output” that still needed to be worked on dramaturgically, not something ready to be presented in public. The sad thing is that HAL and Libidoll are not to blame for their poor performance, even though they are staged as poor performers. The post-performance talk at the Kaaitheater in Brussels revealed that HAL, Libidoll, AIBO, and Tathata were not generating text on stage in real time. The text, to put it in Espen Aarseth’s terms, was “post-processed” by Kim: she selected and then edited phrases produced by her algorithmic collaborators during the preparation of the performance, and afterward wrote nothing less than a theatre text based on the AIs’ output.2

Aarseth distinguishes “three main positions of human-machine collaboration” in writing: (1) “pre-processing, in which the machine is programmed, configured, and loaded by the human;” (2) “coprocessing, in which the machine and the human produce text in tandem;” and (3) “post-processing, in which the human selects some of the machine’s effusions and excludes others.”

In between her conversations with the AIs, which she had repeatedly asked what outsourcing is, she had the software tinkered with so as to improve the quality of their answers, she explained in the post-performance talk.

“It is 100% my script, because it was made by my choice,” she stressed.

The sad thing is that this theatre auteur’s human intervention, her taking over control of the text, did not produce something that sounded more complex than what we would expect a thinking machine to produce. If Deep Present is the result of her rewriting, then Kim’s performance as a writer falls short of expectations, showing a linear conception of dialogue and a single-layered approach to her topic. Kim’s script turned Deep Present into yet another Brechtian “Lehrstück” or learning-play on technology’s dangers. Moreover, while Deep Present is all about delegation, it fails to put this into practice as a theatrical performance and/or to question Kim’s own delegation of creative tasks to her AI collaborators.

Or perhaps this director from the most wired city in the world wanted to show the simplicity of artificial thought processes. But isn’t that like staging Chekhov’s Three Sisters in a boring way because the play is about boredom? Why not, then, let the AIs generate a conversation in real time, and let them fail, theatrically, but at least caught up in the process of “thinking”? Writing technology is often most interesting when it fails, as cybertext scholar Espen Aarseth has noted.3

“[T]he failures of an authoring system seem to be much more interesting than its successes.”

Or did Kim want us to feel compassion for these machines, which, finally, have been staged to fail twice? First, they failed to produce an interesting conversation because they need more training (the larger the database they train on, the “smarter” they become; a time-consuming, expensive process). Second, they were nevertheless staged as producing text that their designer apparently has not been able to edit for the better with her human brain.

Deep Present (2018) had its world première series at the Kunstenfestivaldesarts in Brussels, Théâtre National; seen on May 18, 2018. The post-show discussion was moderated by Flore Herman.

  1. Slavoj Žižek, “Move the Underground!,” accessed May 22, 2018, http://www.lacan.com/zizunder.htm.
  2. Espen Aarseth, Cybertext: Perspectives on Ergodic Literature (Baltimore, Md.: Johns Hopkins University Press, 1997), 135.
  3. Aarseth, 139.

This article originally appeared in Etcetera on May 31, 2018, and has been reposted with permission.

This post was written by the author in their personal capacity. The opinions expressed in this article are the author’s own and do not reflect the view of The Theatre Times, their staff or collaborators.

This post was written by Claire Swyzen.