What are the implications of using one’s voice to improvise with a neural network? Composer Jennifer Walshe and artist Memo Akten present ULTRACHUNK, a neural network trained on a corpus of Walshe’s solo vocal improvisations. Here, Walshe wrangles with an artificially intelligent duet partner—one that reflects a distorted version of her own improvisatory language and individual voice.
This is documentation of the first performance of ULTRACHUNK (2018), a collaboration between Memo Akten and Jennifer Walshe. For one year, Walshe engaged in a daily ritual of improvising solo in front of her webcam, amassing hours of video and audio material. Akten created a number of neural networks, including GRANNMA (Granular Neural Music and Audio), which were trained on Walshe's improvisations. In the performance, GRANNMA navigates the hypersphere, generating ca. 20 frames of video and 44,100 16-bit samples of audio per second in real time. The video and audio are neither sampled nor processed: every single frame and sound is generated live, constructed from fragments of memories in the depths of the neural networks. The original and virtual Walshe inhabit the Uncanny Valley together, singing in duet, improvising, listening and responding to each other.
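To make the idea of "navigating the hypersphere" concrete, here is a minimal Python sketch of one common technique in generative models: latent vectors are kept on a unit hypersphere and interpolated spherically, with a decoder turning each point into blocks of audio and video at the rates described above. The latent dimensionality, frame size, and the `decode_audio` and `decode_frame` functions are hypothetical placeholders for illustration, not the actual GRANNMA implementation.

```python
import numpy as np

SAMPLE_RATE = 44_100   # 16-bit audio samples per second, as described
VIDEO_FPS = 20         # ca. 20 video frames per second
LATENT_DIM = 128       # hypothetical latent dimensionality

def random_point_on_hypersphere(dim, rng):
    """Sample a point uniformly on the unit hypersphere:
    draw a Gaussian vector and normalize it."""
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def slerp(a, b, t):
    """Spherical linear interpolation between two unit vectors,
    so the interpolated path stays on the hypersphere."""
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def decode_audio(z, n_samples):
    """Placeholder: a trained decoder would map the latent vector
    to a block of waveform samples. Here we emit silence."""
    return np.zeros(n_samples, dtype=np.int16)  # 16-bit PCM

def decode_frame(z):
    """Placeholder: a trained decoder would map the latent vector
    to a video frame. Here we emit a blank image."""
    return np.zeros((256, 256, 3), dtype=np.uint8)

rng = np.random.default_rng(0)
start = random_point_on_hypersphere(LATENT_DIM, rng)
end = random_point_on_hypersphere(LATENT_DIM, rng)

# One second of output: 20 video frames and 44,100 audio samples,
# generated block by block as the latent point glides along the sphere.
samples_per_frame = SAMPLE_RATE // VIDEO_FPS  # 2,205 samples per video frame
for i in range(VIDEO_FPS):
    z = slerp(start, end, i / (VIDEO_FPS - 1))
    frame = decode_frame(z)
    audio = decode_audio(z, samples_per_frame)
```

Spherical rather than linear interpolation is the usual choice here because Gaussian latent vectors concentrate near the sphere's surface, so a straight line between two points would pass through low-density regions the model was never trained on.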