In 2017, I began to explore the frontiers of AI as a creative collaborator and playmate. I wondered, what are the limits of synthetically guided creativity? What kinds of images can evolve when we play with technology analogous to the eye (the camera) and the brain? To explore these questions, I created a Recurrent Neural Network (RNN), which can generate its own outputs from datasets. The BBC offers one such dataset: more than 16,000 sound effects published online for public use. Each sound effect is given a prosaic, descriptive title, such as ‘Shouts of encouragement at a wrestling match’ or ‘Large bird taking off’. This dataset provides me with the raw material to play with how image and language can signify each other under the influence of a neural-like AI system.

I feed my RNN the BBC’s descriptive text and it engages in a game of probability to create its own outputs, with surprising results. ‘River Pigs Loading’, ‘Pale Windy Grimace’ and ‘Comedy Gold, Before Scream’ are a handful of its creations. These nonsensical failures produced by early RNNs are the happenstance, poetic absurdities I seek to architect. I then playfully create images that visually articulate an interpretation of each output’s linguistic meaning. In other words, I move from sound, to text, to AI, to text, to image.
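The text-in, invented-titles-out game described above can be sketched in miniature. A real RNN needs a deep-learning framework and the full BBC corpus, so this toy stands in with a character-level Markov chain instead, trained on a handful of the titles quoted above: it learns which character tends to follow each three-character fragment, then samples new title-like strings one character at a time. This is an illustrative substitute, not the author's actual model.

```python
import random

def build_model(titles, order=3):
    # Map each n-character fragment to the characters observed after it.
    model = {}
    text = "\n".join(titles) + "\n"
    for i in range(len(text) - order):
        gram = text[i:i + order]
        model.setdefault(gram, []).append(text[i + order])
    return model

def generate_title(model, order=3, max_len=40, seed=None):
    # Start from a fragment that opens a title, then sample forward
    # character by character until a title boundary or the length cap.
    rng = random.Random(seed)
    gram = rng.choice([g for g in model if g[0].isupper()])
    out = gram
    while len(out) < max_len:
        nxt = rng.choice(model.get(gram, ["\n"]))
        if nxt == "\n":
            break
        out += nxt
        gram = out[-order:]
    return out

# A few titles drawn from the text above (a tiny stand-in corpus).
titles = [
    "Shouts of encouragement at a wrestling match",
    "Large bird taking off",
    "River pigs loading",
    "Comedy gold, before scream",
    "Pale windy grimace",
]
model = build_model(titles)
print(generate_title(model, seed=1))
```

With so little training text the chain mostly replays its sources, but where titles share fragments it splices them into new phrases; the same statistical splicing, at a far larger scale, is what produces the RNN's poetic misfires.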

As the technology improves, I am engaging in an ever-challenging battle of creative authorship with the machine. Recently, a new breed of AI image generators has broadened the possibilities of magical thinking and image-making. These tools expand our understanding of how creativity can be synthesised, amplified and manipulated – and further complicate the notion of photographic veracity.