Dalí to DALLE - Dr Will Renel
We’ve been interested in technology-focused research and practice for a long time. In 2013, Touretteshero collaborated with Dr Tom Mitchell and Dr Joseph Hyde on The Alchemy of Chaos, which converted a year of ticcing episodes into music presented at a TEDx talk at the Royal Albert Hall.
In 2015, I collaborated on a performance called 10 Minutes of Nothing, which invited a small audience to experience Touretteshero ‘doing nothing’ for 10 minutes at the South London Gallery. During the performance, a piece of software we’d developed together – ‘the nothing machine’ – visualised her involuntary movements, using webcams and microphones to track them. The aim was to capture moments when ‘nothing’ actually happened (i.e. when she was completely still and silent).
In recent years there’s been a surge in AI art generators – software that uses an algorithm to create artwork based on information that you give it (such as a line of text). A few months ago, Leftwing Idiot told me about some new software he’d discovered called DALL·E 2 – a system from OpenAI that can ‘create realistic images and art from a description in natural language’. We were interested in how DALL·E 2 might respond to Touretteshero’s tics, but at the time you had to contact the company and join a waiting list to access the software. I joined the list and a couple of months later we received the big thumbs up:
‘You are invited to create with DALL·E – we can’t wait to see what you create. As one of the first to access this technology, we trust you to use DALL·E responsibly’.
We decided that it would be interesting to select tics from the Touretteshero archive using the ‘random tic generator’ and feed these into DALL·E 2. Creating artwork inspired by Touretteshero’s tics is the foundation of the gallery section of our website, but asking an algorithm to lead on the creative process of interpreting and visualising the tics was a new and exciting prospect for us all. DALL·E 2 recommends that you suggest an object, place and style (e.g. an octopus in the jungle as a line drawing) to get the most from the software. I created a longlist of around 50 tics and then steered towards those that loosely followed the object-location format. Here are some of the results:
Batman in a pastry shop in Leeds with the whole of Hufflepuff
I’m an 18th century snake charmer
All I want for Christmas is a biscuit and the building blocks of life
I have 1400 men in my pocket
There’s a cat in the garden called Barbara Windsor crawling around on its hands and knees
There’s a penguin coming down the road, dressed as a panda
Plastic bag, one day you’ll be on Antiques Roadshow
Pest control to Major Tom
International dog food awareness week, Richard Hammond speaking
The images are visually striking, funny and varied in style – reflecting the humour and creativity that is so often present in the tics themselves. Every time I entered a tic and clicked the generate button I was filled with excitement as the software made sense of the words and worked its magic to bring them to life.
Nine times out of ten I laughed out loud the moment the results appeared on screen. The algorithm seems to do a pretty decent job of finding the key objects or people (e.g. Batman, the building blocks of life, a cat called Barbara Windsor) within each tic. The simpler locations (e.g. my pocket and the garden) are clearly represented. The algorithm has a good go at visualising the more complex locations, but these tend to end up as scenes less specific than the original location suggested by the tic. For example, ‘a pastry shop in Leeds’ becomes a generic pastry shop, and ‘Antiques Roadshow’ becomes a general bidding scene or shelves in an otherwise nondescript shop.
I also experimented with generating artwork from tics that didn’t follow the object-location format suggested by OpenAI. When presented with more surreal or abstract concepts, the software still finds recognisable visual references as the basis for the artwork. However, it’s safe to say that the more abstract the tic, the more abstract the artwork. Here are a couple of examples:
Don’t overreact God, it’s only original sin!
Tortellini Tortoise and the Great Bucket Famine of Forty-Four
Having spent some time with DALL·E 2 over the past few months, I’ve found that the more surreal and abstract AI-generated artworks are often my favourites.
One final bit of good news is that from September 2022 the DALL·E 2 software is free to use without a waiting list, and from November 2022 you can integrate DALL·E into your own apps and products through a new application programming interface (API). So if you’re interested in exploring DALL·E 2 further, head to the OpenAI website.
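For anyone curious about what the API side looks like, a request is essentially just a short text prompt (a tic would work nicely) plus a couple of options for how many images to generate and at what size. Here’s a minimal Python sketch of building such a request; the endpoint URL, parameter names and the three image sizes reflect how the API was documented around its launch, so treat them as assumptions to check against the current OpenAI documentation:

```python
import json
import os
import urllib.request

# Image-generation endpoint as documented at the API's launch (an assumption
# to verify against the current OpenAI docs).
API_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(prompt, n=1, size="1024x1024"):
    """Build the JSON payload for an image-generation request.

    prompt: the text to visualise (e.g. a tic from the archive)
    n:      number of images to generate
    size:   one of the sizes the API supported at launch
            ("256x256", "512x512", "1024x1024")
    """
    return {"prompt": prompt, "n": n, "size": size}

payload = build_image_request(
    "There's a penguin coming down the road, dressed as a panda"
)

# Only send the request if an API key is configured in the environment.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # The response contains URLs of the generated images.
        print(json.load(resp)["data"][0]["url"])
else:
    print(json.dumps(payload))
```

A sketch like this is all an app needs to do what the DALL·E website does for you: send words in, get image URLs back.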
We’re excited to see and hear more about how disabled and neurodiverse people are experimenting with creative software like DALL·E 2. These tools offer a brand new way to explore and bring to life the complexities of our minds, bodies, language and experiences.