Q&A: Artist Dr Alexandra Daisy Ginsberg on deepfaking the dawn chorus with AI
In late 2019, contemporary artist Dr Alexandra Daisy Ginsberg worked with Faculty to produce ‘Machine Auguries’, a unique immersive art installation commissioned for Somerset House’s exhibition ‘24/7: A Wake-Up Call For Our Non-Stop World’. Using AI, we helped her to recreate the sound of the dawn chorus, in which thousands of birds of different species sing at sunrise to defend their territory and call for mates. The exhibition ran from October 31, 2019 to February 23, 2020.
What led you to focus on the dawn chorus as your latest exhibition topic?
The exhibition is about the effects on us of our modern non-stop, 24/7 lifestyles, but for this commission I really wanted to think about how our choices affect other species before thinking about how that impacts us.
Birds such as sparrows, blackbirds and great tits have been found to sing higher, louder, and earlier in urban environments. Near airports, blackbirds sing for longer and modify their song; research has shown the chorus starting 23.8 minutes earlier in those environments. Echoes fade faster in built-up areas, so birds have been found to sing higher-pitched notes that may be easier to pick out. Because some bird species can’t adapt to these demands, bird populations may change, homogenise, or diminish. Our world may sound very different.
Where does AI come in here?
Birds are not just singers and mimics, they are also listeners and composers. In Machine Auguries, the call-and-response flourishes birds add to their melodies are replaced by the ‘epochs’, or training cycles, of the machine learning process.
Faculty’s machine learning researcher, Dr Przemek Witaszczyk, created an artificial dawn chorus by pitting two neural networks against each other, through what’s called a Generative Adversarial Network, or GAN. We used multiple datasets to train the GAN, including countryside dawn chorus recordings by Geoff Sample, and recordings of individual species like the dataset of great tits provided by Sara Keen at Oxford University. Dr Witaszczyk also performed a statistical analysis of chorus recordings by Richard Beard, exploring whether a discernible structure could be found in the chorus, about which little is known today.
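Schematically, a GAN trains a generator and a discriminator against each other: the discriminator learns to tell real samples from generated ones, while the generator learns to fool it. The toy sketch below is purely illustrative and is not Faculty’s actual model: a one-parameter generator learns to match a scalar stand-in for an “audio feature” distribution, with the gradients worked out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: scalar audio features drawn from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = wg*z + bg maps noise to fake samples.
wg, bg = 1.0, 0.0
# Discriminator D(x) = sigmoid(wd*x + bd) scores how "real" x looks.
wd, bd = 0.1, 0.0

lr, n = 0.05, 64
for step in range(2000):
    # --- discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    xr = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    xf = wg * z + bg
    sr = sigmoid(wd * xr + bd)  # D on real samples
    sf = sigmoid(wd * xf + bd)  # D on fake samples
    # Gradients of -log D(xr) - log(1 - D(xf)) w.r.t. wd, bd.
    gwd = np.mean(-(1 - sr) * xr + sf * xf)
    gbd = np.mean(-(1 - sr) + sf)
    wd -= lr * gwd
    bd -= lr * gbd

    # --- generator update: push D(fake) -> 1 ---
    z = rng.normal(0.0, 1.0, n)
    xf = wg * z + bg
    sf = sigmoid(wd * xf + bd)
    # Gradient of -log D(xf), backpropagated through xf = wg*z + bg.
    dxf = -(1 - sf) * wd
    wg -= lr * np.mean(dxf * z)
    bg -= lr * np.mean(dxf)

print(f"generated mean ~ {bg:.2f} (data mean is 4.0)")
```

On a good run the generator’s output distribution drifts towards the real data, which is the same dynamic that, over many epochs, turned machine noise into convincing birdsong.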
With input from the leading nature recordist Chris Watson, we then created a ten-minute narrative arc of a dawn chorus, compressed from the natural hour or so. We mixed generated clips from the machine-learning epochs with recordings of natural birdsong made by Watson. The score starts with a call and response between natural and artificial birds from the early epochs (where they sound like machines). As the chorus peaks, the artificial birdsong, which is increasingly hard to distinguish from the real, takes over, until a few natural birds return at the end. The piece is accompanied by an ambient light installation that mimics the rising dawn, from darkness to light.
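The handover from natural to artificial birdsong over the arc can be sketched as a pair of time-varying gain envelopes. Everything specific here is an illustrative assumption (the 600-second timeline, the peak position, the envelope shapes), not the actual score:

```python
import numpy as np

DURATION_S = 600  # assumed ten-minute timeline, one gain value per second
t = np.linspace(0.0, 1.0, DURATION_S)  # normalised position in the piece

# Artificial birdsong takes over towards an assumed chorus peak (~70% in),
# then recedes as a few natural birds return at the end.
artificial_gain = np.exp(-((t - 0.7) ** 2) / (2 * 0.2 ** 2))
natural_gain = 1.0 - 0.8 * artificial_gain  # natural bed never fully vanishes

def mix(natural, artificial, i):
    """Blend one second of natural and generated audio at arc position i."""
    return natural_gain[i] * natural + artificial_gain[i] * artificial
```

The crossfade starts almost fully natural, lets the generated voices dominate at the peak, and pulls back at the close, mirroring the narrative structure described above.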
In the past, you’ve primarily worked with synthetic biology to tell your stories. Why did you switch your focus to AI for ‘Machine Auguries’?
My work focuses on humans’ relationships with nature and technology. I’m fascinated by the paradox of human interest in creating new life forms, while neglecting what already exists.
I spent 10 years researching synthetic biology, where engineers and scientists are engineering or designing new life forms for human benefit. My artworks examine the ethics and politics of these emerging technologies. Increasingly, machine learning and AI are being used in synthetic biology to progress the science.
My more recent works explore how AI is also on a quest to develop artificial life, and consider the overlaps between novel digital and biological life forms created by humans.
What were some of the challenges of building such a complex piece of work?
We knew that we needed a really broad, rich dataset of birdsong if we were going to train our system to recognise and imitate the dawn chorus.
Getting hold of that data was a huge struggle! In the end, we pulled together our own dataset from open-source recordings and generous nature sound recordists. Our final dataset was made up of everything from recordings of the London dawn chorus from the British Library’s wildlife archives, to thousands of species-specific songs from the recording-sharing site xeno-canto.
But the real challenges arose when we started working with the data and putting it through the GAN. It’s an incredibly complex process as it is, but using a GAN to deepfake sound, instead of images, added another layer of complexity; the human ear is much more sensitive to variation than the eye, so it’s much easier to make a convincing fake image than a convincing fake sound. Dr Witaszczyk and his team had to experiment and innovate constantly. It was fantastic to have a scientist working with us in the studio, so we could explore and experiment together – technically and conceptually – to create something completely new.
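One common way to make audio tractable for a GAN is to train on time-frequency representations rather than raw waveforms; this is an assumption for illustration, as the source does not say which representation Faculty used. A minimal short-time Fourier transform in NumPy, applied to a synthetic rising chirp standing in for a birdsong clip:

```python
import numpy as np

def spectrogram(signal, frame_len=512, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies of a real signal.
    return np.abs(np.fft.rfft(frames, axis=1))

# Synthetic chirp: 0.5 s at 16 kHz, sweeping upwards in pitch,
# loosely evoking an urban blackbird singing higher.
sr = 16000
t = np.arange(8000) / sr
clip = np.sin(2 * np.pi * (2000 + 4000 * t) * t)

S = spectrogram(clip)
print(S.shape)  # (n_frames, n_frequency_bins)
```

Small artefacts that are invisible in an image-like spectrogram can still be clearly audible once converted back to sound, which is one concrete reason faking audio is harder than faking pictures.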
‘Machine Auguries’ will be shown in Liverpool from 20 March to 14 June 2020 as part of the exhibition ‘AND SAY THE ANIMAL RESPONDED?’ at FACT. ‘Machine Auguries’ was commissioned by Somerset House and A/D/O by MINI. Additional support came from Faculty and The Adonyeva Foundation. You can find out more about the work at daisyginsberg.com