Experimenta – AI Pavilion
Sound design with the power of man and machine.
Not a day goes by without a new application making stunning use of Artificial Intelligence (AI). Is it hype, or the promised solution to all of humanity’s challenges?
At Experimenta – Das Science Center in Heilbronn (Germany), they went looking for the story behind AI: what is it actually, how did it originate, and what can we expect from it in the future?
Together with interactive design company Yipp, museum builder Bruns and exhibition design company NorthernLight, we worked on the design of a brand-new pavilion where visitors can learn and experience for themselves how the basic principles of AI work.
By explaining the basic principles of AI, the exhibition also attempts to allay the fear that many people feel about the seemingly all-encompassing, rapid rise of this “new” technology. It shows examples of intelligent systems from the past, from before the invention of the computer: mechanical machines such as Claude Shannon’s maze, in which a mechanical mouse learns the fastest way through a maze by means of relay circuits of the kind then mainly used in telephone exchanges.
Although AI will increasingly influence our lives, the exhibition also teaches visitors that AI can ultimately only function in harmony with its user: the thinking human. It is on that human input that AI depends to provide real added value.
SonicPicnic was asked to take care of the complete audio design of the pavilion. We created the sound design for a series of installations that teach the visitor about the various basic principles of AI in a playful way.
They answer questions such as: what is machine learning? How does a language model work? How can a computer recognize drawings or photos and name what is shown?
And perhaps the most important question: what is the place of the human brain and creativity in all of this?
The design and construction of the pavilion make extensive use of natural materials such as wood, and daylight plays an important role. The result looks futuristic, but also human and soothing. We were asked to come up with a fitting interpretation of this in sound. The installations largely consist of interactive presentations with information and small games, often operated via touchscreens. We combined sounds with a natural basis, such as wood-like clicks and taps and tonal sounds from acoustic instruments, with electronic sounds, always making sure the whole continued to sound light, friendly and human.
In addition to a shared set of sounds, the installations often have their own unique sounds that stay within the same timbres, so the whole remains coherent.
At the start of the project we recorded a variety of acoustic instruments and objects into a granular sampler in a Eurorack modular system. This yielded all kinds of recordings, which were carefully cut up into very short pieces of audio. By later supplementing this natural sound base with synthesizers and short motifs, a slightly futuristic sound palette emerged in which organic, natural elements can still be heard.
Creating sound effects with modular synthesizers.
The entire exhibition is based on the idea that AI will always need human creativity and input to be of value. To express this, there is an installation in the middle of the exhibition called The Source. Here visitors can sit and rest on circular benches reminiscent of the edge of a fountain or spring. This is the place where people’s creative ideas are converted into technology and input for AI systems. It is also the source of the soundscape, which radiates from here in all directions through multiple speakers above The Source. Together with Yipp, we thought about how we could emphasize the collaboration between AI and humans in sound, resulting in an adaptive soundscape that both humans and machines influence. This soundscape had to match the atmosphere of the pavilion: light and optimistic, but also dynamic and changeable. The sounds had to be man-made, but also influenced by AI.
We started by making different layers of sound: atmospheric, continuous sounds, often based on natural phenomena such as wind and water. In addition, there were more composed, musically arranged layers with musical motifs, harmonies, chords, etc. Finally, we added several percussive layers.
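To give an idea of how such a layered setup can be organized, here is a minimal sketch in Python. The layer names, clip names and flags are purely illustrative assumptions on our part; the actual material lives as clips in an Ableton Live session.

```python
# Illustrative sketch of the three layer categories described above.
# All names and clip counts are hypothetical placeholders.

SOUNDSCAPE_LAYERS = {
    "atmospheric": {   # continuous beds based on natural phenomena
        "clips": ["wind_soft", "wind_strong", "water_trickle", "water_flow"],
        "continuous": True,
    },
    "musical": {       # composed layers: motifs, harmonies, chords
        "clips": ["motif_a", "motif_b", "pad_chords", "harmony_swells"],
        "continuous": False,
    },
    "percussive": {    # short rhythmic accents
        "clips": ["wood_taps", "granular_ticks", "soft_mallets"],
        "continuous": False,
    },
}
```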
All this material was made in such a way that it would always sound pleasant in different combinations. We used Ableton Live, a widely used Digital Audio Workstation (DAW), in combination with Max/MSP, a visual programming language focused on interactive image and sound. This made it possible to play back the created material in different ways and edit it in real time. The AI we used is ChatGPT, the well-known chatbot based on large language models.
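Max/MSP can receive control data over OSC (Open Sound Control), so one plausible pattern is a small controller that pushes parameter changes into the patch while it plays. A minimal sketch using the python-osc package; the port and the /soundscape/… address namespace are our own assumptions, not the actual patch.

```python
# Minimal sketch: sending real-time parameter changes to a Max/MSP patch
# over OSC. Requires the python-osc package (pip install python-osc).
# The port and address namespace are assumptions for illustration.

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # patch listens via [udpreceive 7400]

def apply_choice(key: str, tempo: int, density: float, brightness: float) -> None:
    """Forward one set of soundscape choices to the Max/MSP patch."""
    client.send_message("/soundscape/key", key)
    client.send_message("/soundscape/tempo", tempo)
    client.send_message("/soundscape/density", density)        # 0.0 sparse .. 1.0 busy
    client.send_message("/soundscape/brightness", brightness)  # 0.0 dull .. 1.0 bright

apply_choice(key="lydian", tempo=92, density=0.4, brightness=0.7)
```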
We then looked for external influences on the basis of which the AI could determine how these human-made parts of the soundscape would be used. Through trial and error we arrived at a number of factors that genuinely made sense, were relatively easy to measure, and ultimately ensured a suitable and good-sounding end result.
The first parameter we chose was the number of people present in the pavilion, for which we found a relatively simple measuring method: we looked at how many of the installations were in use at any given time. This gave us a pretty good idea of how busy the exhibition was. We also looked at the weather: is it sunny outside? Is it raining? Is it calm, or is the wind blowing hard? For this we used an online weather API that provided the weather forecast at regular intervals. The AI was also allowed to use the time of day, the day of the week, the month and the current season as input.
In addition, we used two parameters that were directly related to the output of two other installations in the exhibition. With the Poetry – Help Me Express installation, visitors can generate a poem with AI by entering a subject, an emotion and a poetic form. The emotion most often chosen by users of this installation was included as a parameter for the soundscape. And finally, the latest news headline in the field of AI was also used as input.
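To make the parameter set concrete, here is a hedged sketch of one input-gathering cycle. The article does not name the weather service, so the free Open-Meteo API serves as a stand-in, and the exhibition-side hooks (active_installations, top_poetry_emotion, latest_ai_headline) are hypothetical stubs.

```python
# Hedged sketch of one input-gathering cycle. Open-Meteo stands in for the
# unnamed weather API; the three exhibition hooks are hypothetical stubs
# with placeholder return values.

from datetime import datetime
import requests

def active_installations() -> int:
    """Hypothetical hook: how many installations are currently in use."""
    return 7

def top_poetry_emotion() -> str:
    """Hypothetical hook: most-chosen emotion in Poetry - Help Me Express."""
    return "wonder"

def latest_ai_headline() -> str:
    """Hypothetical hook: the most recent AI news headline."""
    return "New model writes poetry"

def gather_parameters() -> dict:
    # Current weather near Heilbronn via Open-Meteo (free, no API key).
    weather = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": 49.14, "longitude": 9.22, "current_weather": "true"},
        timeout=10,
    ).json()["current_weather"]

    now = datetime.now()
    return {
        "installations_in_use": active_installations(),
        "temperature_c": weather["temperature"],
        "wind_speed_kmh": weather["windspeed"],
        "time_of_day": now.strftime("%H:%M"),
        "day_of_week": now.strftime("%A"),
        "month": now.strftime("%B"),
        "top_poetry_emotion": top_poetry_emotion(),
        "latest_ai_headline": latest_ai_headline(),
    }
```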
We then gave the AI the ability to adjust a specific set of aspects of the soundscape, based on the parameters mentioned above. For example, the AI could choose from a number of different keys, each with its own atmosphere and emotion. Everyone knows the difference in feeling between a major (happy) and a minor (sad) key, but we also added a number of more exotic scales (for enthusiasts: Mixolydian (unstable), whole-tone (magical) and Lydian (ethereal, surprising)).
Furthermore, the AI could determine the balance between the more atmospheric sounds and the musical sounds. Wind and water sounds, for example, would only be heard if the AI chose them on the basis of the given parameter values.
The AI could also choose between different tempos, many or few notes (density), different instruments, and brighter or duller timbres.
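Taken together, this decision space can be thought of as a fixed menu from which the AI picks. A sketch with illustrative values: the scale moods come from the text above, while the tempos, instrument names and ranges are our assumptions.

```python
# The menu of soundscape aspects ChatGPT could choose from, as a plain
# data structure. Scale moods follow the text; tempos, instrument names
# and value ranges are illustrative assumptions.

DECISION_SPACE = {
    "scale": {                 # each with its own atmosphere and emotion
        "major": "happy",
        "minor": "sad",
        "mixolydian": "unstable",
        "whole-tone": "magical",
        "lydian": "ethereal, surprising",
    },
    "tempo_bpm": [60, 75, 90, 110],           # illustrative tempo choices
    "density": ["sparse", "medium", "busy"],  # many or few notes
    "instrument": ["granular_wood", "plucked", "pads", "bells"],  # assumed names
    "timbre": ["dull", "bright"],
    "atmosphere_mix": {"wind": (0.0, 1.0), "water": (0.0, 1.0)},  # level ranges
}
```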
We then summarized all this information and these options in an extensive question (called a prompt) for ChatGPT. This happens every 10 minutes, so that new variations arise regularly.
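A hedged sketch of what that 10-minute cycle could look like with the OpenAI Python client, building on the sketches above; the prompt wording, the JSON contract and the model name are illustrative assumptions, not the production setup.

```python
# Hedged sketch of the recurring prompt cycle. Wording, JSON contract and
# model name are assumptions. Uses gather_parameters() and DECISION_SPACE
# from the sketches above; requires the openai package and an
# OPENAI_API_KEY in the environment.

import json
import time
from openai import OpenAI

client = OpenAI()

def ask_for_soundscape(params: dict, options: dict) -> dict:
    prompt = (
        "You are adjusting an adaptive soundscape in a science-centre pavilion "
        "about AI. Keep the mood light, friendly and human.\n"
        f"Current situation: {json.dumps(params)}\n"
        f"Pick one value per aspect from these options: {json.dumps(options)}\n"
        "Answer as JSON with one key per aspect, plus a short 'justification' "
        "explaining your choices to visitors."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model with JSON mode works
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def send_to_max(choice: dict) -> None:
    """Stub: forward the choices to the Max/MSP patch, e.g. over OSC."""
    print(choice)

while True:  # a new variation every 10 minutes
    send_to_max(ask_for_soundscape(gather_parameters(), DECISION_SPACE))
    time.sleep(600)
```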
Once we had the system working properly, it became mainly a matter of testing and listening carefully. What choices did the AI make based on the information given, and how did that translate into sound? It took several rounds of adjustments on both the AI side and the soundscape generator: wording the prompt to ChatGPT differently, changing certain settings, and adjusting the audio material and edits where necessary. Ultimately, we arrived at a system that provided enough variation in the soundscape to suit the situation, without the sound going “out of line” through too-extreme choices.
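One safeguard of the kind this tuning points at, shown here as our own illustration rather than the installation’s actual code, is to validate each answer against the allowed menu and fall back to a safe default whenever the model strays.

```python
# Illustrative safeguard: never let an out-of-range choice reach the
# soundscape. The defaults are assumptions; DECISION_SPACE as above.

SAFE_DEFAULTS = {"scale": "major", "tempo_bpm": 90,
                 "density": "medium", "instrument": "pads", "timbre": "bright"}

def clamp_choice(choice: dict) -> dict:
    """Keep each aspect within its allowed menu, else fall back to a default."""
    safe = {}
    for aspect, default in SAFE_DEFAULTS.items():
        allowed = DECISION_SPACE[aspect]
        value = choice.get(aspect, default)
        safe[aspect] = value if value in allowed else default
    return safe
```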
The underlying technology at work!
It was surprising to see that ChatGPT, based on the given parameters (the weather, the visitors present, etc.), always came up with a well-formulated and understandable justification for its adjustments to the soundscape. To make this clear to visitors, a QR code lets them view a website on their own smartphone where this justification is visualized. In effect, the AI functioned like a DJ who reads the audience and the situation and then makes the most of the sound options made available to it. A good collaboration between man and machine!
Working on the sound design for the AI Pavilion was a great experience for us, allowing us to combine our skills in creating applied music and sound with co-creating technical solutions.
You can read more information about the exhibition here.
Would you like to discuss how innovative use of sound and music can take your project to a higher level? Please contact us!