
Using Audio Orchestrator for installation and performance

Author: Hans Tammen, Composer

Western classical composers have worked with sound coming from different places for at least 500 years, originally by positioning choirs at different locations in the church. This practice of working with spatialized sound fell out of favour towards the end of the 1800s, but resumed in the 1940s using loudspeakers. Disney's Fantasound (the sound system developed for the film Fantasia), for example, employed nine loudspeakers surrounding the audience, and another one hanging from the ceiling.

Composing with space is as important to me as working with dynamics, timbre, rhythm, melody and harmony. The practice started in the late 1990s, when I experimented with guitar pickups on both ends of the fretboard and sent the sounds individually to amplifiers on opposite sides of the room. Soon I used my laptop to work with larger multichannel systems; over the last 20 years I have worked with anything from 3 to 56 speakers.

One project idea of mine, however, never made it to the stage. Radio Panspermia was based on the panspermia hypothesis, which proposes that life on Earth may have come from bacteria distributed through interstellar space. People would carry radios, small and large, throughout the space, and I would transmit sound from my computer to these radios. This would have been an audience-participatory installation and performance, in which the audience decides from which locations the sounds are coming and in which directions they are projected. Plus, due to the different sizes of the radios, they would also determine the timbral characteristics of the sound.

I discovered the BBC's Audio Orchestrator during the pandemic, and it turned out to be an excellent opportunity to finally realise Radio Panspermia. It took me literally five minutes to get the first piece up and running. I had enough material from the last 20 years lying around on my hard drive; I only needed to make sure all voices had the same length and the files were properly numbered.
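
As an illustration of that preparation step, here is a minimal Python sketch that pads a folder of WAV stems to a common length and numbers them. The folder names and the voiceNN.wav pattern are my own assumptions, not anything the Audio Orchestrator prescribes, and the sketch assumes all stems share one sample rate:

```python
import os
import numpy as np
import soundfile as sf

SRC = "stems"         # hypothetical input folder
DST = "orchestrator"  # hypothetical output folder
os.makedirs(DST, exist_ok=True)

files = sorted(f for f in os.listdir(SRC) if f.lower().endswith(".wav"))

# First pass: find the longest stem in samples (assumes a common sample rate).
longest = max(len(sf.read(os.path.join(SRC, f))[0]) for f in files)

# Second pass: zero-pad every stem to that length and write it out numbered.
for i, name in enumerate(files, start=1):
    data, sr = sf.read(os.path.join(SRC, name))
    pad = longest - len(data)
    if pad > 0:
        # Pad the time axis only; keep the channel count untouched.
        data = np.pad(data, [(0, pad)] + [(0, 0)] * (data.ndim - 1))
    sf.write(os.path.join(DST, f"voice{i:02d}.wav"), data, sr)
```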

Composing for the Audio Orchestrator poses interesting challenges, though. Usually we work with a set number of speakers, but here you do not know how many people will log in at the same time. You also do not know where people are in the room, whether they are walking around, or in which direction they will point their cellphones. Whatever one does, the piece has to be flexible.

Which spatialization concepts work for the Audio Orchestrator? We usually distinguish between the following multichannel sound concepts:

1. In a loudspeaker orchestra the sounds are stationary; they do not move from speaker to speaker. The advantage is that one does not need special tools to create the piece; one just puts sounds on the tracks in some audio editing software. Plus, as speakers are independent of each other, one does not necessarily need to determine each location beforehand. This approach is perfect for the Audio Orchestrator, where speaker locations are unpredictable.

2. Sound objects move between speakers by employing 3-D panning tools (see the sketch after this list). Some of these tools are expensive and not available in all audio editing software. As sounds move across the room, portions of them appear in all speakers, assuming that the speaker locations are determined beforehand. Surprisingly, I had good results with this. Even if the outcome is less clear than on a fixed system, and one cannot determine the exact trajectory of the sounds, one gets a sense of sounds moving across the room.

3. Ambisonics: ideally one records a three-dimensional image of one's environment using an ambisonic microphone, and replicates that environment later on a multichannel system. The recording has to be decoded for a particular speaker configuration that is set before the decoding happens. While this should not work in an Audio Orchestrator setting, where the speaker layout is indeterminate, my ambisonic experiments still yielded a somewhat immersive environment.
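
For readers unfamiliar with these techniques, here is a minimal sketch of the machinery behind concepts 2 and 3: the equal-power pan law that moves a source between two speakers, and traditional first-order B-format encoding. Both assume a known speaker layout or source direction; none of this is Audio Orchestrator code, and the known-layout assumption is exactly what the tool cannot provide:

```python
import numpy as np

def equal_power_gains(position):
    """Gains for a phantom source between two speakers A and B.

    position: 0.0 = fully in A, 1.0 = fully in B. The gains satisfy
    gA**2 + gB**2 == 1, so perceived loudness stays constant while
    the source moves, which is what creates the sense of motion.
    """
    theta = position * np.pi / 2
    return np.cos(theta), np.sin(theta)

def foa_encode(sample, azimuth, elevation=0.0):
    """Traditional first-order B-format (W, X, Y, Z) encoding of a mono
    sample from a given direction; decoding these four channels needs a
    known speaker configuration, which phones in a room cannot give."""
    w = sample / np.sqrt(2)  # omnidirectional component
    x = sample * np.cos(azimuth) * np.cos(elevation)
    y = sample * np.sin(azimuth) * np.cos(elevation)
    z = sample * np.sin(elevation)
    return w, x, y, z

# Sweep a source from speaker A to speaker B in five steps.
for p in np.linspace(0.0, 1.0, 5):
    gA, gB = equal_power_gains(p)
    print(f"pos={p:.2f}  gA={gA:.3f}  gB={gB:.3f}  power={gA**2 + gB**2:.3f}")
```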

I adapted several older pieces for the Audio Orchestrator; in all of them I experimented with the concepts mentioned above, or a mix of them. Manifolds is my first piece written solely for the Audio Orchestrator, and it uses only the loudspeaker orchestra concept. Manifolds features synchronized instrumental gestures with a dark cinematic atmosphere, in which the voices engage in a dialogue with each other.

One challenge is, of course, the limited frequency range of smartphones. To allow for lower frequencies than cellphones provide, I created a bass drone in the main channel (sent into a sound system). All subsequent voices go to the cellphones. I used physical modeling synthesis techniques to create the sounds; those are clear and pristine, and can be located in the room more easily due to their high-frequency content.
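
In Manifolds the phone voices had little low end by design, but when adapting existing material one could achieve the same division of labour with a simple high-pass split. A sketch of that idea, assuming scipy is available; the 250 Hz cutoff and the file names are arbitrary placeholders, not values from the piece:

```python
import soundfile as sf
from scipy.signal import butter, sosfilt

def highpass_for_phones(in_path, out_path, cutoff_hz=250.0):
    """Write a high-passed copy of a stem for playback on cellphones,
    leaving the full-range original for the main channel."""
    data, sr = sf.read(in_path)
    sos = butter(4, cutoff_hz, btype="highpass", fs=sr, output="sos")
    # Filter along the time axis (axis 0 covers mono and multichannel files).
    sf.write(out_path, sosfilt(sos, data, axis=0), sr)

highpass_for_phones("voice01.wav", "voice01_phone.wav")  # hypothetical files
```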

A technical challenge was the allocation of voices. It does not make sense to have 100 different voices going at the same time, as, due to the limited projection of cellphones, one could not hear far-away sounds anyway. One can create a perfectly immersive environment with just a few voices, duplicating each voice whenever new audiences log in. Beyond the main channel, Manifolds uses just 4 different voices. So with 100 people in the room we would have each voice replicated 25 times. This worked out well as people mingle in the room, and a group of up to 4 people entering the space would always get different voices. At first I created these additional voices manually, but fortunately the team quickly implemented the "every nth" voice routine, which allows me to expand my pieces to any number of participants with just a few clicks.
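
The allocation idea itself is just modular arithmetic. A minimal sketch that mimics the behaviour described above (not the Audio Orchestrator's actual implementation):

```python
NUM_VOICES = 4  # Manifolds uses four phone voices beyond the main channel

def voice_for_device(join_index):
    """Return the 1-based voice number for the nth phone to join."""
    return (join_index % NUM_VOICES) + 1

# The first eight phones to log in get voices 1, 2, 3, 4, 1, 2, 3, 4 ...
print([voice_for_device(i) for i in range(8)])
```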

The piece has been presented both as an installation and as a performance. During the installation Manifolds is on a constant loop. To make people aware of the piece (if audiences enter the room and nothing is heard, they usually leave immediately), all voices initially come from the sound system. As soon as people log in, voices move onto the cellphones.

For the performances we projected the QR code onto a screen, and people logged in quickly. In a performance with 50 people, everybody was logged in within a minute or two, which also has something to do with the fact that in all performances people helped their neighbours get ready. In 2021 I performed Manifolds with 20 participants at Whitebox Gallery in Brooklyn and with 50 people at New York University. There were apparently over 100 people at the opening of CYFEST13 in St. Petersburg, Russia; I realised only afterwards that I had not thought of adding Google Analytics to the website to get an accurate count of participants for both the performance and the installation.

Clear directions are important. One has to tell the audience to turn their phones up to the maximum and to point them away from their ears; otherwise their sounds mask those of the other people in the room. Occasionally I had to remind someone that the piece does not work with headphones on.

There have, of course, been other projects that use audiences' smartphones for installations. The ones I have seen were usually custom apps created for the occasion, while the Audio Orchestrator pretty much works right out of the box. What sets the Audio Orchestrator apart is that all voices are perfectly synced, to the point that one can even create interlocking rhythms without any audible lag.

I have used the Audio Orchestrator with my grad students, too. I teach at various colleges in NYC, and not every college gives you a multichannel system. The Audio Orchestrator comes in handy for demonstrating multichannel sound concepts, and you can have the students create multichannel works on their own. Some go off the deep end, though: Xingmei Zhou and Jiawen Mao created a piece that used a fixed multichannel system together with the Audio Orchestrator. Sounds from a jungle came from the 21-channel speaker system at New York University's Steinhardt Program, while the audience walked around providing insect and other animal sounds through their cellphones, creating an ever-changing immersive jungle environment.
