We’re seeing a rapid blurring of studio and stage production techniques. Five years ago, going on tour meant racks of analog outboard gear. It wasn’t easy to take studio toys on the road; they’re often too delicate to survive a load-in.
Now we have digital consoles that can run hundreds of plugins simultaneously. Artists take their favorite channel strips on the road, and it’s possible to recreate a studio album live. At the same time, production techniques are moving into the virtual world. Simulated amps and sampled instruments are a cheap and accessible alternative to real equipment and players. The result is that an astonishing amount of production happens in the box, and many producers and artists are starting with similar building blocks: the same sampled instruments, plugins, and software.
From a compositional perspective, we know that pop music is relatively formulaic, and many chords and progressions are growing more similar. In fact, this is happening to such a degree that very narrow technical differences (like a 7 BPM tempo shift, in the case of EDM and trance) can define entire genres with separate fan bases and listenerships.
The goal of this project was to see whether it’s possible to take advantage of these shared building blocks and small technical differences to imagine a single live performance that can be produced in the box multiple ways simultaneously. Essentially, the band provides the basic compositional building blocks, and the computer lets us produce those blocks the way any producer would, except that instead of doing it just one way, we create three completely different constructions of the same blocks.
Disclaimer: this project was never really completed! It would be incredible to have resources and time to finish it! I’ll describe the basic pieces, as far as we got, so you can have an idea of how things fit together.
We put together a six-piece band playing silent instruments: electric guitar, bass, horn (EWI), MIDI drums, keys, and a guy on Ableton.
All the MIDI and audio from the instruments went to a computer, where a DAW gave us the flexibility to duplicate each input and create several different types of instruments for each player. All of those outputs, now the equivalent of three bands playing the same thing on different instruments, went to a mixer and also to Ableton.
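The fan-out idea above can be sketched in a few lines. This is a hedged illustration, not the actual rig: the real routing lived inside the DAW, and the player and instrument names here are hypothetical.

```python
# Sketch of duplicating each player's input to several virtual
# instruments, so one performance feeds three parallel "bands".
# The routing table and names are illustrative assumptions.

ROUTING = {
    "keys":   ["piano_sample", "fm_pad", "organ_model"],
    "guitar": ["amp_sim_clean", "amp_sim_dirty", "synth_follower"],
}

def fan_out(player, event):
    """Duplicate one incoming event to every instrument assigned to that player."""
    return [(inst, event) for inst in ROUTING.get(player, [])]
```

One MIDI note from the keys player would then drive three instruments at once, e.g. `fan_out("keys", ("note_on", 60, 100))`.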
We had two mixing engineers (myself and Charles) working on two completely different mixes of the same performance. A third person (Donovan) on Ableton could sample and loop any of the incoming audio.
The change in instrumentation wasn’t enough to create a large difference, so we tried other methods too: muting and quantizing elements of certain parts (the hi-hat, guitar, etc.). We also experimented with tempo-tracked filter gates and spectral blurring.
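As a rough illustration of one of these transforms, a tempo-tracked pattern gate multiplies the signal by a rhythmic on/off envelope locked to the song tempo, turning a sustained part into a pulsing one. This is a minimal sketch with illustrative parameter names, not the plugin we actually used.

```python
import numpy as np

def pattern_gate(signal, sr, bpm, pattern=(1, 0, 1, 0), division=4):
    """Gate `signal` with an on/off `pattern`, one step per 1/`division` beat.

    sr is the sample rate; the pattern repeats in time with the tempo.
    """
    # Samples per pattern step: one beat lasts 60/bpm seconds.
    step_samples = int(sr * 60.0 / bpm / division)
    # Which pattern step each sample falls into, wrapping around the pattern.
    steps = (np.arange(len(signal)) // step_samples) % len(pattern)
    env = np.asarray(pattern, dtype=float)[steps]
    return signal * env
```

With a sustained guitar note as input, the output chops it into sixteenth-note pulses at the song tempo.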
The band was given a combination of the separate ‘mixes’ on their headphones. They used colored hats to let us know what they wanted to hear.
We tried a new Behringer X32 mixer. It’s a pretty cool little 32-channel mixer with class-compliant I/O and OSC control of every parameter.
In order to keep track of multiple audience and band mixes happening simultaneously, I wrote a small OSC server to make a lot of common mixing tasks very fast (merge these two mixes together, copy one mix to a new mix, etc.). The X32 has 10 stereo busses, so these were used as separate band and audience mixes. Each mix for the players consisted of a template into which audience mixes could be merged.
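The "copy one mix to a new mix" helper could look something like the sketch below. On the X32, the send level of channel NN into mix bus SS is exposed at the OSC address `/ch/NN/mix/SS/level`, so copying a bus mix amounts to re-sending every channel's level to the destination bus. The `levels` cache is an assumption for illustration; a real server would first query the desk for current values, and would transmit the messages over UDP (e.g. with python-osc).

```python
def copy_mix_messages(src_bus, dst_bus, levels, n_channels=32):
    """Build (address, value) OSC messages that copy src_bus's sends onto dst_bus.

    `levels` is an assumed local cache: {(channel, bus): fader_level 0.0-1.0}.
    """
    msgs = []
    for ch in range(1, n_channels + 1):
        value = levels[(ch, src_bus)]  # cached send level for this channel
        addr = f"/ch/{ch:02d}/mix/{dst_bus:02d}/level"
        msgs.append((addr, value))
    return msgs
```

Merging two mixes could be built the same way, for instance by taking the louder of the two cached send levels per channel before emitting the messages.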
24 sets of headphones were mounted on LED strips around the band. The color of each strip corresponded with the hats the band was wearing, which also corresponded to the three audience mixes (two traditional mixes and an Ableton mix).
Results & Lessons
It turns out it’s pretty hard to get vastly different sounds out of the same performance. We learned a lot and made a lot of progress, and I hope to keep investigating rehearsal methods to see how far this can be pushed. A few thoughts, in no particular order:
More than a change in instrumentation is needed. The more transforming effects we could fit in, the better and more different the mixes were. I would like to try really going to town with a vocalist and a harmonizer, maybe producing minor chords in one mix and major chords in another. With the EWI, instruments with fairly different envelopes (like a sampled flute vs. an analog-modeled synth) were particularly powerful. We were able to cut the hi-hat in one mix, and that made a large difference. Spectral smearing is a pretty effective way to translate just about any sound into something more distant and less focused. To push even further, adding pattern gating can take something like a solo guitar line and make it sound more like a rhythmic backing synth.
It was very, very important to carefully control what the band heard. Changing their mix affects how they play. For the first rehearsal we had a mono mix for each band member. As the system became more advanced, we were able to give them stereo and more nimbly control exactly who heard what. This keeps things interesting and engaging for the players.
The music we created wasn’t very dynamic. Although it would be possible to very carefully compose a complex combination of styles and genres all played by the same people, we took the path of asking the band to improvise in a fairly static musical context. Not much changed in terms of chord progression or structure. Instead, each player spent more time exploring each mix and working together to create more of a vibe. This also allows the audience to listen to different pieces of the mix and compare them without things changing too much. Because of this, the project in its current form works much better as an installation than a performance.