Sleep No More


Mixing, Performance Systems, Sound Design

Sleep No More is an awesome show running in New York (you should go!). It's a three-hour immersive experience based on Macbeth and influenced by film noir, created by the genius theater group Punchdrunk. Recently, they asked our group at the lab to come up with some interesting extensions to the performance as an experiment funded by the UK's National Endowment for Science, Technology and the Arts (NESTA). This post is a quick brain dump of some of the things we did at Sleep No More. There is way more to the project than what is here, and this post goes into some technical detail that may not be for the faint of heart. For a basic overview, check out:

The Gizmodo Article
The New York Times Article
Pete’s Write-up for the Guardian

Media: Here and There

For the project, I made a networking and media delivery platform with the goal of connecting a souped-up, browser-based text adventure with the real-life experience of participants in the show. It was crafted from existing software and hardware glued together, because we had only a couple of weeks to build everything. I scripted a 65-channel mixer and DSP in Reaper using specially crafted XML (affectionately named BLEML, Ben Luke Elly Markup Language, after its creators) to define cues which could be run in any combination and order. The cues could play both live and pre-authored content, triggered over OSC by the story engine, which kept tabs on online and real-life participants.
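As a rough sketch of the triggering side, the story engine only had to push small OSC messages at the audio machine. The addresses, cue names, host, and port below are made up for illustration, not the real BLEML vocabulary; the example assumes the python-osc library.

```python
# Hypothetical sketch: how a story engine could fire a cue over OSC.
# The real cue names and addresses lived in our BLEML files; these are placeholders.
from pythonosc.udp_client import SimpleUDPClient

REAPER_HOST = "10.0.1.20"   # audio machine running Reaper (illustrative address)
REAPER_PORT = 8000          # OSC listen port configured in Reaper

client = SimpleUDPClient(REAPER_HOST, REAPER_PORT)

def fire_cue(cue_name, participant_id):
    """Trigger a named cue for a given online/real-life participant."""
    client.send_message("/cue/trigger", [cue_name, participant_id])

fire_cue("lobby_whispers", 42)
```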

A streaming platform (Wowza) delivered audio and video to the browser with a couple of seconds of latency on a decent 1Mbps broadband connection. We used i7 Mac minis as rendering nodes, which were responsible for switching video coming from the IP cameras, playing out pre-authored clips, encoding the mix of everything as H264, and streaming the final result to Wowza. A pair of beefy rack-mount ADK machines ran Wowza and auxiliary telephone and re-encoding servers, as well as the story engine and mask servers.
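The render nodes did the switching and compositing themselves, so there's no neat one-liner for that part. As a rough, illustrative sketch of just the encode-and-push half (not our actual pipeline), here's how a rendered mix could be encoded as H264 and pushed into a Wowza application with ffmpeg driven from Python; the file, host, application, and stream names are placeholders.

```python
# Illustrative only: encode a rendered mix as H264/AAC and push it to Wowza over RTMP.
import subprocess

cmd = [
    "ffmpeg",
    "-re", "-i", "mix_output.mov",          # the rendered mix (placeholder file)
    "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
    "-c:a", "aac", "-b:a", "128k",
    "-f", "flv", "rtmp://wowza.example.local:1935/live/roomfeed",
]
subprocess.run(cmd, check=True)
```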

All audio and video content could be sourced from pre-recorded material, traditional audio or video capture, or IP sources, and was mixed across multiple outputs for streaming based on a network control protocol. That's the innovative part of this system! Any number of inputs can be processed and then mixed to feed any number of outputs, based on a brain (in our case the story engine), and all IO can be IP-based or not. We had to put it together ourselves because there aren't products on the market that do that kind of processing and mixing with low latency. On the other end, a tweaked Flash (blechhh) client allowed us to configure buffering and streams using JS hooks, with GPU-accelerated decoding of H264.
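To make the "any input to any output" idea concrete, here's a toy sketch of the concept, not our actual control protocol or message format (which was OSC-driven): a gain matrix maps every input channel to every output bus, the brain edits entries in that matrix, and mixing is just a matrix multiply.

```python
# Conceptual sketch of a brain-controlled routing/mix matrix.
import numpy as np

n_inputs, n_outputs = 65, 8
gains = np.zeros((n_outputs, n_inputs))   # rows = output buses, cols = input channels

def handle_control(msg):
    """msg is a dict like {"in": 12, "out": 3, "gain": 0.7} sent by the brain."""
    gains[msg["out"], msg["in"]] = msg["gain"]

def mix(input_block):
    """input_block: (n_inputs, n_samples) of audio; returns (n_outputs, n_samples)."""
    return gains @ input_block

handle_control({"in": 12, "out": 3, "gain": 0.7})
out = mix(np.random.randn(n_inputs, 512))
```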

Some BLEML!

I also put together a modified Asterisk PBX which was connected to telephones on the set, MIT's SIP system, and a live actress (Megan!) playing a voice part during the performance. The system could make international calls and connect them to masks with bone conduction transducers, phones on the set, or Megan sitting in a small office. All of the calls were triggered via OSC from the story engine. The caller ID was 00000000. Megan could chat with us in the control room via an iPad running a text chat. Calls could also be attached to channels in Reaper (the sound mixer) so that any of the live microphones on the set and pre-recorded material could be piped down the phone line. Here, I used the PJSUA Python bindings to build a small SIP client running on the audio machine. PJSUA allowed the call to be programmatically connected to the mixer based on caller ID or other call data using Jack (jackd sat behind the mixer as a router to handle audio IO). Connecting to the 1940s telephones on the set required defining a custom tone set because the key exchange did not use standard dial and ringback tones.
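For flavor, here's a stripped-down sketch of what a PJSUA-based client like that looks like, using the standard pjsua Python API. The server, account, and mask URIs are placeholders, and the real client routed the call's audio on into Reaper via jackd (keyed off caller ID) rather than to the local sound device as shown here.

```python
# Minimal PJSUA sketch: register against the PBX and place a call.
import pjsua as pj

class CallCb(pj.CallCallback):
    def on_media_state(self):
        # Once the call has active media, bridge its audio with the sound device.
        # In the show, this slot was instead routed onward through jackd into Reaper.
        if self.call.info().media_state == pj.MediaState.ACTIVE:
            slot = self.call.info().conf_slot
            pj.Lib.instance().conf_connect(slot, 0)
            pj.Lib.instance().conf_connect(0, slot)

lib = pj.Lib()
lib.init()
lib.create_transport(pj.TransportType.UDP)
lib.start()

# Placeholder Asterisk host and credentials.
acc = lib.create_account(pj.AccountConfig("asterisk.example.local", "show", "secret"))
call = acc.make_call("sip:mask-07@asterisk.example.local", cb=CallCb())
```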

Pre-recorded sound for the system was recorded binaurally with a pair of Soundman OKM mics and mixed with existing show audio. We had a lot of fun with those mics (everyone should go buy a pair!). Antares AVOX was used to take a couple of years off Megan's voice to make her sound like a 7-year-old. She had a super-cardioid Audix VX5 to reject as much background noise as possible. The tight pickup pattern allowed us to use a Waves L2 limiter (smashed to almost -30dB) so she could speak at multiple volumes, from a whisper to a yell, while cutting the dynamic range to something usable for the bone transducers and phone lines.

Networking: Pipes and Tubes

The network was pretty involved as well. In the building, we had Cisco 3600 and 3700 gigabit switches linked together with LACP to allow solid GigE performance all over the building. Luke and I helped to spec and install 8000 feet of Cat6 to put in drops for network cameras and access points. Cisco provided us with eight of their 4500 series IP cameras and some Tamron lenses. The 4500s stream really awesome-looking 1080p H264 and MJPEG to our video rendering Macs. A Kineme plugin (NetworkTools) allowed us to pull the MJPEG streams off the cameras at very low latency into Quartz Composer for processing.
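We did the MJPEG pull inside Quartz Composer via Kineme NetworkTools, but the same kind of low-latency grab can be sketched in Python with OpenCV if you don't have QC handy. The camera URL below is a placeholder, not the real 4500-series endpoint.

```python
# Illustrative MJPEG pull from an IP camera using OpenCV.
import cv2

cap = cv2.VideoCapture("http://10.0.2.31/mjpeg")  # placeholder MJPEG endpoint

while True:
    ok, frame = cap.read()          # grab the next JPEG frame from the stream
    if not ok:
        break
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:        # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```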

To track the live participants in the space, Cisco provided us with an Aironet 3600 system of 24 access points and a 2500 controller. This is a seriously cool system that can detect and jam rogue networks. Being in NYC, we had about 120 networks in range of the building, and we also had to compete with several networks inside the building used by the cast and crew. The Aironet system provided real-time monitoring of "air quality" so we could maintain links in the building easily and troubleshoot connectivity issues.

Each participant carried an Android device (a Droid X) with custom software (a mash-up of MKermaniWifiScan and Akito's secret sauce) to maintain a relatively stable Wi-Fi connection while wandering around the building. We found a lot of problems with the Android wireless stack and had to build workarounds to get it to roam acceptably. We tried several approaches, but in the end used one where the device continuously scanned and, if it found a BSSID with the correct SSID and at least 10dB greater signal strength, manually reconnected to that network. This allowed the device to reconnect pretty reliably even when moving quickly. The Droid X has some firmware issues that require powering down the Wi-Fi chip to reset it, so even with the scanning we had to reboot occasionally.
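For the curious, the roaming heuristic boils down to something like this, written here as logic-only Python rather than the actual Android code (the real app drove the platform Wi-Fi APIs directly; the SSID is a placeholder, the 10dB threshold is the one we used).

```python
# Logic-only sketch of the roaming heuristic: hop to a stronger BSSID on our SSID
# only when it beats the current connection by at least the threshold.
CORRECT_SSID = "snm-show"     # placeholder SSID
THRESHOLD_DB = 10

def pick_bssid(current_bssid, current_rssi, scan_results):
    """scan_results: list of (ssid, bssid, rssi). Return a BSSID to hop to, or None."""
    best = None
    for ssid, bssid, rssi in scan_results:
        if ssid != CORRECT_SSID or bssid == current_bssid:
            continue
        if rssi >= current_rssi + THRESHOLD_DB and (best is None or rssi > best[1]):
            best = (bssid, rssi)
    return best[0] if best else None
```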

Out of the building, we had two TWC business broadband connections at 50Mbps down and 5Mbps up. A Cisco 2951 router managed load balancing between the two connections with CEF (Cisco Express Forwarding). A VPN allowed us to work easily from the lab, and static NAT meant that all the services (streaming, telephone, and story engine systems) were accessible from the outside. We had two static IP addresses, one for each connection. Inbound connections were balanced in the client using JS to pick an appropriate IP for WebSockets, H264 streaming, etc. All services were hosted on both IPs.
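The client-side balancing logic itself was tiny. The real version lived in the browser as JS, but it amounted to roughly the following, shown here as a Python sketch with placeholder IPs: pick one of the two static IPs at random so sessions split across the links, and fall back to the other if it isn't reachable.

```python
# Sketch of picking one of two inbound endpoints, with a simple reachability fallback.
import random
import socket

ENDPOINTS = ["203.0.113.10", "203.0.113.11"]   # example static IPs, not the real ones

def choose_endpoint(port=1935, timeout=2.0):
    ips = random.sample(ENDPOINTS, k=len(ENDPOINTS))   # random primary, other as backup
    for ip in ips:
        try:
            socket.create_connection((ip, port), timeout=timeout).close()
            return ip
        except OSError:
            continue
    raise RuntimeError("no endpoint reachable")
```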

Here are some photos of the project!
