Achieving stereo reproduction in the theatre has always been a chancy proposition. The traditional approach usually involves creating the illusion of stereo by assigning multiple pan-potted sources, such as spreading the orchestral pit microphones across the left, right, and (sometimes) center loudspeakers in a typical proscenium show. Even though each orchestral mic is a mono source, through the miracle of the pan-pot, a stereo effect can be created that sounds large and wide to the audience. Simply assign strings to the left side, horns mostly center, and percussion to the right, and the result is “stereo,” much as pan controls are used across multiple tracks played back and mixed in a pop recording session.
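The pan-pot trick comes down to a simple gain law. As a rough sketch only (the exact taper varies from console to console; this is not any particular desk's implementation), a constant-power pan-pot might look like this in Python:

```python
import math

def pan_pot(signal, pan):
    """Split a mono sample into left/right outputs using a constant-power
    (sin/cos) pan law; pan runs from -1.0 (hard left) to +1.0 (hard right)."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left_gain = math.cos(angle)
    right_gain = math.sin(angle)
    return signal * left_gain, signal * right_gain

# A center-panned source gets equal gain in both channels (about -3 dB each),
# so its summed acoustic power matches a hard-panned source.
left, right = pan_pot(1.0, 0.0)
```

With this law, a source stays equally loud as it moves across the image, which is why it is the common choice for spreading mono pit mics into a stereo picture.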
Sound effects can be treated likewise, or, by using a separate matrix mixer (or the matrix section on the house console), routed to speakers placed anywhere within the auditorium, whether to rear speakers (used as a cinema-style surround pair) or to discrete speakers for playback of specific sounds, placed overhead, under seats, or hidden within the set itself. Perhaps the most extensive use of this in recent times is by Cirque du Soleil, which sometimes employs dozens of individual speakers at various locations within the house. Adding SMPTE timecode or MIDI-based automation to the system, or simply manually triggering snapshot console presets in time with the onstage action, can greatly simplify the process for the mix engineer. Used individually or together, all of these techniques bring effects playback to a new level of realism for theatre audiences.
However, until recently, the main obstacle to truer stereo reproduction in the theatre has been the actors themselves. In earlier (pre-wireless) times, actors were expected to project vocally, and the short differences in arrival time at a listener's left and right ears provided both localization cues and a natural stereo effect when actors spoke from different parts of the stage.
With larger venues, shows performed at higher sound pressure levels, and the availability of good-sounding, dependable wireless rigs, PA systems and radio mics soon became essential in theatre. This rise of wireless use brought vocal tracks into the mix mostly as center-panned, essentially monaural, elements. Beyond the loss of localization from mono playback, the listener hears two distinct arrivals: the actor's voice itself and the reinforced sound from the speaker. Any delay difference between the two causes a smearing effect that reduces vocal intelligibility. Without proper localization, the PA becomes more of a distraction than an enhancement.
How It Came To Be
Some solutions emerged, such as the TiMax Audio Imaging Matrix processor and ShowControl software. On the market for several years, the system comprises an automated Audio Imaging Matrix hardware unit (available as either the modular TiMax Rack with 8×8 to 32×32 outputs or the TiMax ImageMaker in fixed 8×8 or 8×16 configurations) and TiMax SoundTablet sound effects editing/playback software. The combination of the two lets sound designers create several arrival delay-offset playback zones onstage where effects can be routed and panned using the software's waveform-based drag-and-drop pan-assignment facility.
In addition to sound effects, the TiMax system can also be applied to actors, helping audiences localize on the performer rather than on the PA. The process was effective but tedious, requiring the lengthy and complex blocking of actors' stage movements during rehearsals, translating those movements into positional delay data, and manually recalling the resulting presets on a cue-by-cue basis. And as a show progressed, any blocking changes required reprogramming, while stepping through multiple “imaging cues” put yet another burden on a busy mix engineer.
What It Does
Last year, TiMax unveiled an innovative solution to the problem. First shown in the US at LDI 2006, the Track the Actors (TTA) system automates the process by following each actor's movement dynamically and in real time. A foot-square Cordis Radio Eye antenna array mounted in the grid above the stage receives signals transmitted by small radio tags worn by each actor. These signals are fed via Ethernet to the TTA software, which derives each actor's position and displays the performers as lifelike avatars that move onscreen in sync with their real-world counterparts. The movement data is sent as MIDI messages to the ShowControl software, which outputs level/delay instructions to the TiMax delay matrix, placing each actor's audio into the appropriate stage-localization zone. Everything occurs automatically, in real time, without operator intervention during the show.
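To make the level/delay idea concrete, here is a minimal sketch of how tracked position data could be turned into per-speaker delay offsets. The speaker layout, the 15 ms offset, and the function itself are illustrative assumptions for this article, not TiMax's actual (proprietary) processing:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate at room temperature

# Hypothetical speaker positions (x, y) in metres; not TiMax's data model.
SPEAKERS = {"left": (-6.0, 0.0), "center": (0.0, 0.0), "right": (6.0, 0.0)}

def delay_matrix_row(actor_pos):
    """For one tracked actor, compute a per-speaker delay in milliseconds so
    that each speaker's output arrives just after the direct sound from the
    stage, preserving precedence-effect localization on the performer."""
    distances = {name: math.dist(actor_pos, pos) for name, pos in SPEAKERS.items()}
    nearest = min(distances.values())
    # Delay each speaker relative to the nearest one, plus a small fixed
    # offset (assumed here) so the live voice always wins the first arrival.
    return {name: (d - nearest) / SPEED_OF_SOUND * 1000.0 + 15.0
            for name, d in distances.items()}

row = delay_matrix_row((0.0, 0.0))  # actor at center stage
```

In a running system, a row like this would be recomputed continuously as the tag positions update, which is exactly the bookkeeping that TTA takes off the operator's hands.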
Used on a recent production of Madame Butterfly at London's Royal Albert Hall, TTA was a time-saver, says Robin Whittaker, director of UK-based Out Board, which manages worldwide distribution of the TTA product. “We only had four or five rehearsals in the RAH, and it can be very difficult to know what you are listening to. For example, if an actor is not where they were blocked, or Sharry [sound engineer Richard Sharat] misses a couple of movement cues because he's focused on getting the orchestra mix right or EQing a mic, then the delay scenario from that actor's mic could be way off.”
The TTA system let the show's sound designer, Bobby Aitken, and mixer Sharat achieve subtlety in the sound reinforcement by shifting the audio focus to the performers themselves. Because listeners hear the actors' voices some 10 to 20 milliseconds before the sound from the speakers, the brain integrates the two arrivals into one, and the listener is said to instinctively localize on the first arrival, from the performer.
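That 10-to-20 ms window translates into a fixed amount of extra acoustic path length. A quick back-of-the-envelope conversion, assuming sound travels roughly 343 m/s at room temperature:

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate at room temperature

def delay_to_path_difference(delay_ms):
    """Extra path length (in metres) corresponding to an arrival-time offset."""
    return SPEED_OF_SOUND * delay_ms / 1000.0

# 10 ms of delay is about 3.4 m of extra speaker path; 20 ms is about 6.9 m.
```

In other words, the system only needs the speaker arrival to trail the live voice by the equivalent of a few metres of travel for the ear to keep locking onto the performer.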
What End-Users Have To Say
Having used TTA on other shows in the past, including productions of Showboat and La Bohème, Aitken is impressed with the system. “Using the tracking technology with TiMax allows us to spend more time on the creative aspects of the production as we are not so burdened with functional setup tasks.”
George Petersen is the executive editor of Mix magazine.