How can five actors perform up to 17 roles in two languages when three of the actors are deaf? This problem confounded Cleveland Signstage Theatre as it prepared to mount tours of A Winnie-The-Pooh Birthday Tail and The Hobbit. All of Signstage's productions are presented simultaneously in spoken English and American Sign Language (ASL). One option--employing two actors for each role--was precluded by the economics of the children's theatre touring market.
A tour the previous year demonstrated the problems of having hearing performers provide the entire spoken-English portion of the performance. Even with individual wireless microphones, the audio portion of the program was not satisfactory. Hearing actors were asked to create one or more characters physically and perform them in ASL while also providing distinct vocal characterizations for roles performed by the deaf company members. Coordinating with the deaf performers was difficult when the hearing actor was off-stage, out of sight.
Cleveland Signstage Theatre has also made a commitment to avoid, where possible, the use of simcom. Simcom (simultaneous communication) requires a hearing individual to speak in English and sign in ASL at the same time. Unfortunately, it is virtually impossible to simcom and treat each of the languages with respect. American Sign Language is not signed English; its structure and syntax are different. For this reason, one cannot sign clear ASL and speak fluent English simultaneously.
The desire to avoid simcom also led to situations in which a hearing actor signing one role would be voicing for another character onstage. This was a source of confusion for the audience.
The solution was to create the audio portion of the production as if it were a radio drama. The actors went into the sound studio and the play was recorded. The studio recording was mixed down to a single channel and dubbed to a Sony MD4X four-track machine.
The problem that remained was how to coordinate the audio portion of the presentation with the communication of the script in ASL. Obviously, the company members who were deaf could not coordinate their interpretations with the audio track. When a scene included both deaf and hearing actors, the hearing actors could, to a certain extent, cue the deaf actors and help synchronize the action with the soundtrack. However, this solution was inadequate for two reasons.
First, it did not solve the problem of scenes in which only deaf actors were performing. Second, it would have made the deaf actors dependent upon the hearing actors for the show's pacing. Not only would this be contrary to Signstage principles, it would deny the deaf actors the opportunity to express themselves fully as theatre artists.
The challenge was to find a system that would allow moment-to-moment coordination of the recorded sound of the play with the spontaneous creation of the characters onstage. In addition, we wanted to be able to add special effects and underscoring. All of this had to be accomplished without adding staff, since the goal was to keep the company small (a total of six individuals--five performers and one stage manager/technician).
We considered the possibility of using MiniDisc machines; however, the four-track machines had unacceptably long lag times. Two-track machines had quicker response times, but we would have needed multiple machines to accommodate dialogue, underscoring, and special effects. The ability of one person to manipulate controls on multiple machines seemed unlikely to assure a consistent product, so we continued to look for another approach.
Carlton Guc of Cleveland-based Stage Research was presented with the problem. Stage Research is the company responsible for SFX, the sound playback software designed for live performance. Guc felt that the new SFX 5.0 software, running on a Pentium-based computer, could handle the assignment; he assembled a machine and delivered it in less than a week. The machine included a sound card, a CD-ROM drive, and a 1GB Jaz drive. (The Jaz drive allowed us to back up each of the shows on a separate disk.)
The first step was to transfer the studio recordings from the Sony MD4X to the hard drive of the computer. Cool Edit Pro worked beautifully. At first, several lines of dialogue were kept together, and each cluster of lines was saved as a separate .wav file. As we went through multiple iterations with the actors in rehearsal, the general trend was to divide the lines into smaller and smaller segments. This gave greater control to the stage manager and more freedom to the actors. The next step was to convert the special effects and the underscoring into .wav files.
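The slicing step can be sketched in Python using only the standard-library wave module. This is a minimal illustration, not the Cool Edit Pro workflow the company actually used; the cut points are assumptions standing in for the breaks between lines of dialogue.

```python
import io
import wave

def split_wav(data: bytes, cut_points_s):
    """Split a WAV file (given as bytes) at the given cut points (in seconds).

    Returns one WAV byte blob per segment. In the production, each cut
    fell between lines of dialogue; the timings here are hypothetical.
    """
    with wave.open(io.BytesIO(data), "rb") as src:
        params = src.getparams()
        rate = src.getframerate()
        total = src.getnframes()
        frames = src.readframes(total)
    bytes_per_frame = params.sampwidth * params.nchannels
    # Frame boundaries: start of file, each cut point, end of file.
    bounds = [0] + [int(t * rate) for t in cut_points_s] + [total]
    clips = []
    for start, end in zip(bounds, bounds[1:]):
        buf = io.BytesIO()
        with wave.open(buf, "wb") as dst:
            dst.setparams(params)  # header is corrected on close
            dst.writeframes(frames[start * bytes_per_frame:end * bytes_per_frame])
        clips.append(buf.getvalue())
    return clips
```

Re-splitting is cheap, which mirrors the rehearsal experience: segments could keep getting smaller as the staging demanded finer control.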
Once all of the lines, effects, and underscoring were set up as .wav files, the focus shifted to setting up the SFX show cues. Separate cue lists were set up for dialogue, music, underscoring, and special effects. One of the big advantages of the SFX system is that the cue name can be highly descriptive. In our case, it often was the entire line of dialogue.
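A minimal sketch of the cue-list idea, with the full line of dialogue serving as the cue name. The fields, categories, and file names below are hypothetical illustrations, not the SFX data model.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    name: str              # descriptive name -- often the entire line of dialogue
    wav_file: str          # file played when the cue fires (hypothetical names)
    fade_in_s: float = 0.0
    fade_out_s: float = 0.0

# Separate cue lists per category, as the production set them up.
cue_lists: dict = {
    "dialogue": [
        Cue("POOH: Is anybody at home?", "pooh_line_01.wav"),
        Cue("RABBIT: No! Nobody!", "rabbit_line_01.wav"),
    ],
    "underscoring": [
        Cue("Forest theme (verse)", "forest_verse.wav", fade_in_s=2.0),
    ],
    "effects": [
        Cue("Door knock", "knock.wav"),
    ],
}
```

Because the cue name carries the whole line, the stage manager can follow the script directly from the cue list rather than cross-referencing cue numbers against a prompt book.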
One of the best features of the SFX system was its ability to overlap cues. In crowd scenes, several lines of dialogue could be started one after another or simultaneously. Underscoring was a dream because of the flexibility provided by the software. It allowed us to set up a single cue that would trigger the playing of multiple files. For example, a single trigger cue could start fading out one underscoring file, start fading up a second underscoring file, and play a special effect at the same time.
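The trigger-cue behavior described above can be modeled roughly as one cue firing several actions at once. The linear crossfade math and the file names below are assumptions for illustration, not SFX internals.

```python
def crossfade_gains(t: float, fade_s: float):
    """Gain pair (outgoing, incoming) at time t of a linear crossfade
    lasting fade_s seconds. Assumes fade_s > 0."""
    x = min(max(t / fade_s, 0.0), 1.0)
    return (1.0 - x, x)

# One trigger cue launches three actions simultaneously, mirroring the
# example in the text: fade out one underscore, fade up another, and
# play an effect. Names and timings are hypothetical.
trigger_cue = [
    ("fade_out", "underscore_a.wav", 3.0),
    ("fade_in",  "underscore_b.wav", 3.0),
    ("play",     "thunder.wav",      0.0),
]
```

Midway through the three-second fade, the outgoing and incoming tracks would each sit at half gain, so the handoff is seamless under the action onstage.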
Once the cueing was completed, the SFX system would be set in "show mode". This allowed the stage manager to use the mouse to click on a "go" button for each of the lines. If an actor took a pause or altered physical business, the stage manager would adjust the cueing accordingly. Over time, it was found that segments of the play were being consistently performed in a set rhythm. Links within the cue list accommodated these tightly cued segments so that two, three, four, or more lines could follow one another automatically, providing some relief for the stage manager.
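The auto-follow links can be sketched as a simple chain: one "go" fires a cue plus every cue linked to follow it. The cue names and link flags here are hypothetical.

```python
def fire(cue_list, start_index: int):
    """Return the names played by a single 'go': the cue at start_index
    plus any unbroken chain of cues flagged to auto-follow it."""
    played = []
    i = start_index
    while i is not None and i < len(cue_list):
        name, auto_follow = cue_list[i]
        played.append(name)
        i = i + 1 if auto_follow else None
    return played

# (name, auto_follow) pairs: the first three lines were found to run
# in a set rhythm, so they are linked; the fourth stands alone.
cues = [
    ("Line 1", True),
    ("Line 2", True),
    ("Line 3", False),
    ("Line 4", False),
]
```

A single click thus covers the tightly timed exchange, which is exactly the relief for the stage manager that the linked cues provided.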
Another helpful feature of the SFX system was the option of setting up "hot keys." Any alphanumeric key on the keyboard can be programmed to execute one of several functions. We found that setting up hot keys for "go," "pause," and "play" worked better for us than using the mouse.
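The hot-key idea amounts to a dispatch table mapping keys to transport functions. The key choices and the toy transport state below are assumptions, not SFX's actual bindings.

```python
# Toy transport state so the handlers have something observable to do.
transport = {"state": "stopped", "log": []}

def go():
    transport["log"].append("go")   # fire the next cue

def pause():
    transport["state"] = "paused"

def play():
    transport["state"] = "playing"

# Hypothetical bindings -- any alphanumeric key could be assigned.
hot_keys = {"g": go, "p": pause, "l": play}

def handle_key(key: str):
    action = hot_keys.get(key)
    if action:
        action()
```

A single keystroke is faster and more repeatable under show pressure than aiming a mouse at an on-screen button, which matches the company's experience.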
The process had its problems. One difficulty was that while the audio portion of the production was frozen in time, the performance itself evolved. We experimented with different solutions; in one case, we delayed the recording session until the show was well into the rehearsal process, which reduced the number of necessary changes. A second strategy was to record several different line readings by the voicing actor. Once the readings were converted into .wav files, it was easy to substitute one reading for another. In some cases, we simply had to go back into the studio and rerecord some segments.
A second phase of the process that was never implemented involved using the SFX system to trigger a series of cue lights. One challenge in working with an integrated company of deaf and hearing actors is the difficulty of cueing. In a company of hearing actors, a director wouldn't think twice about having an actor speak to another actor's back. However, if the actor whose back is turned is deaf, he doesn't know what was said to him or when the other actor has finished speaking.
The concept was to set up three arrays of cue lights across the front of the stage. Each array would have eight lights, each a different color, each controlled by the SFX system. Each actor would be assigned a color, and his light would stay on while the audio track for his character was playing. In theory, this would give the deaf actors more information about the progress of the scene and a greater ability to contribute to the process of synchronizing the production. Additionally, the lights could be flashed rhythmically to communicate the tempo of musical numbers to the deaf actors.
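The planned cue-light logic reduces to pure scheduling: which colors are lit at a given moment, and when a light should flash to convey a song's tempo. All timings, colors, and tempos below are invented for illustration.

```python
def lights_on(schedule, t: float):
    """Colors lit at time t. schedule is a list of (color, start_s, end_s)
    intervals derived from when each character's audio track plays."""
    return {color for color, start, end in schedule if start <= t < end}

def flash_times(bpm: float, beats: int):
    """Flash onsets (in seconds) for signaling a song's tempo on a cue light."""
    return [round(i * 60.0 / bpm, 3) for i in range(beats)]

# Hypothetical scene: red's line runs 0-4s, blue's overlapping line 2-6s.
schedule = [("red", 0.0, 4.0), ("blue", 2.0, 6.0)]
```

Driving the lights from the same cue data as the audio is what would let one operator keep the deaf actors, the hearing actors, and the soundtrack in step.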
Time constraints prevented us from implementing the cue light system on the tour. The programming of the lights was time-consuming, and we experienced difficulty in coordinating the lights with the rhythm of the songs. We were convinced, however, that the system would support the concept, and we hope to experiment with it again in the future. It is important that such experiments continue, because this technology opens new possibilities for economically incorporating deaf actors into productions.
Computer reliability was also a concern. A loose power connection to a hard drive caused intermittent problems, and the system required some tweaking to maintain optimal sound quality. As a backup, the company traveled with a Sony MD machine with a recording of the show made in performance.
This solution will not be appropriate for every play in which deaf and hearing actors work together. Additional time must be built into the rehearsal process to allow the best possible match between the audio and the physical interpretations. The flexibility and power of the SFX software, however, open new possibilities for incorporating individuals who are deaf into roles that would otherwise not be available to them. Additionally, since all the information required to run the sound system appears on the VDT, the playback system could actually be operated by a deaf individual. The potential for using the SFX system to open opportunities for individuals with disabilities is exciting.