Video and lighting systems are becoming more tightly integrated. Convergence is starting to come to pass, yet control and output systems are becoming increasingly complicated. Today you need one system to visualize content, then a media server and video system to output that content to various devices, and then a show control system layered on top to synchronize everything and make all of these devices work together. d3 from UnitedVisualArtists, Ltd. (UVA) is a 3D visual show production suite that replaces much of that chain.

The d3 system is a proprietary package of software and hardware available for rental or purchase from XL Video. With d3, you can use the same hardware and software system to play back content to video, LED, and lighting devices. You can map content to a wide variety of display devices, even ones of differing pixel density and type, all at the same time.

What It Does

“When you design with d3, you place LED output devices, lighting fixtures, and screens into a 3D environment. You can import CAD drawings of your stage, booth, or building and turn the 2D plan into a 3D environment,” says Ash Nehru, software director with UVA. “You can use d3 to preview your design in realtime with pixel-accurate representation and play content through the output devices and lighting fixtures. d3 handles all output needs, driving all displays and lights, without rendering. You can layer content — audio, video, and bitmaps — onto a beat-based timeline and then connect your output devices and play back your show.” Nehru adds that users have a host of other options, including the ability to mix output devices of differing pixel density and type; design for any arbitrary shape or placement of screen or device, including curves and 3D shapes; make rapid changes to content and configuration without rendering; pre-visualize from any camera position; output four full HD feeds and DMX from one unit; slave additional boxes as needed; and sync to SMPTE, MIDI, DMX, or any other input. It's also fully expandable and customizable.

Having one system reduces playback equipment complexity, since one unit sequences and drives everything. “What the software actually does is integrate a number of stages in project creation that generally deal with separate effects software packages and separate control devices,” says Nehru. “It integrates a visualizer, and there is also a show control aspect to it with a beat-based timeline control that plays MIDI timecode.”
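The beat-based timeline described above ties cue positions to musical time while syncing playback to timecode. A minimal sketch of that bookkeeping, with illustrative function names that are not part of d3, might convert a beat position at a given tempo into an SMPTE-style timecode string:

```python
# Hypothetical sketch: positioning a cue on a beat-based timeline and
# expressing it as an SMPTE-style timecode string for sync purposes.
# Function names, frame rate, and tempo are illustrative assumptions.

def beats_to_seconds(beat, bpm):
    """Position of a beat (0-indexed) in seconds at a constant tempo."""
    return beat * 60.0 / bpm

def seconds_to_timecode(t, fps=25):
    """Format a time in seconds as an HH:MM:SS:FF timecode string."""
    total_frames = int(round(t * fps))
    frames = total_frames % fps
    secs = (total_frames // fps) % 60
    mins = (total_frames // (fps * 60)) % 60
    hours = total_frames // (fps * 3600)
    return f"{hours:02d}:{mins:02d}:{secs:02d}:{frames:02d}"

# Cue a video layer to start on beat 16 of a 120 BPM track:
cue_time = beats_to_seconds(16, 120)     # 8.0 seconds
print(seconds_to_timecode(cue_time))     # 00:00:08:00
```

A variable-tempo timeline would replace the constant-BPM conversion with a tempo map, but the beat-to-timecode translation is the same idea.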

The system also includes integrated video playback. Users can place video clips directly onto the timeline, “rather than using a show control system with a media server and a dedicated video playback device and then having to work to visualize the stage,” adds Nehru.

In fact, the workflow couldn't be much easier, according to Nehru. “It is as simple as copying a video file into a folder, as long as it is in the right format,” he says. “We support QuickTime .MOV files. There is a proprietary hardware-based codec that we have developed to get files into the .MOV format — it's called DXV — which is designed for very high resolution content, so you can go up to resolutions of 4000×4000. It has a reasonably high data rate, slightly higher than H.264. It has a couple of useful properties, such as support for proper black, so you don't get the black-level problems you see elsewhere, and it also allows you to have alpha channels, something that you cannot do with H.264.”
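Since playback is as simple as dropping a correctly formatted file into a folder, a pre-flight check before copying is a natural habit. This sketch validates a clip against the constraints quoted above (a QuickTime .MOV container, resolutions up to 4000×4000); the function name and return shape are illustrative, not part of any d3 API:

```python
# Hypothetical pre-flight check before copying a clip into the media
# folder, based on the constraints quoted in the article: a QuickTime
# .mov container and a maximum resolution of 4000x4000.

import os

MAX_DIMENSION = 4000  # per-axis resolution ceiling quoted for the codec

def clip_is_usable(path, width, height):
    """Return (ok, reason); resolution is assumed known from the encoder."""
    if os.path.splitext(path)[1].lower() != ".mov":
        return False, "not a QuickTime .mov container"
    if width > MAX_DIMENSION or height > MAX_DIMENSION:
        return False, f"{width}x{height} exceeds {MAX_DIMENSION}x{MAX_DIMENSION}"
    return True, "ok"

print(clip_is_usable("intro_loop.mov", 1920, 1080))  # (True, 'ok')
print(clip_is_usable("intro_loop.avi", 1920, 1080))  # rejected: wrong container
```

In practice the width and height would be read from the file's metadata rather than passed in by hand.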

The d3 system can also output DMX to control automated lighting in addition to outputting video. “We allow you to access DVI and DMX fixtures,” says Nehru. “We can put content onto multiple different groups of fixtures and use them as one canvas without any outboard console. Our system allows you to control large numbers of moving head lights using a targeting system. So you can target them onto a performer or create interesting patterns with very little effort.”
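The targeting idea reduces to geometry: given a fixture's position and a target point in the same 3D space, derive the pan and tilt angles that aim the beam. The sketch below shows that calculation under assumed coordinate conventions (Z up, pan measured in the XY plane); it is an illustration of the general technique, not UVA's implementation, and real fixtures would add per-unit calibration offsets and DMX value scaling:

```python
# A minimal sketch of moving-head targeting: derive pan/tilt angles that
# aim a fixture at a 3D point. Coordinate conventions are assumptions
# (Z up, pan in the XY plane, tilt relative to horizontal).

import math

def aim_fixture(fixture_pos, target_pos):
    """Return (pan_deg, tilt_deg) pointing a fixture at a target point."""
    dx = target_pos[0] - fixture_pos[0]
    dy = target_pos[1] - fixture_pos[1]
    dz = target_pos[2] - fixture_pos[2]
    pan = math.degrees(math.atan2(dy, dx))       # rotation in the XY plane
    horiz = math.hypot(dx, dy)                   # horizontal distance
    tilt = math.degrees(math.atan2(dz, horiz))   # negative = aiming down
    return pan, tilt

# A fixture rigged at (0, 0, 8 m) aimed at a performer's mark at (4, 0, 1.7 m):
pan, tilt = aim_fixture((0.0, 0.0, 8.0), (4.0, 0.0, 1.7))
print(round(pan, 1), round(tilt, 1))  # 0.0 -57.6
```

Running the same calculation for every fixture in a group against a moving target point is what lets many heads track one performer with no per-light programming.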

The d3 system is both dedicated hardware and software. “It is fundamentally a PC, but it is a PC that has been custom-built to our specifications,” says Nehru. “It is Windows-based, which we have been using for quite a while now and is very well understood. It has proven very stable. We have run a number of projects 24/7 for two to three months and have had no failures, and we did the whole U2 Vertigo tour — over 130 shows — without any failures.”

How It Came To Be

UVA got its start as a group of content creators in 2002. One of its first projects was working with Massive Attack on its 100th Window world tour. “When we finished our work, the band asked us to stay on with the tour since there would be changes with each show,” says Nehru. “The idea was that it was very heavily text-based and linked into the band members' computers. To do a show like that would be impossible with video-based techniques, because you would have to re-render the whole show every day.” So Nehru created a realtime rendering system that could do it all from effects files, tied to an integrated show control system. After that success, Nehru adds, “We continued to develop our own software that we could use with all of our large projects.”

In 2005, Willie Williams approached UVA to work on the U2 Vertigo tour. “It was very sculptural,” Nehru says. “There were no large, square LED screens. It was all very abstract, with hanging Barco MiSphere curtains in different positions. We found that we were having problems making artistic decisions because we couldn't see what the content would look like, so we worked with various existing programs to see how the content would look in 3D and realized that we needed to write something that rendered in realtime to show us how the content was coming across. We took a chance to create this system to visualize the content, and it just developed and developed until it ended up running the show. We found it to be easier that way.”

What's Next

UVA is responding to client requests for new features, as well as additions for the company's own projects. Moving forward, the focus is on usability and workflow. On many media servers, working with content saps a lot of time, which Nehru would like to streamline. “We are putting in a content management pipeline to allow you to work more efficiently,” he says. “If you think about how content is put on systems at the moment, there are a number of steps involved, and you are dealing with really, really large files. Very often, a substantial number of hours are spent, with no shortcuts, loading content onto current systems. We want to reduce that procedure so that the copying, transporting, and distribution operations happen in one step. It will also allow you to update sections of media, rather than the whole media file. We will be further developing controls for lighting, as well. We are looking at ways to really tightly integrate lighting and video into the same show.”
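Updating sections of a media file rather than the whole thing is typically done by comparing per-block digests and transferring only the blocks that changed. The sketch below illustrates that general technique; it is not a description of UVA's pipeline, and the block size and function names are assumptions:

```python
# Illustrative sketch of partial media updates via per-block hashing:
# only blocks whose digest differs need to be re-copied to the server.
# This shows the general technique, not UVA's actual pipeline.

import hashlib

BLOCK_SIZE = 1 << 20  # 1 MiB blocks; an arbitrary illustrative choice

def block_digests(data, block_size=BLOCK_SIZE):
    """SHA-256 digest of each fixed-size block of the file's bytes."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(old_data, new_data, block_size=BLOCK_SIZE):
    """Indices of blocks that differ and must be re-transferred."""
    old = block_digests(old_data, block_size)
    new = block_digests(new_data, block_size)
    return [i for i, digest in enumerate(new)
            if i >= len(old) or old[i] != digest]

# Editing one section of a three-block file flags only that block:
old = b"aaaabbbbcccc"
new = b"aaaaZZZZcccc"
print(changed_blocks(old, new, block_size=4))  # [1]
```

For a multi-gigabyte clip where an editor touched a few seconds in the middle, this kind of comparison turns a full re-copy into a transfer of just the affected blocks.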

What End-Users Have To Say

“As we were developing a new exhibit property for Honda USA, we decided to research new ways to deliver media and integrate it with lighting, sound, and interactive applications without limitations,” comments Julien Le Bas, creative director with George P. Johnson (www.gpj.com). “d3 provides the right balance between a robust and proven playback system and a highly flexible, modular, and scalable system. d3's most interesting feature is its ability to open new creative horizons without compromising stability and quality. The intuitive interface, the scalability of the software, and the creativity of UVA have provided a solid frame, which we can use to create new media combinations and experiences, thus delivering a higher value to our customers while having fun and expanding our brains.” Le Bas has a few features that he would like to see added to the d3, such as “integration of I-Mag and possibilities to remotely control the system from small wireless devices or to pause the media at any frame,” he says. “I see in d3 and similar tools the next step in realtime media management for reactive and interactive, temporary or permanent, architectural spaces.”

Freelance programmer Stefaan (“Smasher”) Desmedt started using d3 while on tour for two years with Vertigo. “Recently, I used it for Honda at the car shows in LA and Detroit, and I am currently using it for the new television show Million Dollar Password,” says Desmedt. He especially likes the visualizer, “because you can preprogram your whole show and see exactly what it will look like, in pixel-perfect detail,” he says. “The other nice feature is that, if you have multiple screens for outputs — say, 10 different screens, and they all have different resolutions — you can throw any clip on any screen. Usually with current media servers, you have one feed, and they pixel-map it in post, and you have to render everything in one piece. If there is anything wrong on one of the 10 screens, you have to re-render the whole thing. With d3, you just need to re-render one clip.”

Desmedt calls attention to one particular feature that he has used quite a lot: the ability to project any single video clip onto the whole set, creating a single image that uses all the screens. “You can take one video clip and splash it over the whole scene without doing any tricks,” he says. “That's what we are doing here on Million Dollar Password…if you had to do that in post, it would take you days. It's very easy to program the timeline; it's very, very easy and very flexible to work with once you understand it. d3 allows you to create really, really complex shows.”
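The underlying mechanism for splashing one clip across many screens is straightforward: treat all the screens as rectangles on one shared canvas, and feed each screen the matching crop of the source frame. The sketch below illustrates that mapping with made-up geometry; it is the general pixel-mapping idea, not d3's internals:

```python
# Illustrative sketch of spanning one source frame across several screens:
# each screen occupies a rectangle on a shared canvas, and its feed is the
# matching crop of the source. Geometry and names are made up.

def crop_for_screen(screen, canvas_w, canvas_h, src_w, src_h):
    """Map a screen's canvas rectangle to pixel coordinates in the source.

    `screen` is (x, y, w, h) in canvas units; returns (x0, y0, x1, y1)
    in source pixels, so each display shows only its slice of one image.
    """
    x, y, w, h = screen
    sx, sy = src_w / canvas_w, src_h / canvas_h
    return (int(x * sx), int(y * sy), int((x + w) * sx), int((y + h) * sy))

# Two side-by-side screens covering a 2:1 canvas, fed from one 1920x960 clip:
left = crop_for_screen((0, 0, 1, 1), 2, 1, 1920, 960)    # (0, 0, 960, 960)
right = crop_for_screen((1, 0, 1, 1), 2, 1, 1920, 960)   # (960, 0, 1920, 960)
print(left, right)
```

Because each screen's feed is derived from the shared canvas at playback time, swapping the source clip re-maps every screen at once, which is why nothing has to be pre-split or re-rendered per screen.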

The more complex the show, the more important improvements are to the user interface for Desmedt. “I have been working with UVA for the last year on improving the software, making it more user-friendly. It is not a big firm, so if there is custom stuff that you want created, it is a phone call away. The cooperation of UVA is an advantage; believe me, on a lot of occasions that is not the case.”

For further information, visit www.uva.co.uk.