If you're a regular reader of this magazine, you already know that digital media servers like the High End Systems Catalyst, IRAD RadLite, Diagonal Research NEV7, LSD MBox, and Martin Eureka 3D are being increasingly utilized in all kinds of production applications. Indeed, they are becoming a routine part of many shows today. But what does this mean to the lighting designer and programmer — in fact, why should they be concerned at all? The answer is that since the servers are DMX-controlled, the responsibility for their care and feeding usually falls on the console programmer and the LD — which is not necessarily a bad thing, as it increases their value to the production.

Also, the console programmer is often the most computer-literate person on the lighting crew (or at least that is the view of most production managers), so the programmer is usually going to be the key person to deal with the hardware and software. However, a firm understanding of how to apply digital media servers (and video display products in general) in production environments requires knowledge of computer file resolution and display video resolution, and of their implications, that many lighting professionals (including myself) haven't had to be concerned with until now.

Digital media servers are capable of different output resolutions, depending on the hardware and software used in the system, the internal processing resolution of the operating system, and the graphics file format(s) in use. This article will discuss the evolution of video resolution and formats, the resolutions of broadcast standards, display standards, and computer graphics and video, and how it all relates to DMX-controlled, computer-based digital media servers. There is a lot to learn about the world of video image transmission and display for the lighting professional seeking to make the most of these new tools. Let's start with a little video background first.

Broadcast Video

The first video signals in use were broadcast-type video signals. RF video signals as carried over the air include both audio and video components and are outside the scope of this article. Broadcast video signals vary around the globe, with most countries using one of the three major video formats: NTSC, PAL, or SECAM. While there are actually scores of different video standards, most are simply variations on these three. The United States adopted the NTSC (National Television System Committee) color system, whereas European and other nations use the PAL or SECAM standards. These formats were developed decades ago and are only now beginning to be superseded by HDTV. Compared to most computer graphics signals, these video standards offer low resolution and a relatively low bandwidth of around 5.5MHz (5,500,000Hz).

The NTSC color standard has a horizontal frequency of 15.75kHz, meaning that 15,750 horizontal lines are drawn every second. At NTSC's nominal 30 frames per second, that works out to a theoretical maximum of roughly 525 scan lines per frame. In practical application, the net viewable resolution is further decreased by the real-world limitations of television transmission, storage, and display equipment. The diagram on page 34 charts the visual resolution of various devices and standards.
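That 525-line figure falls straight out of the scan rate. A quick check (assuming NTSC's nominal 30 frames per second):

```python
# NTSC back-of-the-envelope: derive scan lines per frame from the scan rate.
h_freq = 15_750     # horizontal lines drawn per second (15.75kHz)
frame_rate = 30     # NTSC's nominal frames per second

lines_per_frame = h_freq / frame_rate
print(f"{lines_per_frame:.0f} scan lines per frame")  # 525
```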

The long-awaited HDTV (High-Definition Television) standard is the next-generation replacement for NTSC and is currently being implemented by Congressional order. The most common HDTV resolution specs as outlined by the FCC are 1080i and 720p: 1080i offers a resolution of 1920×1080 pixels, interlaced; 720p offers a resolution of 1280×720 pixels, progressive scan.

What is interlaced video versus progressive scan video? In standard interlaced analog TV, the picture is refreshed 50 or 60 times per second (50Hz in European/PAL systems, 60Hz in most NTSC systems, including the American one). Each pass, however, draws only every other horizontal line, leaving the lines in between empty; the next pass draws only the lines missed the time before. In an interlaced picture at 50Hz, therefore, the complete picture actually changes only 25 times per second, so its true frame rate is 25fps.

With progressive scan technology, every frame contains every line, so 50Hz progressive scan video changes the picture 50 times per second, for a frame rate of 50fps. The higher line count also corresponds to a higher scan rate than NTSC and greater bandwidth requirements for all equipment in the video recording and playback chain. Progressive scan also lends itself well to film, because film is by nature progressive: one frame at a time, containing all the information.
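The interlaced arithmetic can be sketched in a few lines (a toy helper of my own, not tied to any particular standard's exact field rates):

```python
# Complete pictures per second: interlaced video needs two fields (odd
# lines, then even lines) to build one full frame; progressive needs one pass.
def full_frame_rate(refresh_hz: float, interlaced: bool) -> float:
    return refresh_hz / 2 if interlaced else refresh_hz

print(full_frame_rate(50, interlaced=True))   # 25.0 fps, PAL-style interlace
print(full_frame_rate(50, interlaced=False))  # 50.0 fps, progressive scan
```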

While television signals are generally combined into a single composite or Y/C signal containing all color components, computer signals generally keep the color signals separate to ensure the highest resolution. There used to be dozens of different computer video signal types among the various computer manufacturers. Since the introduction of the personal computer, IBM has released many computer video standards, from low-resolution MDA and CGA to high-resolution S-VGA and XGA. Though Apple and Sun previously used proprietary formats (and still do in some instances), the IBM standards have been adopted by the video card industry for use on a variety of platforms. Each new video standard improved the resolution, color count, and speed of the video display.

High-Resolution Graphics

Images created by high-resolution computer video cards used in digital media servers and graphic workstations now approach film and photorealistic quality. Resolutions of 1280×1024 are common, along with high vertical refresh rates that eliminate picture flicker. In order to reproduce subtle shading and achieve realistic imagery, the newest high-resolution graphic sources also offer a vast selection of colors, from 32,000 colors to over 16 million. With horizontal scan rates in excess of 64kHz and bandwidths in excess of 100MHz, high-resolution computer graphics require careful interfacing to the data display system if the designer expects to utilize the full resolution of the system.
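Those color counts fall straight out of the bit depth of the frame buffer, as a quick calculation shows:

```python
# Colors available at common video-card bit depths.
for bits in (15, 16, 24):
    print(f"{bits}-bit color: {2 ** bits:,} colors")
# 15-bit yields 32,768 colors (the "32,000" figure above); 24-bit yields
# 16,777,216 (the "over 16 million").
```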

The word resolution can be applied to computer video output signals (like those generated by digital media servers), to display units (like video projectors and monitors), and to the computer graphics and video files themselves (Photoshop, JPEG, QuickTime, MPEG-2). Computers generate a video signal at a certain resolution depending on the hardware and software design. Some computer graphics cards can switch between different resolutions depending on software needs. Resolution in computer video and file graphics terminology is specified in horizontal and vertical pixels, or pixel resolution, where one pixel represents a single dot of the image. A resolution of 640×480 means that there are 640 dots horizontally and 480 dots vertically. The diagram above shows how a simple figure like a circle becomes more clearly defined when more horizontal and vertical pixels are used to represent it.
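It helps to see how fast total pixel counts grow with resolution:

```python
# Total pixel counts at some common resolutions (purely illustrative).
for w, h in [(320, 200), (640, 480), (1024, 768), (1280, 1024)]:
    print(f"{w}x{h} = {w * h:,} pixels")
# 640x480 gives 307,200 dots to draw with; 1280x1024 more than
# quadruples that, so the same circle is defined far more smoothly.
```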

Another use of the term resolution refers to the capability of a video monitor, TV set, LED wall, or video projector to display images with clarity and detail. Rather than using a figure like 640×480 pixels, these resolution figures are quoted as vertical lines of resolution. A projector or monitor may carry a specification of “500 lines of resolution for composite input, 650 for RGB input.” This type of resolution refers to the throughput and video bandwidth of the display device including the internal circuitry, video amplifier, and display hardware like DLP or LCD. A higher line-of-resolution figure equates to greater capability to display fine image details with clarity. Video and data displays are designed for a certain maximum resolution, and different displays have different capabilities. The display(s) used should be capable of displaying the maximum resolution that the digital media server generates, unless you're not using its full file resolution. Video bandwidths, horizontal scan rates, and the display type all determine the maximum displayable resolution and the quality of the image.

Resolution, Bandwidth, Scan Rate

The terms high resolution, high bandwidth, and high scan rate are all related and proportional to each other. High resolution requires a high horizontal scan rate, which in turn results in high bandwidth requirements for the entire data display system. The chart below lists some general ranges that define low, medium, and high resolutions, scan rates, and bandwidths.

This is a key point: The resolution and horizontal scan rate of a computer signal are set parameters that determine the bandwidth required of all equipment downstream in the data display system. When any part of the data display chain lacks sufficient bandwidth to pass the signal, the effects of reduced signal bandwidth are easily observed on the displayed image. This could happen through an incorrect video connection standard between pieces of gear, a switcher with insufficient bandwidth, and so on. If your digital media server is capable of SVGA resolution but your projector is only capable of EGA, you will not be able to take advantage of the full resolution of the digital media server's video display card. Conversely, if the display device is high resolution but the images coming from the digital media server are low resolution, a line doubler may need to be installed to make those images hold up next to high-resolution live IMAG camera outputs, for example.
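The weakest-link idea can be put into numbers. Here is a minimal sketch; the `required_bandwidth_mhz` helper, the 30% blanking overhead, and the equipment figures are all illustrative assumptions, not from any spec:

```python
# Rough bandwidth estimate for a data display chain (rule-of-thumb numbers;
# the equipment names and MHz ratings below are hypothetical).
def required_bandwidth_mhz(h_pixels, v_pixels, refresh_hz, overhead=1.3):
    """Approximate video bandwidth: pixel rate plus ~30% blanking overhead."""
    return h_pixels * v_pixels * refresh_hz * overhead / 1e6

chain_mhz = {"server video card": 150, "switcher": 80, "projector": 120}

need = required_bandwidth_mhz(1280, 1024, 72)  # high-res signal at 72Hz
weakest = min(chain_mhz, key=chain_mhz.get)
print(f"need ~{need:.0f}MHz; weakest link: {weakest} at {chain_mhz[weakest]}MHz")
# The 80MHz switcher caps the chain well below what this signal requires,
# and the reduced bandwidth shows up on the displayed image.
```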

                      Low        Medium        High
Resolution            320×200    640×480       1024×768 or more
Horizontal Scan Rate  15.75kHz   16 to 35kHz   36kHz and above
Bandwidth             5-20MHz    30-50MHz      60MHz and above

Video and Still Image File Resolution

Let's now discuss the computer file resolution of digital media servers. Mac-based systems (like Catalyst, for example) use Apple's QuickTime standard for live video playback. QuickTime is a fully professional, high-resolution video delivery and production format; it's professional enough to have been used by filmmakers such as Robert Rodriguez on feature films like Spy Kids. The accompanying chart lists a range of resolutions used with QuickTime movies, from HDTV down to low-res Internet.

As you can see, there is a wide range of resolutions available for use. For example, the base layer of Catalyst (the live motion layer) can reproduce a 30fps movie at 720×480 (NTSC Standard Definition DV, 4:3 aspect ratio, non-square pixel) with no drop in frame rate. This resolution is currently considered "broadcast quality" in image resolution and bandwidth. Referring to our earlier resolution discussion, a QuickTime image with 480 lines of resolution is higher than the roughly 330 lines of broadcast TV, as well as the 425 lines of S-Video or DVD.

This raises an interesting point. QuickTime running on an Apple G4 with a powerful video card is capable of producing very high-resolution video, much higher quality than is necessary in many applications. In other words, if your display system consists of television monitors with a maximum resolution of 330 lines, perhaps you can use lower-resolution movies in your digital media server, which are smaller files. Smaller files copy, transmit, and load faster, and generally don't eat up as much bandwidth all the way down the line to the display devices.
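To put numbers on the savings, here is a rough uncompressed data-rate comparison (assuming 24-bit color and ignoring codec compression, so the absolute figures are generous, but the ratio holds):

```python
# Raw (uncompressed) video data rates; real QuickTime files are smaller,
# but the relative sizes scale the same way.
def uncompressed_mb_per_sec(w, h, fps=30, bytes_per_pixel=3):
    return w * h * bytes_per_pixel * fps / 1e6

full = uncompressed_mb_per_sec(720, 480)   # broadcast-quality DV frame size
small = uncompressed_mb_per_sec(480, 360)  # plenty for a 330-line monitor
print(f"{full:.1f} MB/s vs {small:.1f} MB/s -- {full / small:.1f}x the data")
```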

The other popular professional computer movie file format is MPEG-2, which is supported by the digital media servers running on WinTel platforms (IRAD RadLite and Martin Eureka 3D). The biggest difference between it and QuickTime is that MPEG-2 is a compression scheme, whereas QuickTime is a container format whose movies can use light compression or none at all. And even though MPEG-2 video was not developed with studio applications in mind, a set of comparison tests carried out by MPEG confirmed that MPEG-2 video was at least as good as, and in many cases better than, standards or specifications developed for high-bitrate or studio applications.

Now let's talk a bit about still image files as opposed to movies. It's important in the case of Catalyst to take into consideration the area of the rectangular image that is sometimes sacrificed when the mirror head is used. Keep important parts of the image near the center of the field (logos with non-black backgrounds should also be kept in the center of the field). If you're using Catalyst without the mirror head (or any of the other systems on the market), this is not an issue.

In the case of Catalyst, the maximum file resolution is 1024×2048 pixels, and it will accept Photoshop, TIFF, GIF, BMP, PICT, 3DMF, PNG, QuickTime, and Targa file formats. The optimum format choice depends on the nature of the image. Most photographs work well in JPEG format, though TIFF offers less compression. For simple line art with 256 colors, GIF is a good choice. Again, larger file size affects load time in most systems, as well as hard disk capacity.

Now, you might ask, "If most projectors and monitors only offer 330-525 lines of resolution, why use a file with 2048 lines of resolution?" The answer comes when you send the "zoom in" command to a digital media server: it zooms digitally. If your image is low-res to begin with (I define that as 300dpi, or pixels per inch), it's going to look really grainy at 10x zoom. So if you think you're going to want to zoom in on different parts of the image (it doesn't have to be the center), high-resolution images are desirable. And if your images are stored at lower resolutions, leave them their original size; you'll get fewer distortions letting the media server reshape and rescale images than doing it beforehand in your graphics program.
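The zoom trade-off is easy to quantify. A sketch (the helper name is mine; the arithmetic simply divides the source size by the zoom factor):

```python
# Digital zoom magnifies a window of the source image to fill the frame.
def source_pixels_visible(src_w, src_h, zoom):
    """Width/height of the source window shown on screen at a given zoom."""
    return src_w / zoom, src_h / zoom

# A full-resolution 1024x2048 Catalyst still at 10x digital zoom:
w, h = source_pixels_visible(1024, 2048, 10)
print(f"only {w:.0f}x{h:.0f} source pixels stretched across the whole output")
```

Roughly 102×205 original pixels get blown up to fill the frame, which is why a low-resolution original turns grainy so quickly.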

Once you've connected the digital media server's output to the interface box (hopefully supplied with your system), the interface will output an analog video signal that can be routed to a video mixer, line amp, switcher, or display device directly. This is desirable because the video output from your computer's video card is not powerful enough to carry over long distances, and its connectors and cable systems were designed for computers, not production. In addition, there are all sorts of video routers, switchers, line amps, etc., built for pro video standards. Converting the computer's video output to RGB or Component video will allow the use of devices that support these high-bandwidth pro standards, and will also look familiar to the video guys you are dealing with. In video systems, both analog and digital, many different video formats and interfaces are used in different applications for various technical and economic reasons. Here is a list of the most commonly used analog video signal interface types, from best to worst in picture quality. All are designed to use 75-ohm coaxial cable (one coaxial cable or more per interface):

QuickTime movie resolutions, from HDTV to low-res Internet:

Frame size (pixels)   Type of video
1920×1080             High Definition, 16:9, Square Pixel
1280×720              High Definition, 16:9, Square Pixel
720×486               Standard Definition, 4:3, Non-Square Pixel for NTSC
720×480               Standard Definition DV, 4:3, Non-Square Pixel for NTSC
720×576               Standard Definition, 4:3, Non-Square Pixel for PAL
640×480               Multimedia, 4:3, Square Pixel
480×360               Multimedia, 4:3, Square Pixel
320×240               Multimedia, 4:3, Square Pixel
240×180               Multimedia, 4:3, Square Pixel
160×120               Multimedia, 4:3, Square Pixel

RGB video is the highest-quality video used in the professional A/V presentation industry and computer video. It has one wire for each color, usually with its own RF shielding to reduce any interference and any subsequent quality degradation. Nothing is better. How the sync information is transferred varies from one RGB interface application to another (possibilities are sync-on-green, separate composite sync, and separate HSYNC and VSYNC signals). This format uses the most bandwidth to store or transmit, as each separate component requires full bandwidth.

Component video is a bit of a misnomer: RGB is technically also component video, that is, video whose components are transmitted separately. When someone refers to Component video, they're usually referring to color difference component video, in which the picture is transported as luminance plus two "color difference" signals. This format offers a quality image but uses less bandwidth than RGB. Color difference component video (YUV or YCbCr) is the highest-quality form of video typically used in the TV broadcasting industry.

S-Video combines the color components into two wires, Luma and Chroma (Y and C), or brightness and color. Each wire includes its own shielding to prevent interference. Four-pin mini-DIN connectors for S-Video are the ones most commonly seen on video monitors, DVD players, Hi8 movie cameras, etc. This format became popular when S-VHS machines were released and is sometimes mistakenly called "the S-VHS standard."

Composite video (often just called "video," after its connector label) uses one wire (with its own shielding) to carry all video information (red, green, blue, and sync) mixed together. The picture is generally pretty good, but depends greatly on the quality of the generating and receiving equipment. This format is often referred to as PAL video or NTSC video, depending on which standard is in use, and is the one most often found on lower-end consumer-grade video equipment. The connector is usually an RCA type.

RF video is the format that comes out of the cable outlet on your wall and goes into the cable plug on the back of your TV. This is one shielded wire carrying not only the NTSC or PAL video information, but also the sound information as well. In the case of the cable coming out of your wall, this one wire contains many channels. Unfortunately, in real-life situations, those many channels and the sound with video can interfere with each other and cause picture quality to degrade.

Closing Thoughts

I think there are two ways of looking at putting together a basic system comprising an automated lighting control console (Wholehog II, grandMA, Maxxyz), DMX-controlled digital media server, and a video projector:

Content > Server > Display
Display > Server > Content

In the first model, the content determines what the resolution of the system will have to be. In other words, high-resolution images have been deemed necessary to achieve the artistic goals. That decision determines which digital media server will be used (is the content high-res QuickTime and TIFF, or MPEG-2 and low-res bitmaps?), and then the minimum resolution that the display devices (and everything in between) will need in order to deliver the content at full resolution. Image quality and content are king in example one.

In the second example, the user doesn't have the luxury of determining the resolution of the display devices: they are what they are. If they are TV monitors with a maximum of 330 lines of resolution, you can probably use movies with resolutions like 480×360, which will load and play back faster. Also, try to use files and video transmission standards that offer at least 20% more resolution than you need; you can lose that much by the time the signal arrives at the other end. For us, that video standard will most likely be RGB or Component video. And everything between the media server and the display devices (projectors, LED videowalls, plasma displays) will have to support that resolution.
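That 20% headroom guideline translates into a quick calculation (the helper is my own rule-of-thumb sketch, not a formula from any standard):

```python
# Rule of thumb: source material should carry ~20% more resolution than the
# display needs, to absorb losses through the transmission chain.
def needed_source_lines(display_lines, headroom=0.20):
    return round(display_lines * (1 + headroom))

print(needed_source_lines(330))  # 396 -- for a 330-line TV monitor
print(needed_source_lines(480))  # 576 -- for a 480-line display
```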

DMX-controlled digital media servers coupled with video projectors and other display devices allow creativity never before seen in many areas of lighting and production. But they also require some specialized knowledge and foresight that is usually not part of the lighting world. Hopefully, after reading this article, you now have a little more of that knowledge. Now go download some cool QuickTime movies!

Robert Mokry is an 18-year veteran of the lighting industry. He can be contacted at rmokry@robertmokry.com.