“Revolutionary.”

“Paradigm-shifting.”

“Holy Grail.”

“Irrelevant.”

“Overrated.”

Buzzwords abound. How about statements?

“Changing the way lighting can be done…”

“Folding projection into lighting…”

“Just another signal path…”

“A powerful addition to the projection design toolset…”

No lack there, either. We are talking about media servers. Love them or hate them, understand them or not, they are here to stay.

To investigate these products thoroughly, we wanted to assess real capabilities and workflows. We wanted to talk to the users, and we wanted to get hands-on wherever we could.

At first blush, the 800-pound gorilla of media platforms appears to be Catalyst Pro v3.0 from High End Systems, with maybe one or two runners-up. The fact is that many major lighting manufacturers and some larger vendors have developed, or are currently developing, media server solutions. There is also a developing crop of solutions by independents. There is even a whole tier of live video serving and mixing solutions that have been created with the rave designer in mind.

The bottom line is that the field is crowding, which, in our estimation, will mean a couple of things. First, the development of features demanded by users will be fueled and sped along by competition, and all of these platforms will develop rapidly. Second, some will succeed, and others will not. The media server category is specialized and expensive to develop and implement. Just as the market only supports several truly popular and widely used lighting boards, so too will there probably be “several” media servers playing in the professional production environment. Faced with this conclusion, we opted to narrow the focus of our article to the most widely adopted and known products.

One thing common to all these solutions is that they are image-serving devices controlled and accessed via lighting control interfaces. In almost every case, these media devices have been created by lighting companies and lighting mindsets. One might ask why this solution springs from the lighting industry, but production history has shown us breakthrough technology development from unexpected places before. The storied history of Vari-Lite and the initial development by audio experts leaps to mind. Whether or not historical metaphor is relevant, what this development has brought about is a very different look at controlling and organizing media.

All of the solutions feature a separate computer that hosts the server program and is connected to fast disk arrays with media files or, in some cases, to other media sources via control protocols. All have some sort of multiple media-layering capabilities. All have some ability to transform media “geometrically” via scaling, rotation, or motion. Let's look at each one individually to explore their features in more detail.

HIGH END SYSTEMS
CATALYST PRO v3.0

Currently in its third software version, the Catalyst Pro v3.0 is a powerful and stable platform for media serving and control. Catalyst sets a high bar for performance and capability with some very important new features.

Catalyst resides on a Power Mac® G5 and can utilize a fast SATA drive array, a SCSI RAID, or a Fiber Channel disk array for media storage. The Macintosh, of course, uses the Apple® OS X, making it a veritable supercomputer with its 64-bit architecture. Naturally, the software relies heavily on Quicktime® functionality to deliver and play back media. Quicktime is Apple's own media signal path, and it is immensely powerful and flexible just as a component of other systems. A lighting person might regard this as the most programmable media “softpatch” they had ever used. For those on the projection side of the fence, the benefits are already well known.

Version 3.0 differs from previous versions in dealing with layers. The Catalyst v3.0 can utilize up to four concurrent layers of media. Each of these media layers is treated by the Catalyst as a separate fixture with its own range of parameters and associated control channels. The layer fixtures require 40 channels of DMX control each. The methodology for working with layers in Catalyst is to use the first layer as a base, stacking additional layers on with independent control of media transformations, simultaneously playing all layers back as a composite. For more info on precisely what can be accomplished by doing this, check out our media server primer in Entertainment Design [February 2004].
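To make the layer-fixture channel math concrete, here is a quick Python sketch of how a console patch might compute start addresses for four 40-channel layer fixtures. The channel count comes from the description above; everything else is our own illustration, not High End's actual protocol map.

```python
# Illustrative sketch: DMX start addresses for Catalyst-style layer
# "fixtures," each occupying a 40-channel block. Names and layout here
# are hypothetical, not High End's published channel map.

CHANNELS_PER_LAYER = 40

def layer_start_address(base_address: int, layer_index: int) -> int:
    """Return the DMX start address for a layer (0-based index)."""
    return base_address + layer_index * CHANNELS_PER_LAYER

# Patch four layers starting at DMX address 1:
patch = {f"layer_{i + 1}": layer_start_address(1, i) for i in range(4)}
print(patch)  # {'layer_1': 1, 'layer_2': 41, 'layer_3': 81, 'layer_4': 121}
```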

This independent control of layers is how High End tackles the problem of crossfading from image to image. Operating in an HTP format, the layers can crossfade from one to another. What you cannot do is crossfade from one four-layer composite cue to another. Therefore, media crossfades have to be thought out carefully in terms of cueing, using follow cues and part cues to call up, fade, and roll various media.

Catalyst has external monitoring in the form of one or two monitors connected to the Macintosh. The monitoring has the ability to scroll through libraries, previewing clips and effects or layer combinations before output. High End also provides advanced info in the monitoring, referred to as the HUD (Heads Up Display). Media clip properties, format type, playback speed, and other attributes can be determined utilizing the HUD.

The Catalyst is capable of two discrete outputs if you forgo the previewing capability; with previewing, one output is possible. The signal outputs are routed out through the Catalyst Interface Box (CIB). The CIB provides two RGBHV outputs. High End has also provided for the DV1 Dual Video Distribution Amplifier. Actually consisting of two separate VDAs, the DV1 boosts two output signals for high-quality transmission through up to 250' of cable. The DV1 also provides two VGA ports for viewing the output signals. It would be nice to see some options in the CIB/DV1 part of Catalyst. More and more staging applications are making the step up to SDI or even HD-SDI, and these are solid options for High End to consider. Also intriguing might be other options for the DV1, which could allow the display of multiple-screen, contiguously composed images through its ability to encompass up to four VGA outs, or even FireWire 800, which doesn't share the latency problems of its older sibling.

Catalyst is a solid performer in the area of media manipulation. Geometric manipulations take the form of scaling, rotation, or horizontal/vertical movement of media. These attributes can be given timed changes via the timing controls on the lighting desk. Layers can be distorted, cropped, or keystone-corrected. Color correction is available on layers as well. Three-color mixing is accomplished utilizing the color encoders on the lighting desk. Layers with no media assigned can be used to do color-mixed, plain light output, or color-over-gradient reveal layers to achieve really subtle and shaped lighting texture.

In terms of layer mixing, there is a certain hierarchy to respect and a good deal of flexibility to be found therein. Since the layers utilize an HTP ordering (Highest Takes Precedence), it is possible to use still or moving images with alpha channels to “mask” layers below, allowing for images to gradate to black or to be framed by other graphical elements. To those designers who have dabbled or are already experts in the use of compositing applications, this will be a familiar landscape. It would be a substantial improvement to see the implementation of layer interaction modes similar to those found in image manipulation programs like Adobe® Photoshop®. This would allow top layers to affect only the color or luminance of layers underneath, or to be overlaid or screened over underlying layers. Electronic image designers know that this is a powerful core technique in creating media.
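For readers who haven't worked in compositing applications, the difference between HTP stacking and Photoshop-style interaction modes can be sketched per pixel, with channel values running from 0.0 to 1.0. The multiply and screen formulas below are the standard compositing definitions, offered purely as illustration; Catalyst itself exposes none of them today.

```python
# Per-pixel sketch (channel values 0.0-1.0) comparing the HTP behavior
# Catalyst's layers use today with two standard compositing blend modes
# of the kind the article wishes for.

def htp(bottom: float, top: float) -> float:
    """Highest Takes Precedence: the brighter value wins."""
    return max(bottom, top)

def multiply(bottom: float, top: float) -> float:
    """Top layer darkens the bottom (never brighter than either)."""
    return bottom * top

def screen(bottom: float, top: float) -> float:
    """Top layer lightens the bottom (never darker than either)."""
    return 1.0 - (1.0 - bottom) * (1.0 - top)

print(htp(0.3, 0.7), multiply(0.5, 0.5), screen(0.5, 0.5))
# 0.7 0.25 0.75
```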

Catalyst provides a lot of power through preset effects. Stalwarts like transparent black or transparent white allow for luma keying through the brighter or darker part of images to reveal layers below. Contrast effects give the ability to work edges and transitional color and lighting values. The Solaris effects give interesting control over secondary color channels for results that range from subtle to psychedelic.

Catalyst comes with a library of 3D objects to which you can map imagery. We've also been told by reliable sources that High End can convert and provide you with your own source objects, given enough notice. It would be better, of course, if users could just import their own 3D objects. It's important to understand that Catalyst imports this truly 3D object as a flat 2D layer. A further limitation is found in the inability to keystone-correct the final output of the 3D object, as is possible with the layers.

The Catalyst Pro v3.0 manual recommends rendering media at 720×480 in the Quicktime DV/NTSC CODEC. With these settings and compression, users can rest assured they will get real time performance using all four layers. Media playback resolution is limited only by the native capabilities of the Radeon 9800 Pro card and by the display device. Designers essentially trade real time multi-layer response for layer resolution, but by planning and laying out cueing, it is possible to operate flexibly at high definition.

VLPS
EX1 MEDIA SERVER

The VLPS EX1 media server grew out of “what if” wonderland. Colloquial legend has it that Rusty Brutsche, Jr., an avid user of the animation program Maya, was overheard asking, “What if you could do 3D media using a light board?” As a result, the EX1 takes an entirely unique approach to processing and serving video.

EX1 operates in a 3D environment and allows for the use of user-generated 3D models in the “scene” (up to three models, actually). In this 3D environment, designers have two dedicated media layers in addition to the 3D elements. There are also four fixed, color-changeable spotlights and a big, broad fill light in the design “space.”

It can be a little daunting to realize the power in this if you haven't worked in a 3D program. An appropriate metaphor here is a sound stage. Picture that, on this sound stage, you have a camera that can move on any axis. Imagine you can put up to three set “objects” on this sound stage and you have a built-in lighting system. To top it off, you have two irising backdrops that magically track your camera, whatever direction it turns. Now, look through that camera…this is what EX1 is outputting.

How does EX1 do this? Instead of relying on a computer's video display card to serve frames, EX1 utilizes the OpenGL protocol found on high-end 3D design and gaming cards to serve up not just frames, but the 3D environment, in realtime. This is a totally unique approach, and it affords EX1 a lot of room to grow. Similar to Quicktime, OpenGL has an enormous amount of built-in effects capacity that the engineering team should open up. VLPS Los Angeles senior programmer Bryan Faris notes, “Texture mapping as a video effect is not code. It doesn't break the code every time we add an effect capability; it's just something else the texture is doing as far as the computer is concerned. It puts a lot of art back in the hands of the artist.”

You can ignore all of this, of course, and just play back your media on one of the two media layers. Ah, but what if you needed an additional layer or two to recreate the kind of multiple-masking effect you could build on Catalyst? On EX1, you would add flat, planar 3D “objects” with media mapped onto them as texture. Instantly, you have two more layers, but layers that can move back and forth toward the camera, as well as up, down, and sideways. Take the next step, and you can import fractal shapes, humanoid characters, or whatever and map media onto these objects. Then, you can navigate the camera around and through said objects.

For concert designers and purveyors of pure texture, this leaves gaping opportunities for making cool flash fast. But hang on, there's also an application here for designers trying to map media onto scenery. By creating 3D models of actual onstage scenic elements, then placing those virtual objects so that the virtual camera matches the projector's real-world perspective, you give yourself a canvas that appropriately distorts your media and makes it appear “mapped” onto the actual set piece. We had developed a similar methodology that involved previsualizing in Softimage XSI (a 3D application) prior to production, but this is a far more interactive method for tech.

All this is well and good, but where is EX1 now, comparatively?

EX1 has all of the geometric media manipulations we've discussed so far. Keystone correction is available as a function only on the final output, as opposed to per individual layer. This has a certain sensibility when you consider that this is a 3D scene, and the correction goes on your view of it, rather than on individual elements. Still, it would be nice to be able to individually perspective-shape layers, achieving multiple-perspective keystoning.

EX1 allows for color correction per layer. What's missing here, at present, is the ability to selectively affect color channels in an image. This would allow designers to just warm up the tones in a picture, for example, basically giving control over the individual red, green, and blue channels that mix in each pixel to produce full color. It's a subtle but powerful capability. To be fair to VLPS, none of these servers really has secondary color correction tools, but some get interesting options via applying chromatic effects.
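As a concrete (and entirely hypothetical) illustration of what per-channel correction buys you, here is the "warm up the tones" example in a few lines of Python; the gain values are invented for demonstration and have nothing to do with EX1's control channels.

```python
# Hypothetical sketch of per-channel color correction: scale red, green,
# and blue independently, clamping results to the 0-255 range.

def correct_pixel(r, g, b, r_gain=1.0, g_gain=1.0, b_gain=1.0):
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r * r_gain), clamp(g * g_gain), clamp(b * b_gain)

# "Warm up" a neutral gray slightly: boost red, trim blue.
print(correct_pixel(128, 128, 128, r_gain=1.1, b_gain=0.9))  # (141, 128, 115)
```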

Each layer has media playback speed controls, but the ability to define In and Out points in your media is missing. Let's say you have a clip of a tornado to use from The Wizard of Oz, and you only want to use the middle third of the clip for playback. With EX1, you would have to define such a clip, from the original, in an exterior media-editing program like Final Cut Pro, Premiere Pro, etc. VLPS has indicated that this feature is high on the priority list for addition to the platform.

The EX1 is resolution-independent; once again, playback capability is impacted only by the performance of the host Windows XP system and the speed and size of the disk drive array. Because the EX1 is availing itself of the realtime texturing in the 3D card to provide video playback, it scales up in resolution without any apparent hit to performance.

On the signal output side, the EX1 has its own outboard box with the familiar RGBHV plugs. Again, it would be good to see this platform move toward options here like SDI, HD-SDI, DVI, etc.

There's another factor in considering the EX1. Like all of these platforms, it can be used with most advanced lighting desks. However, when paired with VLPS's own Virtuoso or Virtuoso DX, the programming interface takes an amazing leap. Both boards have arrangements of encoders and preset surfaces that make almost all controls available at all times, without navigating to them. Also, the VLPS desks feature an integrated media display window that allows users to preview, choose, and work with the media on the server in a very visual way, eliminating the need to scroll through clips on an encoder, view them, and then set up presets (which can easily eat up an entire day of tech alone). With the integrated viewer, clip libraries are directly visible as thumbnails. Roll the mouse over one, and it animates if the file has motion; double click, and it loads into your selected layer. It's that easy. The knockout addition to this feature is that it is platform-independent and works with EX1, Catalyst, and others. Beyond EX1, this makes the Virtuoso consoles a real advantage in programming almost any media server choice.

Currently, EX1 can be seen feeding the screens of the immense television hit, American Idol. The EX1 has also been out on the recent Fleetwood Mac tour, and it's popping up in plenty of other places. Our gut instinct is that the EX1 is in good hands: it is being developed by the proven production brains responsible for some amazing fixtures and control surfaces.

FOURTH PHASE/LSD
MBOX

The Mbox is an interesting device with a bit of history. A result of a development process that produced LSD's Icon M Light, the Mbox has been around for a while now and has its own media handling and control approach. The server was initially designed to display Icon M graphics on LED walls. It has some tremendous effects, and it's been proven in the field. Here is another case of a vendor developing its own product upon seeing a viable target market, responding to requests to add features, and encouraging development through constant contact with end users.

The Mbox runs on the Power Mac G5 platform, specifically the G5 Xserve, which has a fabulous rack-mounted form factor. The software will actually work with other G5 and G4 models, as well.

The system uses the internal drives (SATA) of the Macintosh to store and play back the media. The Mbox's I/O module plays a key role in actual media playback, however. All Mbox video output runs through the module, which provides smooth fading to video black. It is the last stage in Mbox video processing before the image is sent downstream to the display devices. The module allows fading of the image without utilizing the Macintosh's processor and without the image becoming transparent in the fade.

This module also provides some peace of mind for programmers and operators by automatically going to black if the lighting console fails or the control signal becomes corrupted. Fourth Phase Mbox programmer Drew Findley comments, “It hasn't happened yet, knock on wood, but the idea is that the ‘Welcome to Macintosh’ screen will never appear, inadvertently, on stage.” The module is equipped with the same RGBHV outputs as the other servers we have discussed thus far, with Fourth Phase planning to introduce an SDI option with Genlock in the near future.

The Mbox output can be set to any resolution the host computer supports. The most common resolutions used, however, are 800×600 or 1024×768. Mbox is yet another user of the powerful Quicktime format for video processing. The server will play back any CODEC that Quicktime supports, but Findley reports that “choosing a CODEC is a fine balance between image quality and playback performance.” As we've seen, this rings true with most of the serving solutions. In the case of the Mbox, the Quicktime CODECs that have proven performance are DV-NTSC/Pro High Quality and Photo JPEG Medium Quality.

The still-image side of Mbox's story features compatibility with most popular file formats including JPEG, TIFF, GIF, and PICT. Image formats with an alpha channel (TIFF, PICT) can also be used to build transparency in the displayed image. For alpha channels in moving media clips, Mbox provides an Alpha Channel Tool. This tool essentially uses a luma-keying model to create transparency based on black levels in a clip. The darker the area of the clip, the more transparency it has. Those familiar with Catalyst Pro v3.0's transparent black effects will find this familiar, although Mbox's controls work differently and are more precise.
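The luma-keying model is simple enough to sketch: the darker a pixel, the more transparent it becomes. The threshold and softness controls below are our own invention for illustration; Mbox's actual Alpha Channel Tool parameters differ.

```python
# Illustrative luma-key sketch: map a pixel's luminance (0.0-1.0) to an
# alpha value (0.0-1.0). Below the threshold the pixel is fully
# transparent; above threshold + softness it is fully opaque; in between
# it ramps linearly. Parameter names here are hypothetical.

def luma_key_alpha(luma: float, threshold: float = 0.1,
                   softness: float = 0.2) -> float:
    if luma <= threshold:
        return 0.0
    if luma >= threshold + softness:
        return 1.0
    return (luma - threshold) / softness

print(luma_key_alpha(0.05), luma_key_alpha(0.9))  # 0.0 1.0
```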

Currently, Mbox does not include a live input or serial control of external video devices. We think these capabilities are decidedly important for furthering options and control via this methodology, and they would be a welcome addition.

Regarding media transformations, Mbox has the features designers expect: rotation in three axes, scaling, tiling, X/Y positioning, and cropping. Mbox sets itself apart from every other server in the field by also offering real-time blur capabilities for softening the image.

Each layer in Mbox has CMY additive color controls that can be applied. Mbox is also capable of a graduated desaturation of media right down to grayscale. The color controls occur downstream of the desaturation controls.

Mbox also has In and Out point selection capabilities. Each clip on each layer can have discrete In and Out points. Mbox has a variety of loopable playback modes that can reference these In and Out points:

  • Loop Forward Mode starts at the In point, plays forward to Out, then loops back to start again at the In point.
  • Loop Reverse starts clips at the Out point, playing backwards to the In point, then looping back to start again (in reverse) at the Out point.
  • Once Forward starts at the In point, plays once through, and freezes back on the first frame.
  • Once Reverse…same as above except…that's right, in reverse.
  • Bounce Forward starts at the In point, plays to the Out point, and “bounces” back, reversing through the image back to the first frame.
  • Random displays random frames from the clip in random order.
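The arithmetic behind these modes is easy to sketch: given an elapsed frame count and the In/Out points, compute which frame to show. The mode names below mirror the list above (including Once Forward's freeze back on the first frame), but the code itself is our own illustration, not Mbox's.

```python
import random

def frame_for(mode: str, elapsed: int, in_pt: int, out_pt: int) -> int:
    """Map an elapsed frame count to a frame index between In and Out."""
    span = out_pt - in_pt + 1  # number of frames in the In/Out range
    if mode == "loop_forward":
        return in_pt + elapsed % span
    if mode == "loop_reverse":
        return out_pt - elapsed % span
    if mode == "once_forward":    # plays once, freezes back on first frame
        return in_pt + elapsed if elapsed < span else in_pt
    if mode == "once_reverse":    # same as above, in reverse
        return out_pt - elapsed if elapsed < span else out_pt
    if mode == "bounce_forward":  # forward, then backward, repeating
        if span == 1:
            return in_pt
        pos = elapsed % (2 * (span - 1))
        return in_pt + (pos if pos < span else 2 * (span - 1) - pos)
    if mode == "random":
        return random.randint(in_pt, out_pt)
    raise ValueError(f"unknown mode: {mode}")
```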

Each layer has discrete speed control that can allow for slow motion, or faster, playback.

The Mbox features multi-layer playback. The platform currently supports three layers of video, still imagery, or vector graphics. Mbox does not have any layer interaction modes, and layer interaction is defined only by opacity values. This means that designers can lay down a base clip and then layer masks or gobo images over it.

Where Mbox really excels is in media playback quality. Images look exceptionally sweet on Mbox. Although it would seem that any of the platforms that utilize Quicktime as their base video engine should have similar playback quality, subjectively, we found this not to be the case; in fact, there are varying degrees to which Quicktime can be used. In Fourth Phase's case, a lot of time and dollars have been spent to optimize the quality levels for real time playback. Frame rates are silky smooth, and output quality is maintained from original media quality. The Mbox is also very agile in clip selection speed and playback, making it superior in live production situations.

Another plus for Mbox is the ability to cross-dissolve between images within the same layer. On other platforms, dissolving between layers is the only option for crossfading. This is another one of those subtle, seemingly minor workflow differences that can have a cumulatively huge impact. It's the difference between having to write lots of part and follow cues to land things correctly as opposed to writing a simple linear cue stack and having the fade happen automatically.

Fourth Phase's inclusion of Genlock in the Mbox allows designers to achieve absolute synchronicity between multiple servers, as well as other video playback devices. Multiple Mboxes have absolute frame-accurate playback capability, not to be underestimated when you're trying to get multiple media displays in perfect sync.

Mbox is capable of working with DMX and Art-Net, and profiles exist for it to work with WholeHog® II, WholeHog® III, grandMA, and Maxxyz. Fourth Phase says it has designed Mbox to be fully capable and easy to operate across the spectrum of lighting desks.

Mbox has also had a slew of high-profile appearances, including the latest tours of The Eagles and Bon Jovi. Recently, the platform was used on Jay-Z's appearance at Madison Square Garden, feeding ten large LED screens, as well as an extensive G-Lec LED curtain feature (if you haven't seen a G-LEC curtain yet, Google their website, and enjoy drooling).

GREEN HIPPO/DHA
HIPPOTIZER

Like many of the platforms discussed, Hippotizer comes from a unique direction. Green Hippo was formed in 1999 by a group of technicians and artists who had backgrounds in corporate presentations, control systems, and the arts. The company initially worked on solutions that controlled DVD players for playback. The team soon progressed to computer server-based display and, by 2000, had developed a touchscreen-controlled media playback system that was ultimately purchased outright and marketed to clubs by Luminar Leisure.

In 2001, Green Hippo gathered a new team from around the world to develop the next platform for real time VJ control. The Hippotizer was introduced at BAR 2001 and quickly became a favorite on the rave circuit.

In Spring 2003, Green Hippo was approached by a sister company of DHA (David Hersey Associates), Scene Change Ltd., to develop a version of Hippotizer suited to the more scripted and controlled environments of live production. This new development would also encompass the use of lighting controls to trigger playback.

This “mixed” heritage means that Hippotizer has some abilities and aspects not shared by its companions in this category. Because it was designed to play back clips in an instant, offering maximum live flexibility and capability, the Hippotizer is wickedly fast in response to user commands. Designers can literally work completely improvisationally with no latency whatsoever.

The platform's development also included several stages of control, and this plays to the end user's advantage. The Hippotizer is available in four variations of capability and control, with some interesting options: Hippotizer Lite, Hippotizer Pro, Hippotizer Club DMX, and Hippotizer Concert DMX.

All options have Composite, S-Video, RGBHV, and DVI outputs, featuring a wider range than any platform discussed so far. All of the variations also have live video input capability (via firewire), and they all come with pre-installed media.

The Lite and Pro platforms both have custom controllers and don't work with lighting desks. These controllers are really fabulous and flexible, though, and have the responsive capabilities discussed initially. The Lite version has a custom, 49-key control pad that triggers video and accesses functionalities. The Pro version has a large touchscreen controller with previewing and visual clip selection.

Both Lite and Pro platforms operate at 800 × 600 output resolution. Both have the ability to store and serve up to 190 moving clips and up to 10,000 high-resolution stills. There is room provided for up to 70 user-created pieces of media. Since these versions have integrated control, they both utilize a media management software interface to set up and organize clips, effects, and presets. Both the Lite and Pro versions are limited to two-layer playback and have almost 200 built-in overlay effects. Either of these would be a flexible and viable option for a budget-minded production, as well as for clubs. The limited layer count is somewhat restrictive, but with careful planning and design, it's not insurmountable.

The Club DMX and Concert DMX versions both avail themselves of exterior lighting desk control with DMX or Art-Net control. The “DMX” variants of the Hippotizer both feature larger output resolution than their Lite and Pro siblings at 1024×768. Their storage capabilities vastly outstrip the two other versions as well, with storage on fast SATA disk arrays for up to 10,000 moving clips and 10,000 still images.

The Club DMX version has fewer transformation and effects capabilities and thus weighs in at a lightweight 16 DMX channels. It is also limited to one playback layer, which certainly lacks flexibility when compared with the larger pool of choices here. The Concert version has media transformations equal to other servers we have discussed (scaling, rotation, movement, media speed, etc.) and utilizes 64 DMX channels per server. It is able to manipulate and use three layers.

Hippotizer benefits from an installation of the entire DHA line of gobos, something the LDs out there are bound to find attractive. The Concert DMX platform is also capable of realtime variable image blurring, one of the “holy grail” factors in media serving.

In and Out points are assignable for media, and CMY color correction is available in both additive and subtractive modes. Eight-point keystone correction can be applied to the final output only.

What the Hippotizer has that the other platforms lack is an accurate, zero-latency beat mapper, allowing effects and media cueing to derive directly from the beat and waveform of audio input. This is a huge advantage for concert LDs, who can make effects and media react musically, or have media sequences trigger automatically based on audio input. We make extensive use of beat mapping to generate keyframes for effects in the compositing stage of our work, and imagining this in a real time environment makes our heads spin a bit. Every server ought to have this ability.
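A beat mapper of this kind can be approximated with simple energy-based onset detection: flag a "beat" whenever a short window's audio energy jumps well above the recent average. The window sizes and threshold below are invented for illustration and bear no relation to Green Hippo's actual implementation.

```python
def detect_beats(samples, window=1024, history=43, threshold=1.5):
    """Return sample indices where a beat (energy spike) is detected."""
    energies, beats = [], []
    for i in range(0, len(samples) - window, window):
        # Mean energy of this window of audio samples.
        e = sum(s * s for s in samples[i:i + window]) / window
        if energies:
            recent = energies[-history:]
            if e > threshold * (sum(recent) / len(recent)):
                beats.append(i)
        energies.append(e)
    return beats

# A quiet signal with one loud burst yields a single detected beat:
quiet, loud = [0.01] * 4096, [1.0] * 1024
print(detect_beats(quiet + loud + quiet))  # [4096]
```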

The bottom line on Hippotizer is that it is a bit more limited than some platforms but also more liberated in terms of live flexibility. This makes it a good choice for highly “fluid” live production.

IRAD
RADLITE

RADlite is also layer-based, with each layer behaving like an individual lighting fixture, as far as the lighting desk is concerned. RADlite refers to these layers as elements, and there are seven default layers with the ability to use more. This makes RADlite the winner in terms of sheer layer output. The elements each have a function and purpose, and some element layers can have multiple instances in programming.

RADlite runs on Microsoft® Windows®, Macintosh®, and Linux, making it very flexible in a platform sense. As with all of the servers, the capability and power of the platform has a direct correlation with smooth playback and higher media resolutions. Running at DV resolution and compression, RADlite assures users that they can run on a reasonably powerful laptop! For less compression and more resolution, larger computers and lots of fast disk space are required. No surprise there.

RADlite has definite playback problems if it's not supported by the right hardware for the resolution and complexity you want. If you hope to bulk up on layers of uncompressed media, you must have a fairly new, well-equipped platform.

This flexibility and platform agnosticism does mean that RADlite can be used by designers offsite to pre-program on their home machines (or laptops at the hotel bar). RADlite allows free downloads of the software, but to get final output, you have to buy the license.

Another aspect of the code's open nature is that RADlite has been repackaged and altered to work in OEM solutions. James Thomas' Pixeldrive product, used to drive the James Thomas line of LED-based light fixtures, is actually a repackaged version of RADlite. IRAD views itself as a software creator and has an avowed dedication to allowing the product to be customized.

So, let's move on to operability and capability. We need to begin with the ubiquitous RADlite elements. The base element is called the RLcanvas. Video clips can be played on this layer with control of trail effects and color. Next, the RLgraphics layer is the place for still images, vector shapes, and graphics. Several of these layers can be used simultaneously. Third on the stack is the RLmask layer, which utilizes alpha channels to create framing around the images underneath it. RLmask is another layer that can have multiple, discrete instances. RLtext provides a control panel for creating and manipulating text, certainly useful in corporate applications. RLsurface allows RLcanvas and RLgraphics images to be mapped onto surfaces, including some 3D surfaces. This is similar to the Catalyst's treatment of 3D in that it is actually a 2D layer.

RLwave is an interesting, if limited, element that creates waveforms from audio inputs. We'd like to see this become more like Hippotizer's beatmapper, capable of affecting layer effects and attributes based on audio input. The final element is RS232 Devices, a feature that lets RADlite control up to six external video devices, including cameras, decks, and video switchers. This is pretty extensive for serial control capabilities (at least among these sorts of products).

The main media layer, the RLcanvas, is capable of crossfading or wiping between clips. The RLgraphics layers can interact with the RLcanvas layer using layer modes that are somewhat similar to Photoshop's. An RLgraphics layer can affect only the color of the canvas layer, posterize it, bleed through darker areas (that luma-keying capability again), or be used in an ADD mode which can, as IRAD puts it, “be wacky, messy, and hard to control…but fun.” That's one layer mode you'll want to save for the casual productions.

The RLcanvas also has clip speed capabilities, media geometric controls, and color correction controls.

The RLgraphics layer has similar geometric controls for media, adding scaling, zoom, and rotation. It also has the ability to apply highly controllable color gradients over the clips, as opposed to simple one-color overlay.

RADlite is making waves internationally, seeing widespread adoption in Europe and Australia. The recent 2004 Logies in Australia (a televised awards show similar to the Oscars) found RADlite gracefully feeding media to an array of HES DL-1 units and over 50 James Thomas Pixelline fixtures. In the US, RADlite is distributed by TMB in California.

This is a great deal of information to digest, and there are many options we still haven't covered. There is a whole subset of more rave-oriented, VJ-serving solutions that come pretty close to these platforms in some ways. Similarly, there are devices, like the NEV7, which function more as serial controllers through a light board. In any case, we can see that the media server market is bulging with options that afford a designer flexibility, power, and distinctly different capabilities in some cases. Armed with this knowledge, we encourage lighting designers and projection designers to go out and use these tools with confidence!

CRITERIA CONSIDERED

We had a confab of the geeky minds working at our studio. After much discussion, we came up with a list of criteria to review and check:

  • What hardware platforms and operating systems are used?
  • What sort of storage array is used?
  • What formats and flavors of input/output are available?
  • What forms of monitoring and previewing of media are available?
  • Is there a native, secondary control interface separate from the light console?
  • How is the interface and control layout (via the console) achieved?
  • Does the platform have Genlock?
  • Does the platform have MIDI or timecode capabilities?
  • What methods of media management are used?
  • What file formats and media CODECs are acceptable or optimal?
  • Does the platform have live input capability?
  • Does the platform have external device serial control capability?
  • What geometric media transformations (translations, rotations, scalings) are available?
  • Is there ability for media layer distortions?
  • Can layers or outputs be keystone-corrected, cropped, and color corrected?
  • Do layers have interaction modes other than straight opacity (similar to Adobe® Photoshop®)?
  • Can the playback speed of media layers be changed?
  • Can in and out points be defined in media?
  • Do the platforms have the ability to use or manipulate 3D objects?
  • Do the layers utilize alpha channels native to moving or still footage?
  • What media resolutions can be used?
  • Can cues or layers be crossfaded?
  • Does the server come preloaded with anything?
  • What is the process for getting custom media loaded?