This month's missive begins with some reader feedback:
I have just received the December edition of the “ED on Projection” online newsletter, and have a comment.
In your feature story “Projecting Philosophical” reference is made to a device used as a projection source as a “media server.” The term “media server” is highly misleading. Can we in the industry work on our terminology to avoid confusion? All of us seem to be using the term “media” to describe the source material we work with, whether that be audio, video, projected images which use lighting or other fixtures, and even special effects. And in our increasingly digital world, many of us use computer servers more and more as the repositories of the data files which we play through our various systems.
Most television broadcast commercials and “bumpers” are recorded to and played back from servers. Most video servers can also play some level of audio in sync with the video. Many of us in the non-broadcast video field have used servers for some years as sources of standard or high definition video program material.
Audio systems have been using dedicated audio servers for a number of years, especially in radio broadcast. And now we have lighting manufacturers using the same term.
There are differences between the products now being advertised for lighting and the servers used for audio and video. Certainly, dedicated audio servers can be quite different than dedicated video servers, both in hardware and software, so why not expect differences for those “serving” the lighting industry? Unfortunately, it sometimes can take some time to figure out which is which. Time being the common enemy, we owe it to ourselves to do what we can to save it. Perhaps it is as simple as trying to be consistent when referring to the different products as “audio servers,” “video servers,” “image servers,” or whatever.
— John Groper,
Walt Disney Imagineering
Before going much further, we need to let folks know that John's worthy opinion doesn't necessarily reflect that of his Disney masters. Tarnish not the Mouse. But he does raise some interesting points. When we got this letter we immediately opened it up to a Mode Studios forum (more brains are better…sometimes). There was actually some pretty strong disagreement on the subject. Some felt that the term media server probably was misleading. Others felt it was a semantic argument that didn't really matter. Nevertheless we set about dissecting the question.
There are a lot of devices purported to be media servers, and it is true that the broadcast and music industries have both long used computer- or disk-based playback systems that have been referred to as media servers.
The problem, as we see it, has several aspects. First, almost all of the lighting manufacturers have chosen to categorize their devices as media servers. Is that wrong? Well, the fact of the matter is that these devices are all computers that host and play back, i.e., “serve,” media. In the end, they do fit the profile: powerful computer platforms, attached to fast disk arrays that move and manipulate media.
Which brings us to another question: What is media? Bob in particular felt that it is inaccurate to call these devices video servers. Many times they are running at resolutions far beyond what is normally codified as video. To us, video implies a signal at 720×486 playing at 29.97 frames per second (slightly varied for our friends using PAL). But for the most part we personally use the HES Catalyst™, or the Green Hippo Hippotizer, or any number of other products running at resolutions of 1024×768, or 800×600, or 1280×1024. None of these resolutions is properly defined as video. So video server is out. Do we call it a digital kinetic imagery server? Seems a bit unwieldy. Image server? Image to us implies a single picture, or a non-moving picture. Now we are seeing that many of the popular lighting media servers are also being used to serve audio. Oh brother. Now what do we call it?
In the end, we had a hard time trying to logically and concisely classify lighting media servers as anything but media servers, which doesn't diminish the value of the question. Maybe it just says something about our semantic imaginations. It's a good thing we get paid to make pretty pictures instead of consistent linguistics! Do any of the rest of our gentle readers (we know that there's at least eight of you — we met you at ETS-LDI) have an opinion? Speak now and let your voice be heard!
On to a different multi-sided projection topic: Often we are called upon to project on objects rather than screens. The challenge in such cases is correctly mapping video onto these shapes and adjusting for their irregular geometry. There are two stages in the process where we do this: in previsualization and in tech. The technique is the same in both; one pass is a virtual estimation, the other a real one. Let's set up a hypothetical situation and step through it.
You've been presented with your scenic design (Figure A). The plan is to project on the two facing sides of the cube, and the top. You have been given a projector by a thoughtful and charitable vendor, but budget precludes adding more. So the one projector is going to have to simultaneously put images on three varying surface geometries. You know (in your hypothetical world) that your only available projection position is from the proscenium truss, adding another geometric complication. The key is to find out precisely how an uncorrected image will look when projected on the cube, and then adjust your art to pre-distort it, compensating for this. We find this out by projecting a grid graphic on the object.
“What sort of grid?” you might ask. We have many different grid graphics that we use for alignment, color balancing, blending projectors, etc. We have also made some specialized grids just for this task: mapping strange geometry or odd dual-axis keystone situations. We favor a multicolored grid, with various alphanumeric symbols scattered around. This gives us a lot of recognizable reference points. We do this because a plain black-and-white grid can often present difficulties in figuring out how it is distorting; tracing the lines and intersections can be difficult. The multiple colors and numerous numbers and letters give us more signposts to look for.
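If you'd rather generate such a grid than draw one by hand, a small script can do it. The sketch below is a hypothetical stand-in for the custom grids described above, not the actual graphics we use; the cell size, color palette, and labeling scheme are all assumptions. It writes a multicolored grid with letters and numbers scattered through the cells as an SVG, which you can then rasterize at whatever resolution your server plays back.

```python
# Minimal sketch: generate a multicolored, labeled alignment grid as SVG.
# Palette, cell size, and labels are illustrative assumptions, not a
# reproduction of any particular commercial test pattern.
import string

def make_grid_svg(cols=8, rows=6, cell=100):
    # A handful of easily distinguished colors, cycled across the cells.
    colors = ["#e41a1c", "#377eb8", "#4daf4a", "#ff7f00", "#984ea3", "#ffff33"]
    labels = string.ascii_uppercase + string.digits
    w, h = cols * cell, rows * cell
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}" height="{h}">',
             f'<rect width="{w}" height="{h}" fill="black"/>']
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            color = colors[i % len(colors)]
            x, y = c * cell, r * cell
            # Colored cell outline...
            parts.append(f'<rect x="{x}" y="{y}" width="{cell}" height="{cell}" '
                         f'fill="none" stroke="{color}" stroke-width="2"/>')
            # ...with a letter or digit in the middle as a trackable signpost.
            parts.append(f'<text x="{x + cell / 2}" y="{y + cell / 2}" '
                         f'fill="{color}" font-size="24" '
                         f'text-anchor="middle">{labels[i % len(labels)]}</text>')
    parts.append("</svg>")
    return "\n".join(parts)

svg = make_grid_svg()
```

Every cell gets its own color and symbol, so when the projected image wraps over an edge you can say “cell F is on the top face, cell K is split across the corner” instead of counting anonymous black-and-white squares.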
The first time we do this is in previz. You can see (Figure B) the cube with a grid projected on it from the projector position. We've accomplished this in virtual space by placing a grid “slide” (Figure C) in a light used by a 3D program. The light takes the place of the projector, and shows us how the grid image would look when projected on our cube object. You can see that the grid is skewed across the cube. Now, we plot the major geometric “points” of the objects where they appear in the distorted projected grid onto a piece of paper with the grid printed on it (Figure D). We now have a map to use in creating our pre-distortion.
The next step is to duplicate the plotting you've done on paper in the Adobe® Photoshop® document containing your grid. Once this is done, you can copy the contents of your Top Face, SL Face, and SR Face (above) into new layers in your Photoshop doc. Now, using the Transform > Distort command, you can drag the corners of these layers until they match the plotted faces of the cube (Figure E). Next, turn off the grid layer, make sure your background layer is black (Figure F), and voilà, you have a pre-distorted image that will correct itself to display properly on your cube when projected (Figure G).
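For the mathematically curious: dragging the four corners of a layer with Transform > Distort is, in effect, applying a perspective warp (a homography) to that face. The sketch below shows the underlying math, solving for the 3×3 mapping from the four corners of a face as drawn to the four corners you plotted from the projected grid. The corner coordinates are hypothetical examples, not taken from the figures, and this is an illustration of the principle rather than what Photoshop does internally.

```python
# Sketch: the four-corner warp behind the Distort step. Given where the
# corners of a face *should* be and where the plotted grid says they
# *land*, solve for the homography H so that (u, v) ~ H * (x, y, 1).

def solve(A, b):
    """Gaussian elimination with partial pivoting for the 8x8 system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Map four (x, y) source corners onto four (u, v) destination corners."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]  # fix the usual scale h33 = 1
    return [h[0:3], h[3:6], h[6:9]]

def warp(H, x, y):
    """Apply the homography to one point (with perspective divide)."""
    d = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)

# Hypothetical corners: an undistorted 100x100 face, and where its four
# corners landed when plotted from the projected grid.
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(12, 8), (95, 20), (88, 105), (5, 90)]
H = homography(src, dst)
```

Warping every pixel of the face through `H` produces the same pre-distorted layer you built by hand with Transform > Distort; the four dragged corners are exactly the four correspondences the solver uses.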
As we mentioned, we usually do this in previz, and then repeat the process in tech to touch up our perspective.
Now you might ask, “How do I do this with moving footage?” Do you think we're going to give up all of our secret recipes? You'll just have to attend one of our upcoming sessions (check in on the Broadway Lighting Master Classes, or at next year's LDInstitute™) to find out more of these useful techniques.