The tools for creating 3D stereoscopic movies are now amazingly affordable. A few challenges remain, but you can easily see the possibilities. In this Creative COW Magazine article Christopher Werronen takes you through the process of making a movie destined for Stereoscopic 3D.
A documentary I was shooting called "Healed By the Earth" required substantial visual support in the form of nature scene footage. I knew that I could manage at least one good HD video in the Colorado Rockies, but that's not the game I was hunting. I went there to capture Stereoscopic 3D landscapes.
It was up there, in the isolation, that nature provided a breathtakingly pleasant surprise. A hike to Alberta Falls in Rocky Mountain National Park offered a spectacular opportunity to recreate, as 3D imagery, what we naturally see. Fortunately for the production, the high-altitude shoot at 10,000 feet forced me to stop every five minutes for air. Asthma concerns, combined with the steep grade of the hike, gave me opportunities to film about forty varied shots of beautiful, cascading waterfalls.
Only the rushing water could be heard, but I was sure that the 3D images would reveal more than words.
New tools for stereoscopic 3D production have started a grassroots revolution. Even in HD, stereoscopic 3D is quite affordable. Any kind of video camera can be used, and although I'm using Final Cut Pro for editing and Nuke for compositing, specialized software can be found for under $100.
In some ways, 3D movie technology has changed little since the 1950s. Two cameras record the same scene. The relative angles of the cameras during shooting, as well as the separation between the images as they are combined in post, show depth when viewed through cyan and amber glasses, which are easily and inexpensively available online.
The stereoscopic images in this article will properly display depth when viewed with these glasses.
Using 3D glasses with no electronic elements is known as "passive" stereoscopic display. "Active" systems often found in theaters show depth by synchronizing LCDs and shutters built into both the glasses and the projector.
I used twin Canon HV-30 HDV cameras for my shoot. The HV-30 manual warns that there may be problems with camera function at high altitude, including lens haze. I opted for .5 Canon WD wide-angle lenses, polarized filters, and UV lens caps.
I had filmed with one of these cameras before, but the setup for shooting in 3D was quite different. There were numerous checks to be made prior to each shot. The cameras are first mounted on a slide bar, something like a tripod plate that holds two cameras at adjustable distances from each other. The minimum I tested on my shoot was 72mm centers, just wider than the 65mm typical of adult human ocular spread. For long vista shots I tested the maximum slide bar width, 104mm. Identical camera settings needed to be checked and set, over and over. Zoom off.
Creating stereo files in Nuke with the "JoinView1" and "Anaglyph1" nodes
My concern prior to a shoot is that both cameras attached to the slide bar are squared and not slightly skewed. This is one way to avoid disparity errors such as double images and misalignments. Making sure my camera lenses are squared off also results in more comfortable viewing, and a natural 3D depth, just like looking through a window.
After I manually start each camera (the HV-30 lacks remote LANC triggers), I use a simple thumb cricket clicker, which I will later use as a guide to help sync the two tracks during the edit in FCP.
I expected to fill 20 hours of mini-DV tapes in the mountains of Colorado, 10 hours total for each mono side of the stereo files. I hoped to see positive results, but as of this writing, I have not found a monitor or software that I can use to display stereoscopic footage in the field. There was no way to know how this would turn out until I got back to Ohio.
The process after shooting is simple. I align and trim each corresponding stereo video channel, and, after the edit, each is separately rendered as a .mov file. (I'm still testing, but am presently using H.264.)
The Foundry's Nuke is a compositing application that imports and joins the files, then renders them as a stereo "anaglyph," a movie with two color layers whose offset creates the effect of depth...if all goes well.
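Under the hood, a basic anaglyph is mostly per-pixel channel selection. The sketch below is a minimal illustration in Python, assuming conventional red-cyan mixing on 8-bit RGB tuples; Nuke's actual Anaglyph node does considerably more (color balancing, convergence offsets), so this shows only the core idea.

```python
# Minimal red-cyan anaglyph mixing (illustrative only -- not Nuke's algorithm).
# Frames are lists of rows, each row a list of (r, g, b) tuples, 0-255.

def anaglyph_pixel(left, right):
    """Take the red channel from the left-eye pixel and the
    green/blue (cyan) channels from the right-eye pixel."""
    lr, _, _ = left
    _, rg, rb = right
    return (lr, rg, rb)

def anaglyph_frame(left_frame, right_frame):
    """Combine two same-sized frames into one anaglyph frame."""
    return [
        [anaglyph_pixel(l, r) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left_frame, right_frame)
    ]
```

Viewed through the glasses, each eye filters out the other eye's channels, and the horizontal offset between the two source images reads as depth.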
I wear my 3D glasses when joining the files in Nuke, and again when editing the composited anaglyph file back in FCP. I wind up putting the glasses on and taking them off a lot.
Editing an anaglyph works just as it does for any other video file, except for some specific attention to 3D stereoscopic motion and correct orientation between cuts. Scenes on either side of an edit must balance in such a way that the audience is not pulled out of the 3D illusion. Otherwise, attention erroneously shifts to the "3D effects" rather than the "3D story."
This is why telling the perfect 3D story requires "Depth Grading" to manage the internal depth of the scene.
One aspect of this is "depth matching," so that the viewer's eyes are not forced to change their focal plane from shot to shot. You might have experienced this difficult transition while driving, by looking back and forth from your dashboard to the horizon. It takes time for the eye to settle, which is why edits that require rapid visual re-convergence are unpleasant for an audience to watch.
"Scene ramping" is another aspect of depth grading: changing an object's or shot's stereo distance gradually, so that the viewer's visual convergence on one shot picks up where changes to the previous shot have left off.
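As a rough illustration, scene ramping can be thought of as easing a depth parameter over a handful of frames. The helper below is hypothetical (not a Nuke or Ocula function), assuming depth is controlled by a horizontal offset, in pixels, between the two eyes.

```python
# Hypothetical "scene ramping" helper: linearly interpolate the horizontal
# stereo offset across n_frames so the incoming shot's depth picks up where
# the outgoing shot left off.

def ramp_offsets(start_px, end_px, n_frames):
    """Return one horizontal offset per frame, easing start_px -> end_px."""
    if n_frames == 1:
        return [end_px]
    step = (end_px - start_px) / (n_frames - 1)
    return [start_px + step * i for i in range(n_frames)]
```

In practice the ramp would be applied by shifting one eye's image by each per-frame offset before the cut, so the viewer's convergence never has to jump.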
Nuke is an excellent compositing program, which I use to join stereoscopic files, correct for lens barrel distortions in the wide angle lenses, and adjust speed as much as -50%, among other things. However, the depth grading features are what make Nuke such a great choice for working with stereoscopic images.
A new set of Nuke plug-ins called Ocula has become available to work even more specifically with stereoscopic 3D.
For example, vertical disparities will sometimes present themselves. Those stereo images will have severe ghosting and will not focus into one clear image when combined into a stereo anaglyph image. Ocula can address this, not just with a simple Y-position shift, but by rebuilding frames to compensate for keystoning and other errors.
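For illustration, the "simple Y-position shift" mentioned above amounts to sliding one eye's frame up or down by whole rows. The sketch below shows that naive fix; it is exactly what Ocula improves on, since a uniform shift cannot correct keystoning, where the disparity varies across the frame.

```python
# Naive vertical-disparity fix (illustrative only): shift one eye's frame
# by dy whole rows. Frames are lists of rows of (r, g, b) tuples.

def shift_rows(frame, dy, fill=(0, 0, 0)):
    """Shift a frame down by dy rows (up if dy < 0),
    padding the exposed edge with `fill` pixels."""
    h = len(frame)
    w = len(frame[0])
    blank = [fill] * w
    if dy >= 0:
        return [blank] * dy + frame[: h - dy]
    return frame[-dy:] + [blank] * (-dy)
```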
I'm especially interested in the 3D paint and roto features in Ocula that allow treatments for one eye to be automatically mapped to the other. I've had wonderful results using Synthetik Studio Artist to roto/paint HD frames in 2D so far, but the video tracks for each eye must be treated separately.
Ocula presents a small problem for some 3D stereographers: it is priced at $10,000. I'm primarily a writer/director, now beginning as a producer. I've learned the compositing programs well enough to know when it's time for me to contract with specialists to help with vertical disparity issues in particular -- which I've now done.
I still sometimes shoot landscapes, but I'm now curious about dramatic performance in 3D. That search has taken me to a little village on the east side of Cleveland, Ohio, with small retail shops, restaurants and one Method Acting school, run by owner-teacher Jessica Houde. I'm now filming young actors in 3D as they train and develop. These 20- to 30-year-old students convey tons of emotion for me to capture.
Filming 3D dramatic action has a whole set of considerations not found in stereoscopic landscape filming. I initially filmed the actors in HDV at 1440x1080/30, which had worked well for stationary nature scenes. But motion artifacts in filming the actors became a considerable problem, often confusing to diagnose.
After hearing that James Cameron has found doubling his normal frame rate to 48fps to his liking in the production of his epic, "Avatar," I'm presently testing 780x420 @60p, and am pleased so far.
(Even though we are nearly a year away from the scheduled release of "Avatar," it is already having a profound effect on 3D filmmakers at every level.)
Whereas in 2D productions, strong control of visual depth of field directs the audience into the action or characters on screen, I prefer not to do this in 3D. Allowing the audience to choose their personal focus opens the field of vision to more natural 3D experiences in a scripted story. My work so far has led me to write scene structure so that the entire scene is in focus, writing for and encouraging multiple minor actions as supporting stories within the scene.
An option that may work better for guiding audiences watching stereoscopic scenes is "audio depth of field" adjustments, panning from in-focus reference points within the scene, rather than forcing visual depth of field corrections.
Painted with Synthetik Studio Artist
Trial and error also shows me that all action needs to remain inside the frame to avoid the risk of actors appearing to float off the screen. Smooth transitions of their movements in 3D space across cuts require careful choreography. Early tests show that lighting plays an even bigger role in developing the impression of depth in stereoscopic shooting than it does normally.
As these elements come under control, they open up numerous screenwriting, performance and presentation possibilities.
TOWARD 3D PERFECTION
So how did my Colorado footage turn out? The HDV footage itself is impossibly beautiful. Viewable as 3D, but flawed. The stereo shots from stationary points often come close to the state known as "3D Perfect," but as I work more with them, I can see small issues that require some divergence corrections.
Although I recorded to mini-DV tape in Colorado, and so far continue to do so back in Ohio, I don't recommend it. Tape systems drift if takes are longer than a couple of minutes, so most of my shots have been intentionally brief, in 30-second bursts. I understood the value of tapeless shooting before I left for Colorado, but solid state capacities were too small. I couldn't capture the 20 hours I planned without data transfer options, which weren't practical for these extended excursions. Tape has worked well enough so far, but a camera upgrade is definitely on the list of things to do.
Stereoscopic production obviously brings a dramatically wider range of troubleshooting questions. Were problems caused by camera misalignment? Uneven adjustments of camera settings? Can these problems be addressed in post? Or are they being caused by something else entirely?
Stereoscopic 3D is magic when it's working, but truly confounding when it's not. So why bother with HD stereoscopic 3D at this point in its development for independent producers? For me, I like the serious challenge. I like the filmmaking and storytelling options it opens up. I also like the possibly fantastic outcome I see taking shape. The ability to create stereoscopic 3D at this level of production has the potential to once again reshape the industry from the grassroots up!
The ongoing editing of my stereoscopic mountain scenes continues to guide this story to its end. But as a first experience, my 3D spirit was awakened by my baptism in Colorado's "God's Country."
Painesville, Ohio USA
Chris's career has included working with special needs kids and adults, organic farming, and acting. He is developing the pilot for a stereoscopic 3D TV series as he works at the farm shared by his wife, son, a stable full of horses, some chickens, cats and a dog. You can find him posting in COW forums including HD-High End, Final Cut Pro, and Nuke.