Cinesite Visual Effects Supervisor Sue Rowe approached the mammoth job of creating 831 visual effects for John Carter - which marks the live-action debut of noted animation director Andrew Stanton - with a lot of experience under her belt. A long-time Cinesite employee, she has previously served as that VFX house's visual effects supervisor for Prince of Persia: The Sands of Time, The Golden Compass, Death at a Funeral, X-Men: The Last Stand, Charlie and the Chocolate Factory, The Hitchhiker's Guide to the Galaxy, and Troy, among other feature films and TV programs. Earlier in her career, she was a Digital Compositor.
At Cinesite, we handled 831 visual effects in John Carter. Along with Double Negative, we were one of the two main VFX vendors on the movie, directed by Andrew Stanton. Since Cinesite is renowned for its photoreal environment work, we handled this part of the film, along with the 2D to 3D stereo conversion. There was a tremendous amount of sharing between Cinesite and D-Neg, who did the character animation, since every shot that has a creature in it also has a Cinesite environment. I'm proud of this. London is a unique place to work in that all our competitors are very close to each other. Although there is healthy competition, we also completely share when we work on the same film. My colleagues at D-Neg and my team had a great relationship. Moving Picture Company was brought on to do the attack of an aggressive horde of half-animals running towards John Carter, and a smaller company called Nvizible picked up a few shots, but the majority of the work was done at D-Neg and Cinesite.
I had a brilliant team and worked with four main supervisors: Jonathan Neill supervised Cinesite's work on the city of Zodanga, a mile-long rusty metal tanker that crawls like a myriapod across the surface of Mars; Christian Irles supervised work on the beautiful city of Helium, which has a huge glass palace in the middle; Ben Shepherd oversaw the big aerial battle between Zodanga and Helium; Simon Stanley-Clamp directed work on the Thern sanctuary, a huge underground cave that forms around the characters as self-illuminating blue branches as they walk through it; and Artemis Oikonomopoulou was Cinesite's CG Supervisor on the project. We developed a shorthand with each other on how we wanted the sequences to go, and I put my input into the shots they did. There was a filtering system, so everything was as polished as possible before we reviewed with Andrew Stanton.
Above left: Ben Shepherd oversaw the big aerial battle between Zodanga and Helium. Middle: Christian Irles supervised work on the beautiful city of Helium. Right: Simon Stanley-Clamp directed work on the Thern sanctuary.
Above left: Artemis Oikonomopoulou was Cinesite's CG Supervisor on the project. Right, Jonathan Neill supervised Cinesite's work on the city of Zodanga.
I have definitely drunk the Andrew Stanton Kool-Aid. He is completely inspiring, so eloquent and artistic. We would do a conference call with him every night, U.K. to the States. Using cineSync, we would load files and both look at the same Quicktime of the movie. Andrew stopped the footage when he wanted and could draw on the image. By talking every night for two years, this sense of trust and familiarity built up. Andrew cared about every single pixel. He had good input daily into small things we could change to make the shot better. He was also pragmatic; he knew the story was the most important thing. He was good at keeping us grounded. I found it very rewarding to work with him.
As a company, one reason that creating environments is a large proportion of the work we do is because it saves productions a lot of money. For John Carter, we went to Utah and shot for about 3 months on location. We took a lot of reference footage and photos in Big Water and Moab. Utah is about a seven-hour drive from Las Vegas. There was such beautiful red rock and you can see way off into the horizon. We Europeans aren't used to this harsh bright light and how powerful the bright areas of the sky can be and how dark the shadows. We're more used to an overcast environment.
The weather for the shoot was quite a challenge. We would start shooting at 9 am in 35-degree Celsius (95-degree Fahrenheit) heat. Then in the afternoon there were sand storms. So the shooting environment was harsh. [Director] Andrew [Stanton] had a lot of luck in the morning and, by the afternoon, weather conditions would really affect the shooting.
We used photogrammetry -- high resolution stills projected onto geometry -- that we could join together in Nuke and create vistas for horizons. We took all that back from Utah and used the foreground from the studio and put our hot desert environment in the background. The majority of the movie was shot at studios out in north London where it's generally raining or gray. I spent a lot of time in London, replacing those overcast days with bright Utah vistas.
John Carter was definitely the most challenging work I've done so far in my career, in part due to the fact that there was such a wide difference in the styles of work we were asked to do. The two main locations are Zodanga and Helium, and they had to be very different styles of architecture to tell the story. Then there was the challenge of the massive scale of the movie. By the end, 310 people at Cinesite had created 1,973 individual CGI models.
The opening of the movie is a minute-long completely CG sequence, starting with a view of Mars from space. The camera travels through clouds to the surface of Mars and along a giant trail pitted with mining holes. We shot aerial footage of the Utah desert from a helicopter. The path of the camera was previsualized based on GPS maps and Google Earth. We planned when to shoot to get the best lighting, but the speed of the real camera was too slow for the 'Powers of Ten' idea. So we re-mapped the live-action plate onto geometry, and this gave us greater freedom for the camera move. Jon Neill and layout artist Thomas Mueller designed the shot, starting in space, moving through CG clouds to the surface of Mars, and ending with the camera rising up between the city's moving legs.
The camera then pans up to reveal a monstrous, dark city marching along the planet's surface. This is the city state of Zodanga, which is mercilessly consuming the planet's resources. We see the enormous mechanical legs of the marching city, and then the camera pans upward to reveal the airfield deck and palace as a flying machine takes off and whooshes past the camera. The camera pans down to reveal Helium City, perched on a rocky pedestal. This lovely city cultivates its own food and prizes knowledge and culture. The differing architectural styles reveal the character of each nation.
Helium City, perched on a rocky pedestal
The city of Zodanga was based on an overall design concept by Ryan Church in the production's art department. Cinesite had to interpret and build detail into the design to make it work for full-screen backgrounds. But the challenge was that we also had to make it detailed enough to be seen in close-ups.
The production did build a few sets for locations within the city, but even these required considerable digital extension work to place them within the immense scale of the city. It's hard to overstate the immensity of the work. Thousands of pieces of geometry were modeled for the buildings, and we also modeled dozens and dozens of props, from tables to lamps, urns to bottles. To quantify it, we built 291 structural element models and 242 CG props. There were up to 20,000 objects in a single shot and between 1 and 2 billion polygons, depending on the camera position and detail required.
Then there are the 674 legs that support Zodanga and move it forward. If you were to animate every one of those legs and render it, the computers would fall over and die. We had to create tactics to pre-cache the simulation and work out animation cycles and switch off the legs not seen by the camera. VFX supervisor Jon Neill worked out a way of pre-caching the ray-tracing that meant that it wasn't so computationally expensive and we could get versions of the shots in a much more expedient way. Variations in movement -- as well as additional animation such as cogs and cabling -- were used to create a more interesting look in the leg movements.
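The tactic of switching off legs the camera can't see is essentially frustum culling before simulation. Here is a minimal sketch of the idea in Python; the function name, the 2D top-down simplification, and the field-of-view test are all illustrative assumptions, not Cinesite's actual pipeline code:

```python
import math

def visible_legs(legs, cam_pos, cam_dir, fov_deg):
    """Return indices of legs inside the camera's horizontal field of view.
    Legs outside it can be frozen or served from a pre-cached walk cycle
    instead of being fully simulated and rendered."""
    half_fov = math.radians(fov_deg) / 2.0
    active = []
    for i, (x, y) in enumerate(legs):
        dx, dy = x - cam_pos[0], y - cam_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            active.append(i)  # leg is at the camera position; keep it
            continue
        # cosine of the angle between the camera forward vector
        # and the direction from the camera to this leg
        cos_angle = (dx * cam_dir[0] + dy * cam_dir[1]) / dist
        if math.acos(max(-1.0, min(1.0, cos_angle))) <= half_fov:
            active.append(i)
    return active
```

With a 90-degree lens, a leg straight ahead survives the cull while legs off to the side or behind the camera are skipped, which is where the render-time savings on hundreds of legs would come from.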
Zodanga final comp: the 674 legs that support Zodanga and move it forward.
It was similarly impractical to texture all the sections of the city in great detail. After decisions were made as to which sections of the city would be seen in close-up, we used a combination of Photoshop, Mari and Mudbox with bespoke shaders and lighting development to create a dirty, industrial feel to the detailed areas.
Jane Rotolo, our Massive TD, added crowds to the city and ship, based on motion capture of specific actions such as turning the ship's wheel. Using Maya fluids and Houdini, we covered the city in dust. Andrew wanted just a small amount of that dust to drop away from the ship as it takes off. We had to find that subtle balance so it didn't look like a propulsion jet.
Flyer Chase on Zodanga
This was the first sequence we did on Zodanga. Since the scenes used a huge amount of memory-heavy geometry, the biggest challenge was how to render the props and set pieces in the digital set. We used Cinesite's proprietary geometry format, MeshCache, which supports LOD files. Between the layout and lighting departments, we built mid-res and low-res models in the distance and high-res models in the foreground, which allowed us to make the shot as complex as possible while still fitting it into memory.
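The level-of-detail choice described above boils down to swapping model resolutions by camera distance. A minimal sketch of that selection logic follows; the threshold values and the "high"/"mid"/"low" labels are hypothetical placeholders, not MeshCache's real interface:

```python
def pick_lod(distance, thresholds=(50.0, 200.0)):
    """Choose a level of detail from the asset's distance to camera
    (in arbitrary scene units). Nearby assets load the high-res mesh;
    distant ones load cheaper versions to keep the shot in memory."""
    near, far = thresholds
    if distance < near:
        return "high"
    if distance < far:
        return "mid"
    return "low"

# A layout pass might then assign one LOD per asset in the shot:
def assign_lods(asset_distances, thresholds=(50.0, 200.0)):
    """Map {asset_name: distance} to {asset_name: lod_level}."""
    return {name: pick_lod(d, thresholds) for name, d in asset_distances.items()}
```

The same idea scales to billions of polygons because only the handful of foreground assets ever pay the full memory cost.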
Lighting the hangar deck was a challenge since we had to mimic real world lighting. We did this via global illumination, which calculates how light bounces around in the scene. We normally began with only one light (that is, the sun), and then calculated the global illumination. A lot of shots needed extra lights, and a god-ray pass using a volumetric shader added more atmosphere, all of it to make a believably sunny and dusty desert environment.
We also added CG wings to the practical flyer that John Carter escapes on. Andrew had read about these ships in Edgar Rice Burroughs' book, and he wanted them to have an 'antique vessel at sea' feeling. We built these ships based on art references he gave us, and they turned out to be such beautiful assets. The wings are powered by the sun, and we devised a shader that would give reflections to the surface, so the color gamut goes from blue through purple to gold. Andrew described the work we did as being similar to fish scales; whenever the ships turn, it looks iridescent, like fish scales in the sun. The wing shader, which we controlled in real-time, changed the color based on the angle between the wings and the camera.
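An angle-dependent color ramp like the wing shader can be sketched in a few lines. This is a simplified stand-in, not Cinesite's shader: the exact ramp colors, the linear interpolation, and the 0.5 breakpoint are all assumptions chosen to illustrate the blue-through-purple-to-gold behavior the text describes:

```python
# Hypothetical RGB ramp endpoints (values in 0-1)
BLUE = (0.1, 0.2, 0.9)
PURPLE = (0.5, 0.1, 0.7)
GOLD = (1.0, 0.8, 0.2)

def lerp(a, b, t):
    """Linear interpolation between two RGB triples."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def wing_color(normal, view_dir):
    """Map the wing/camera angle to a gold->purple->blue ramp.
    Both vectors are assumed unit-length. facing = |cos(angle)|:
    1.0 when the wing squarely faces the camera, 0.0 when edge-on,
    so the color shifts as the ship banks -- the 'fish scale' effect."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    facing = abs(dot)
    if facing < 0.5:
        return lerp(GOLD, PURPLE, facing / 0.5)       # grazing angles
    return lerp(PURPLE, BLUE, (facing - 0.5) / 0.5)   # toward camera
```

In a production shader the same facing ratio would drive a texture lookup rather than a hard-coded ramp, but the principle of tying color to the wing-to-camera angle is the same.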
This is also where we did a number of shots with fully CG digital doubles, for which we used subsurface scattering and our Cinesite skin shader. We did a full high-resolution digi-double of Carter as he jumps from one pillar to another, and we also simulated movement for his clothes and hair.
Creating close-ups of digidoubles was perhaps the most daunting challenge in John Carter. That terrified me at the beginning; I'd never had the camera this close on a digital double. We did a huge amount of digidouble work and it took a lot to pull it off.
Also in this sequence, we reach the city's legs and focus on them during a dramatic chase. In some shots we see hundreds of the legs, and the amount of geometry per leg was again challenging. We also had to lay out the impact effects when the legs hit the ground. The effects department provided an impact effect element and a layout tool that would analyze the movement of the leg and place the particle and fluid effects at the exact position of the ground impact. To give compositing more control, the leg effect was rendered in several layers.
Helium, a grandiose, cathedral-like city made of glass, had to match exactly the art department concept stills from Ryan Church. To create the matte paintings with the level of detail Andrew required, projections were created for the terrain. The matte paintings were straightforward but very time-consuming, as well as render-heavy for the full 3D renders.
Helium is a grandiose, cathedral-like city made of glass.
On Helium, there is a 197-shot Palace of Light sequence, a huge challenge as it established the look of Helium that would be re-used in other sequences. Andrew described this as "the jewel of the city," and we got many art department sketches as well as layouts, blueprints and texture samples. The glass itself was difficult. It needed to look like the frosty glass that was on-set inside the palace, but we also needed to keep the palace looking beautiful. It took a lot of time and tests to get right.
Palace of Light
The palace also contained a large number of digital assets, including a balcony, buttresses, ribs, glass feathers, a floor, mirror, flags, flambeaux and lenses as well as a transporter that crashes through the side of the palace, Carter's sword, and a fully animated CG Zodangan flier. We needed to be able to view the fully CG palace from its exterior and also use it as a set extension for live-action that was shot on an interior set.
The interior was a night scene, lit with moonlight and hundreds of flambeaux. It was a tremendously complex scene already and was then combined with the ship crashing through the glass walls. We built some panels with additional geometry, which worked better for the shattering effect. All in all, the city structure was built of 346 models with 74 individual props.
This 82-shot section of the movie contained an amazing amount of work. It started on my first day of shooting on set, where we were faced with a 360-degree greenscreen, and Andrew said, "I want this room to be made of nano-technology." In fact, Thern is a living nano-technology matrix. We did a lot of R&D and worked on that environment for a full year.
Covered in Thern
In the crucial scene, when Carter and Dejah land on the surface of the pyramid, Thern begins to grow beneath their feet and then breaks off the flat surface of the pyramid into a Thern wave. The surface breaks into steps that drop down, and Carter and Dejah move to a blank wall that transforms into a Thern tunnel. That's the point where you'll see more of the digidoubles we made; a handful of wide shots required Carter and Dejah digidoubles, which were hand-animated. As they move into the tunnel, the Thern effect builds around them, leading them deep into the pyramid. At the end of the tunnel is the Thern Sanctuary that also builds itself.
These growing Thern shots are some of the most complex we did in the movie. I'm very proud of this environment. From a greenscreen room and a little conversation, we came up with a cathedral of blue ivy that builds itself and deconstructs before your eyes. The entire Thern effect system was designed and built from scratch with a combination of Maya, Houdini and custom in-house software. The final system was a semi-automated way to 'grow' Thern into any environment and geometry.
The entire Thern effect system was designed and built from scratch with a combination of Maya, Houdini and custom in-house software.
Early on, via cineSync from the U.S., Andrew looked at our animation test showing how the floor would grow and he said, 'You guys nailed it -- go to the pub.' You can see the final result in the trailer, when the princess Dejah touches the ground and it comes alive with blue light.
This shot continues in the Sanctuary, as Dejah activates the Ninth Ray sequence. When the Sanctuary is activated, nine fingers of Thern run across the floor -- another example of the very detailed Thern effect. Detailed camera tracks completed in 3DE were used for all the shots in the Sanctuary sequence. We used a LIDAR scan of the set married with our geometry of the Sanctuary layout, which allowed us to determine exactly what part of the Sanctuary would be seen in any camera direction.
Around 26 of the 82 shots in the Thern sequence were rendered as native stereo CG rather than post-converted. These shots lend themselves to being rendered with left and right eyes, as they are totally computer-generated, so there's no need to post-dimensionalize. Compositing the Thern effect in stereo proved challenging in unexpected ways. The various Thern tip glows and self-illumination effects were difficult to fuse correctly, and we took great care in Nuke to preserve the look of the mono final while generating the stereo composite.
What we learned
The cities of Zodanga and Helium and the sequences highlighted above were just part of what Cinesite accomplished. We also did a flyer chase sequence on Zodanga, an airship battle and other Thern effects.
Through the experience of working on John Carter, we learned a lot as a company. Working on 800+ shots of such diversity meant we had to be very clever in how we rendered each shot. At the same time, audiences today expect so much more. They're not impressed by the fact that it's created digitally. It has to look right, and audiences know when something doesn't work.
The scale and size of the assets we had to render were immense. Traditional ray-tracing for the Glass Palace of Light was incredibly computationally expensive. You could set a render and come back two years later. There were a number of similarly large shots we needed to ray-trace and perfect, but once established, the smaller shots could be done in a different way.
I learned to be efficient and focus on the important shots. I also learned that there are some shots worth fighting for; working on them a couple more days makes them brilliant. Other shots you can keep playing with until the end of time, and someone needs to say, "Stop here!" I think that's the artist in us all that wants to keep working on it.
To manage such a huge number of assets, Cinesite had an army of producers and coordinators who were amazing. They're really the unsung heroes. We have our own internal shot manager that has saved us on many occasions. We save and publish the assets so everyone can see where they are, in thumbnails, which is much better than a file name. We pre-planned and had a lot of hardware investment so we had enough disk space for 800 shots and multiple versions.
I so hope this movie gets the recognition it deserves. It's quite the epic, with a lot of humor. And, if I dare say so, the effects are very wonderful.
SIDEBAR: Converting John Carter from 2D to 3D
Cinesite Visual Effects Supervisor Sue Rowe
Cinesite had never done a 3D conversion before and built an all-new pipeline from scratch to accomplish it. In total, 87 minutes of the final film were converted from mono to stereo, which breaks down to 1,541 shots and 125,000 frames of film. Greg Keech was the Primary Developer; Michele Sciolette was Head of VFX Technology; Scott Willman was Stereographer/Supervisor and John Grotelueschen was Compositing Lead.
According to Sciolette, the most common conversion technique is based on separating layers via rotoscoping and then pushing or pulling them to specific depths by grading a depth map. Once this is in place, filters are used to simulate the shape and internal dimension of an object.
"Early on, we decided to base our stereoscopic conversion process on correct spatial information," says Sciolette. "We chose to use animated geometry that we could track and position in the scene and render through virtual stereo cameras. This allowed us to ensure that if John Carter was running from the foreground to the background he appropriately diminished in scale and that his footfalls were always meeting the ground. Elements that he ran pass would also be at an appropriate scale relative to him. It allowed us to place all of the objects in the set in their proper location in 3D space so that correct scale perception was maintained."
By having the scene laid out in 3D space, 'shooting' it also became very natural, adds Sciolette. "We could use the same cameras, lens data, and animation from the actual set," he says. "When we then dialed our stereo interaxial distance (the distance between the cameras) it was in measurements that made sense to the scale of the physical set."
Another major benefit from using the tracked VFX cameras was that the team was able to render CG layers in stereo and have them fit seamlessly into the converted plate elements, which was especially important when Carter physically interacts with complex geometry. "In typical 2D visual effects, holdouts would suffice," says Willman. "But in 3D, the position of each CG limb must be correctly placed in depth relative to the converted plate element."
This technique didn't obviate the need for rotoscoping. In fact, the Cinesite team did massive amounts of rotoscoping. This immense labor relied on the services of several external vendors, notes Willman. To specify which elements needed roto, Cinesite developed a pre-process that allowed an artist to quickly place markers on an image to identify what was important. These markers also contained a name given by the artist and a hierarchical relationship to other markers. "In the end, these tags provided a visual reference to the vendor, a specific naming convention to be used for both roto and rotomation, and a layer hierarchy to match," says Willman. "We shipped this information as a PDF file containing the images and markings and an XML file containing the hierarchies." When the vendor returned the work, Cinesite wrote code to parse the delivered geometry and roto files and compare them to the XML specification file supplied.
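Validating a vendor delivery against an XML spec is a straightforward set comparison. The sketch below assumes a made-up spec schema (`<element name="..."/>` entries) purely for illustration; Cinesite's actual XML format and element names are not described in the article:

```python
import xml.etree.ElementTree as ET

# Hypothetical spec layout -- one <element> per expected roto/rotomation item
SPEC = """<shot>
  <element name="carter_head"/>
  <element name="carter_arm_L"/>
  <element name="flyer_wing"/>
</shot>"""

def check_delivery(spec_xml, delivered_names):
    """Compare vendor-delivered roto/geometry names against the XML spec.
    Returns (missing, unexpected): items the vendor omitted, and extra
    items (e.g. where the vendor extrapolated on a repetitive element)."""
    expected = {e.get("name") for e in ET.fromstring(spec_xml).iter("element")}
    delivered = set(delivered_names)
    return expected - delivered, delivered - expected
```

The "unexpected" set matters as much as the "missing" one: as Sciolette notes, vendors sometimes added extra shapes for repetitive elements, so an automated check has to account for exceptions rather than simply reject them.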
Back in-house, the shot was next assigned to a conversion artist who first had to connect the hundreds of named rotoshapes to their matching animated geometry. "To accomplish this in a reasonable amount of time, we wrote some tools to utilize the XML spec, analyze the roto and rotomation files, and then automatically connect the appropriate rotoshapes with their corresponding geometry into a Nuke script," explains Sciolette. "The result was a large node network of connected shapes, summing together into a projectable 3D scene representing the image to convert. Many times, extra shapes would be added if the vendor extrapolated on a repetitive element so the system had to be smart about accounting for exceptions or missed material."
For actual conversion, Cinesite again took a slightly different approach. "We used a warping mechanism to push the pixels from the original position to the new position by building a disparity and distortion map," says Willman. "The disparity map was built from the rendered depth map and the position of the new camera. Using this approach we were able to know just how far a pixel had traveled to find its new location. We then used that data to begin reconstructing the un-occluded areas."
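For a parallel-camera stereo rig, the relationship between depth and horizontal pixel shift is a simple one, and it is presumably the core of the disparity map described above. This is a textbook-style sketch, not Cinesite's implementation; the parameter names and the per-pixel loop are illustrative:

```python
def disparity_px(depth, interaxial, focal_px):
    """Horizontal shift (in pixels) for a point at `depth` scene units,
    using the standard parallel-camera model:
        disparity = interaxial * focal / depth
    Nearer points shift more; points at infinity shift toward zero."""
    return interaxial * focal_px / depth

def disparity_map(depth_row, interaxial, focal_px):
    """Convert one scanline of a rendered depth map into a disparity
    scanline. Knowing exactly how far each pixel traveled is what lets
    the compositor identify and reconstruct newly un-occluded areas."""
    return [disparity_px(z, interaxial, focal_px) for z in depth_row]
```

With, say, a 0.065-unit interaxial and a 1000-pixel focal length, a point 10 units away shifts 6.5 pixels between eyes, while a point 100 units away shifts only 0.65, which is the depth ordering a converted plate needs to preserve.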
If the depth maps weren't detailed enough or the geometry didn't follow contours exactly, the artist improved depth maps by using a suite of Nuke nodes and filters to reconstruct missing areas of depth, soften internal edges, and make other artistic improvements.
All reviews were done using the application RV from Tweak Software. Cinesite also wrote some plug-ins for RV that allowed supervisors to measure the amounts of disparity in a stereo render, thus allowing them to ensure that the depth from shot to shot remained consistent and as intended.
All images courtesy of Cinesite.
Creative COW Magazine is copyright 2006 - 2013 by Creative COW®. All rights are reserved. No reprint rights are granted except to educational institutions such as universities, colleges, art academies and other training academies. All other rights are expressly reserved.