“Grimm” Creatures Transform Oregon Effects Studio
Portland-based HiveFX must balance technical expertise with graphic artistry to create stunning digital visual effects for NBC’s hit TV show, “Grimm.”
Autodesk Media & Entertainment Tech Innovators talked with Jim Clark, president of the show’s special-effects studio, and executive producer Gretchen Miller (hive-fx.com). What follows are excerpts from that interview.
Digital Special Effects
Blyler: HiveFX is a digital visual-effects company. How do created creatures fit within the production flow of NBC’s “Grimm” TV show?
Clark: Time is at a premium on the set. All they need to do for us is hand-place the markers on the actor’s face. We’ve worked with the studio for about a year to get the optimal quantity and positioning for the markers. After the footage with the markers in place is shot, it is cut and sent to us. We pull our own Digital Picture Exchange (DPX) sequences from that raw footage, applying a proprietary tracking process to tack a digital head onto a real body using the tracking markers. Then we will either paint out the markers or replace the real head with our own digital one (see Figure 1).
|Figure 1: White dot markers, placed on the actor’s face, are used to optimize the tacking of a digital head onto the real body. (“Lonely Hearts,” Grimm Season 1: Episode 5)|
Blyler: Creating a unique digital head for each character seems like a lot of work.
Clark: When we originally started with “Grimm,” we would keep the live-action face throughout the entire shot and then integrate the computer-generated (CG) head on top of it. After the creature transformations, we remove the CG head. But this approach was fairly limiting and very complex; we had to track every single micromotion. It became easier to pre-build a CG, photo-real version of the actor’s face. Then, when the footage comes in, we’re ready with the digital head and can stay digital throughout the entire morphing sequence. Many times, we’ll try to end on the live action to get back to a realistic human face. The biggest challenge is trying to integrate those two texturally—lighting-wise and skin-wise—to each other.
Blyler: I understand that you have a proprietary algorithm for achieving a very realistic transformation from human to creature.
Clark: The under-the-skin facial and body displacements, bumping, lumping, and bone shifting were traditionally done as a two-dimensional (2D) effect in compositing. We have begun using three-dimensional (3D) models, which gives the animators greater control over how to move those lumps around. We’re starting to capture more of that morphing and distortion in a 3D model. It’s all about meeting tight schedules. We have to deliver an episode every week. A lot of times, we’ll deliver on Wednesday into mid-day Thursday for a Friday airing.
Blyler: How do you interact with the studio to create these amazing creature effects?
Clark: We meet with the studio at the beginning of every episode to review edits with the supervising director and visual-effects supervisor in their office. After that, everything is handled over the web. In terms of what the creatures will look like, the studio provides us with concept art weeks ahead of time. We have a 3D sculptor who uses ZBrush to do the work. Then we get approval and sign off on those characters. The studio has been known to change the character design completely up to three days before airing. We do have an optimized process that allows us to re-create that character and punch it through the pipeline very efficiently. If we couldn’t meet this turnaround time, we’d lose the contract. You can’t make one mistake. You cannot miss a single episode.
Blyler: With the tight schedules and need for high-quality results, you must have an impressive array of software tools and hardware platforms. Tell me about both.
Clark: Autodesk Maya is our primary 3D animation tool. For texture painting, we use Autodesk Mudbox, Mari and some BodyPaint. In Season 1 we rendered skin with V-Ray from the Chaos Group and created the hair in Cinema 4D. We then tied the hair to the skin during compositing, which is done in Adobe After Effects and Mocha (see Figure 2).
|Figure 2: a) 3D animations are done in Autodesk Maya; b) The final effects are both detailed and realistic, as shown with this fire breathing Dämonfeuer from “Plumed Serpent,” Grimm Season 1: Episode 14.|
This season – Season 2 – we’re rigging and animating in Maya, then point caching to C4D with a new custom plug-in we had written. All skin surfacing, hair and lighting for the characters this season will be done in C4D. Eventually we’ll likely transition to C4D entirely, because of the fast learning curve, friendly interface and great toolset, including the integration of Python for custom scripting by our technical director (TD). To maintain the quality of our high-definition digital visual effects, we have had to expand our tool suite over the last few years. Supporting these applications are over 40 quad-core Intel Xeon processors (about 160 dedicated cores) and 500 GB of random-access memory (RAM). Originally, we were using eight-core Macs running in PC mode on a dual-boot. But we were getting poor graphics quality.
Blyler: Why didn’t you run on PCs directly?
Clark: We had to custom-build all of our 3D workstations. At the time, you could only buy prebuilt quad-core PCs. For 3D performance reasons, we needed more cores. So we ended up going with 12-core machines – dual-processor i7s with 48 GB of RAM, NVIDIA Quadro 4000 cards and 30-in. monitors. So our entire compositing team uses Macs, while the whole render farm and creature CG teams run on PCs. In addition to the animation, you need all those cores for local rendering and testing. Before you send a shot to a render farm, you have to test-render it, which is a very data-intensive process. Depending upon the complexity of the shot, every frame could take up to 20 min. to render. Slow machines would burn up a lot of the artists’ time. We currently average 3.5–4 min. per frame when rendering skin and hair in C4D.
Blyler: I want to explore the concept of a “render” farm. But first, let me ask about any open-source tools you might use.
Clark: We don’t really use many open-source applications. We do have a proprietary plug-in that takes point data from Maya to C4D. This allows us to cache all the point data from a Maya animation and make it available to Cinema 4D for rendering.
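The idea behind such a point cache can be sketched in a few lines of Python. This is a hypothetical illustration of the technique, not HiveFX’s actual plug-in or file format: each frame’s vertex positions are written out so the renderer can replay the deformation without re-evaluating the original rig.

```python
import json

def write_point_cache(path, frames):
    """Write per-frame vertex positions to a simple JSON cache.

    `frames` is a list of frames; each frame is a list of (x, y, z)
    vertex positions exported from the animation package.
    """
    data = {
        "frame_count": len(frames),
        "point_count": len(frames[0]) if frames else 0,
        "frames": [[list(p) for p in frame] for frame in frames],
    }
    with open(path, "w") as f:
        json.dump(data, f)

def read_point_cache(path):
    """Load the cache back; the renderer applies these positions
    to a mesh with an identical vertex order."""
    with open(path) as f:
        data = json.load(f)
    return [[tuple(p) for p in frame] for frame in data["frames"]]
```

A production cache would use a compact binary layout rather than JSON, but the contract is the same: the two applications only need to agree on vertex order and frame numbering.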
We also use a slick plug-in we had written called “REPATH” that searches our network for lost or moved texture images and reconnects broken links.
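At its core, a link-repair tool like this is a filename search across the studio’s texture directories. The sketch below is a generic Python reconstruction of the idea, not the actual REPATH plug-in; the function names are hypothetical.

```python
import os

def find_texture(filename, search_roots):
    """Walk the given root directories and return the first path
    whose basename matches the missing texture's filename."""
    for root in search_roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            if filename in filenames:
                return os.path.join(dirpath, filename)
    return None

def repath(broken_links, search_roots):
    """Map each broken texture path to its rediscovered location
    (or None if the file truly is gone), so the scene file's
    references can be rewritten in one pass."""
    return {
        old: find_texture(os.path.basename(old), search_roots)
        for old in broken_links
    }
```

A real tool would also handle duplicate filenames and cache the directory walk, since scanning a studio network on every broken link would be slow.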
Blyler: Let’s talk about how you back up and secure all of this digital intellectual property that you’ve created.
Clark: Our DPX visual-effect sequences for “Grimm” go over the web to editing and then a color house. We have a local file server dedicated just to our work for “Grimm”—plus we have other dedicated servers for our Nike account and commercial work. The data on each server is replicated two times to other locations, every hour throughout the day.
Archiving our data-intensive files was a challenge, but retrieving them for future assignments was a real pain. Recently, we put in two 48-TB online archival striped multi-drive RAIDs that are backed up from our primary file servers every hour. At night, RAID Archive 1 is mirrored to Archive 2, so there are no data-bandwidth conflicts during the daytime and we’re assured of no data loss. This way, we maintain five copies of the data at all times.
All of this is transparent to the artists, who work directly on the main file server. Now, if a primary file server tanks, we pull it off the network, rename the backup server with the failed server’s name, and we’re up and running in minutes – it’s entirely invisible to the artists, and all file paths are maintained. Nobody loses any data or time.
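The replication scheme described here amounts to periodic one-way mirroring that preserves the directory layout, which is what makes the rename-and-resume failover possible. A minimal Python sketch, assuming the servers are visible as local mount points (a simplification of the real setup):

```python
import os
import shutil

def mirror(src, dst):
    """One-way mirror: copy new or changed files from src to dst,
    preserving the directory layout so every file path survives a
    failover rename unchanged."""
    for dirpath, _dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            s = os.path.join(dirpath, name)
            d = os.path.join(target_dir, name)
            # Copy only if missing or newer on the source side.
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)
```

Run hourly from the primary server to Archive 1, and nightly from Archive 1 to Archive 2, this gives the staggered schedule Clark describes: daytime replication never competes with the overnight archive-to-archive mirror.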
Blyler: Have you lost critical data in the past?
Clark: We’ve never lost a piece of data in 12 years. We have to be super-critical because even one down day can cost us an account. Aside from that, having a fast retrieval system allows us to repurpose assets and share them. We have to get them at a moment’s notice. It’s flawless. I think we did 1.4 million files for “Grimm” in Season 1. Now, that’s all on archive. We scrubbed the main servers at the beginning of Season 2, for which we’re already doing character modeling for Episodes 1, 2 and 4.
Miller: We are a preferred digital-special-effects vendor for “Grimm” (see Figure 3). We helped to develop the look and techniques for all of the creature deformations from day one. In fact, the studio awarded us Episodes 1, 2, 3, 4, and 5 of Season 1. It proved to be more than we could handle—especially with all of the last-minute changes. So we turned down every other episode. This year, we are way more up to speed. The scary part about being the preferred vendor is that we still have to win every episode. We have no guarantee that we’ll get the next episode.
Blyler: It seems like the first season of a new show is a risky undertaking.
Miller: We have to bid on every episode. Some episodes will have very few visual-effects shots while others will have a lot. So we must constantly balance the team staffing levels to data load. Season 1 was stressful too because there was no guarantee that “Grimm” would be successful. Still, we had to invest in the employees and the infrastructure to support the show.
|Figure 3: A jackal-like creature, known as a Schakal, from “Three Coins in a Fuchsbau,” Grimm Season 1, Episode 13.|
In Season 1, we had only signed on for the first 12 episodes. Later, when we were awarded the rest of the season, it was a big deal. Now, in Season 2, we have a good data pipeline and process flow.
Blyler: What were the most challenging episodes in terms of special effects?
Clark: The satyr episode had something like 40 different creature shots (“Lonely Hearts,” Season 1: Episode 5). That was a fun episode—with CG frogs and the like. The week for that episode was busy, too, with lots of pig-like creatures for another episode.
We do the hardest work, the hardest shots. The less-involved shots, like painting out wires, go to a shop in Los Angeles, CA.
Blyler: You mentioned a render farm. How does that work?
Clark: We use a 3D-animation render-management tool called Squidnet. It allows us to work on a distributed network. All of the artists connect their shots to the Squidnet server, which manages priorities and who can render what.
This is a big deal because we have lots of different tools. For example, depending upon the account, our artists will render using Autodesk 3ds Max or Maxon’s Cinema 4D with Chaos Group’s V-Ray. We also use Autodesk’s Maya with V-Ray. All of these tools work under Squidnet on our render farm.
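A render manager of this kind is, at its core, a priority queue of jobs tagged with the renderer they require; workers pull the highest-priority job they can actually run. The Python sketch below is a generic illustration of that dispatch pattern, not Squidnet’s actual API.

```python
import heapq
import itertools

class RenderQueue:
    """Minimal render-farm dispatcher: artists submit shots with a
    priority and a required renderer; each worker pulls the highest-
    priority job its installed renderer can handle."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-break: preserves submit order

    def submit(self, shot, renderer, priority):
        # Lower number = higher priority, as in most queue systems.
        heapq.heappush(self._heap, (priority, next(self._counter), shot, renderer))

    def next_job(self, available_renderers):
        """Pop the best job runnable on this worker; jobs needing a
        renderer the worker lacks are put back untouched."""
        skipped, job = [], None
        while self._heap:
            candidate = heapq.heappop(self._heap)
            if candidate[3] in available_renderers:
                job = candidate
                break
            skipped.append(candidate)
        for item in skipped:
            heapq.heappush(self._heap, item)
        return None if job is None else {"shot": job[2], "renderer": job[3]}
```

The per-renderer matching is what lets one farm serve 3ds Max, Maya, and Cinema 4D jobs side by side: a worker without a given renderer simply never sees those jobs.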
Blyler: Was it difficult for an Oregon company to win the special-effects production for “Grimm?”
|HiveFX’s president, Jim Clark, and executive producer, Gretchen Miller, talk about digital special effects at their studio in Portland, OR.|
Clark: It was really challenging. We had to pitch a blind bid with a half-dozen other companies here and in the Los Angeles area. The studio wanted to do the work in Oregon because of the generous filming rebate. The blind pitch was necessary so the network wouldn’t know who was located where. But after we won the first pitch, the network found out that we were based in Oregon. They made us do a second round of pitches to be sure we could do the work. We won that round too. The problem was that we were competing against the big animation houses—with big overheads. We had to produce feature-level visual-effects work in just three weeks. Normally, to get feature-quality effects, you have five times the budget and schedule.
Blyler: Are you profitable? This seems like a market with tight margins.
Clark: It is not a profitable industry. You have to do volume. For us, it means striving for a balance of shots in more episodes to get a more consistent flow of work. It’s hard to maintain staff levels. Very few companies can do the level of work we do on these budgets. Working in Oregon really helps us accomplish this.
On the other hand, the success of “Grimm” means we now have a lot of credibility and clout in the industry. This means that a lot of artists are finding us from all over the world. They want to work on something cool like “Grimm.”
Blyler: What are your plans for the future?
Miller: The next thing for a company such as ours is to create content. We want to grow from a service business to owning something—creating content. Jim (Clark) wrote a screenplay with a lot of visual effects called “CHIHUANHAS” – half piranha, half Chihuahua. We hope to get it going and use our company to produce it. That way, we can control the quality and the cost, and own something.
Blyler: Thank you.
John Blyler is the editorial director at Extension Media. He is an affiliate professor in System Engineering at Portland State University.