Why Computer Animation Looks So Darn Real


Walt Disney once said, “Animation can explain whatever the mind of man can conceive.” For Disney, this was animation’s magic — its power to bring imagination to life.

Disney died in 1966, 11 years before computer animation’s heralded debut in Star Wars, and he likely never imagined how lifelike animation would become, or how pervasively it would be used in Hollywood. As viewers, we now hardly blink when we see a fully rendered alien planet or a teddy bear working the grocery store checkout counter.

Animation has largely shed its reputation as a medium for children; it’s been used far too successfully in major films to remain confined to kids. After all, who hasn’t had the experience of going to an animated film and finding the theatre packed with adults? Who doesn’t secretly remember the moment they were a little turned on during Avatar?

Considering animation’s rapid evolution, it sometimes feels like we’re just weeks away from Drake and Chris Brown settling their beef via a battle of photorealistic holograms.

So how did we get here? How did computer animation come to look so darn real?

From the MoMA to Casper

Computer animation debuted in 1967 in Belgium, and soon after at the MoMA, with Hummingbird, a ten-minute film by Charles Csuri and James Shaffer. The film depicted a line drawing of a bird programmed with realistic movements and was shown to a high-art crowd, who probably weren’t fantasizing about the medium’s potential to create a sassy talking donkey.

In 1972, Ed Catmull, future co-founder of Pixar, created the first 3D computer-animated human hand and face, which was incorporated into the 1976 sci-fi thriller Futureworld. Computer animation didn’t capture the mainstream’s attention, though, until the classic trench run sequence in Star Wars, which used 3D wireframe graphics for the first time. It was the product of a lot of guesswork and brilliance, particularly by animator Larry Cuba. If you have 10 minutes to kill, this old-school video of Cuba explaining how they pulled it off is fascinating:

The late seventies were a time, though, when innovation didn’t happen at the breakneck pace we’re accustomed to today. The next big moment for computer animation didn’t come until 1984, when a young member of George Lucas’ Lucasfilm team, John Lasseter, spearheaded a one-minute CGI film called The Adventures of André and Wally B., which pioneered the use of super-curved shapes to create fluid character movement, a staple of future films by DreamWorks and Pixar, where Lasseter would serve as CCO.

1986’s Labyrinth introduced the first 3D animal — an owl in the opening sequence — and 1991’s Terminator 2: Judgment Day introduced the first realistic human movements by a CGI character, not to mention Arnold Schwarzenegger’s obsession with voter demographics.

In 1993, computer animation’s reputation soared with the release of Jurassic Park and its incredibly realistic dinosaurs. The creatures sent adolescent boys into fits of delight, even though the film only used computer-animated dinosaurs for four of the fourteen minutes they were on screen.

Then came 1995 and the release of Casper, which introduced the first CGI protagonist to interact realistically with live actors, though that interaction was predominantly Christina Ricci trying to seduce a ghost.

But Casper was just a warm-up for Toy Story.

The Toy Story and Shrek Era

Six months after Casper, the first feature-length CGI film was released: Toy Story. It was an incredible four-year undertaking by Pixar’s John Lasseter and his team; the film was 81 times longer than Lasseter’s first computer-animated film a decade before. They faced two potentially fatal challenges: a relatively tiny $30 million budget, and a small, inexperienced team. Of the 27 animators, half were rumored to have been borderline computer-illiterate when production began.

“If we’d known how small our budget and our crew was,” remembered writer Peter Docter, “we probably would have been scared out of our gourds. But we didn’t, so it just felt like we were having a good time.”

They thrived. The animators began by creating clay or computer-drawn models of the characters; once they had the models, they coded articulation and motion controls so that the characters could do things like run, jump and laugh. This was all done with the help of Menv, a modeling environment tool Pixar had been building for nine years. Menv’s models proved incredibly complex — the protagonist, Woody, required 723 motion controls. It was a strain on man and machine alike; it took 800,000 machine hours to complete the film, and it took each animator a week to successfully sync an 8-second shot.
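Pixar has never published Menv’s internals, but the basic idea of driving a character through named motion controls can be sketched in a few lines of Python. The control names, keyframe values and simple linear interpolation below are illustrative assumptions only; the real system was far richer.

```python
# A toy sketch of "motion controls": each control is a named degree of
# freedom, and the animator keyframes values for it over time. All names
# and numbers here are invented for illustration; this is not Pixar's Menv.

def interpolate(keyframes, frame):
    """Linearly interpolate a control's value at an arbitrary frame."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            return keyframes[lo] + t * (keyframes[hi] - keyframes[lo])

# A rig is a set of controls, each mapping frame numbers to values.
rig = {
    "jaw_open":        {0: 0.0, 6: 0.8, 12: 0.1},  # mouth opens, then closes
    "right_arm_raise": {0: 0.0, 12: 1.0},          # arm comes up over 12 frames
    "eyebrow_left":    {0: 0.2, 12: 0.6},
}

def pose_at(rig, frame):
    """Evaluate every motion control at one frame (rounded for readability)."""
    return {name: round(interpolate(keys, frame), 3) for name, keys in rig.items()}

print(pose_at(rig, 9))
# {'jaw_open': 0.45, 'right_arm_raise': 0.75, 'eyebrow_left': 0.5}
```

Multiply that handful of controls by the 723 Woody needed, across every frame of an 81-minute film, and the 800,000 machine hours start to make sense.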

“There are more PhDs working on this film than any other in movie history,” Pixar co-founder Steve Jobs told Wired at the time. “And yet you don’t need to know a thing about technology to love it.”

Jobs was right. Audiences loved the film not just because of the impressive animation and three-dimensional realism, but also because of a superb script and voice work by Tom Hanks, Tim Allen and Don Rickles. It established computer-animated films’ reputation for pairing stunning visuals with compelling stories. That reputation was key, as computer animation’s evolution hinged on the willingness of studios to invest in it.

In 1998, DreamWorks’ Antz and Pixar’s A Bug’s Life maintained computer animation’s stellar reputation, while briefly terrorizing countless entomophobic parents. The flood scene in Antz received widespread praise, particularly from those who couldn’t wait for the bugs to die.

Computer animation’s next breakthrough came in 2001 with Shrek, which delved into true world-building: it included 36 separate in-film locations, more than any CGI feature before it. DreamWorks also made a huge advancement by taking the facial muscle rendering software it used in Antz and applying it to the whole body of Shrek’s characters.

“If you pay attention to Shrek when he talks, you see that when he opens his jaw, he forms a double chin,” supervising animator Raman Hui explained, “because we have the fat and the muscles underneath. That kind of detail took us a long time to get right.”

Shrek brought a new age of realism. Hair, skin and clothes flowed naturally in the elements; the challenge of making Donkey’s fur flow smoothly helped animators render the realistic motion of grass, moss and beards (and other things hipsters like). Shrek grossed nearly a half billion dollars, won the first-ever Academy Award for Best Animated Feature, and established DreamWorks as an animation powerhouse, alongside Disney-Pixar.

Advancements in Photorealism and Live Action

In computer animation, there are two kinds of “realness.” First, there’s the “realness” of Shrek, where the animation is still stylized and doesn’t strive for photorealism. Then, there’s photorealistic animation, which aims to make computer animation indistinguishable from live action.

The same year Shrek came out also saw the release of Final Fantasy: The Spirits Within, the first photorealistic, computer-animated feature film. It was made using motion-capture technology, which translates actors’ recorded movements into animation.

To make the final animated product, 1,327 live-action scenes were filmed. Though the film flopped, the photorealistic visuals were a smash success. The film’s protagonist, Aki Ross, made the cover of Maxim and was the only fictional character to make its list of “Top 100 Sexiest Women Ever.” Aki was a painstaking advancement in photorealistic animation; each of her 60,000 hairs was individually animated, and she was made up of about 400,000 polygons. Entertainment Weekly raved that “Calling this action heroine a cartoon would be like calling a Rembrandt a doodle,” while naming Aki Ross to its “It” girl list.

The advancements in photorealism and motion-capture animation kept coming. In 2002’s The Lord of the Rings: The Two Towers, Gollum was the first motion-capture character to interact directly with live-action characters. Two years later, Tom Hanks’ The Polar Express ushered motion-capture films into the mainstream.
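The pipelines behind Gollum and The Polar Express were vastly more sophisticated, but the core idea of motion capture (measuring an actor’s joint movements and retargeting them, frame by frame, onto a digital character) can be shown with a toy Python example. The joint names, scales and offsets below are invented assumptions, not any studio’s actual pipeline.

```python
# A toy sketch of motion-capture retargeting: captured joint rotations from
# an actor are mapped, frame by frame, onto a digital character's skeleton.
# Joint names, scales, and offsets are invented purely for illustration.

actor_take = [
    {"hip": 5.0, "knee": 40.0, "elbow": 90.0},  # one captured frame (degrees)
    {"hip": 6.0, "knee": 35.0, "elbow": 85.0},
]

# The character rarely shares the actor's proportions or rest pose, so each
# captured joint gets its own scale and offset on the way across.
retarget_map = {
    "hip":   ("character_hip",   1.0,  0.0),
    "knee":  ("character_knee",  1.1,  0.0),   # longer legs bend slightly more
    "elbow": ("character_elbow", 1.0, -5.0),   # rest pose differs by 5 degrees
}

def retarget(frame):
    """Translate one frame of actor motion into character joint rotations."""
    out = {}
    for joint, angle in frame.items():
        target, scale, offset = retarget_map[joint]
        out[target] = round(angle * scale + offset, 2)  # rounded for readability
    return out

animation = [retarget(f) for f in actor_take]
print(animation[0])
# {'character_hip': 5.0, 'character_knee': 44.0, 'character_elbow': 85.0}
```

Real pipelines add full skeletal hierarchies, filtering and manual cleanup on top of this, but that frame-by-frame translation step is the heart of it.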

Photorealistic animation’s quantum leap came in 2009 with Avatar, a project James Cameron had delayed nearly a decade to allow the technology to catch up to his vision. Cameron commissioned the creation of a camera that recorded actors’ facial expressions for animators to use later, allowing live action and animation to be synced perfectly. Cameron demanded perfection; he reportedly ordered that each plant on the alien moon of Pandora be individually rendered, even though each one contained roughly one million polygons. No wonder it took nearly $300 million to produce Avatar.

Cameron’s goal was to create a film where the audience couldn’t tell what was animated and what was real. He succeeded. Now, the question is, “What’s next?”

What’s Next

Most people think that the animated rendering of humans hasn’t been perfected yet; Cameron’s 10-foot blue animated Na’vi aliens in Avatar were seen as an easier venture than rendering humans. But Cameron doesn’t think that’s the case.

“If we had put the same energy into creating a human as we put into creating the Na’vi, it would have been 100% indistinguishable from reality,” Cameron told Entertainment Weekly. “The question is, why the hell would you do that? Why not just photograph the actor? Well, let’s say Clint Eastwood really wanted to do one last Dirty Harry movie looking the way he did in 1975. He could absolutely do it now. And that would be cool.”

Cameron has repeatedly emphasized that he doesn’t view computer animation as a threat to actors, but rather as a tool to empower and transform them.

And if that means we get to experience 1975 Clint Eastwood’s career again, well, that would just go ahead and make our day.

Read more: http://mashable.com/2012/07/09/animation-history-tech/
