Artistry emulated in the tools, continued

      The next stage is where we mix traditional techniques with rule-based techniques. A good example of this is inverse kinematics, where the software knows what the tool does -- it has logic about how the different joints relate to each other.

      The tool culture develops new controls, interfacing traditional animation with articulated skeletal models. So there's some kind of reality plugged into and interfaced with the traditional techniques.

        The goal is to make the work more realistic.
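As a minimal sketch of what "the software knows how the joints relate" means, here is the standard analytic inverse-kinematics solution for a planar two-joint limb. The function and its names are my own illustration, not taken from any particular animation package:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-joint limb.

    Given a target point (x, y) for the tip of the limb and the two
    segment lengths l1 and l2, return the shoulder and elbow angles
    (in radians) that place the tip on the target.
    """
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        raise ValueError("target out of reach for this limb")
    # Law of cosines gives the elbow angle from the triangle
    # formed by the two segments and the line to the target.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to the target, minus the offset
    # introduced by the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

The animator (or the performance data, later on) supplies only the target point; the rule-based layer works out the joint angles.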

      The next stage is data-driven: we pull information in from the real world. This is very often where the scientific computing people live. And the military visual simulation people live in this world too.

In terms of animation and motion, the idea is to capture a creative performance.


          You've probably seen the tangoing gas tanks. The first one of that series of commercials was done by hooking an optical or electromagnetic motion capture system up to tango dancers and having them perform, recording the orientation and placement of their joints. Then you take that and plug it into the inverse kinematics. See, you have to have that part first. And then you plug in the information from the performance.

      The idea here is to start with authentic data and then clean it up, and to take advantage of stuff we don't understand yet, like how people tango. Okay? So we take advantage of complex human performance abilities to pull that in, and then we mess with it using existing tools.
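That "clean it up" step can be as simple as low-pass filtering the recorded joint channels before they go back into the inverse kinematics. A minimal sketch, with a hypothetical function name and no real capture pipeline behind it:

```python
def smooth_channel(samples, window=5):
    """Moving-average filter over one captured joint-angle channel.

    Captured performance data is noisy; a simple filter like this
    removes jitter while keeping the overall shape of the motion,
    so the cleaned curve can drive the rig.
    """
    half = window // 2
    cleaned = []
    for i in range(len(samples)):
        # Average over a window clamped to the ends of the recording.
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        cleaned.append(sum(samples[lo:hi]) / (hi - lo))
    return cleaned
```

Real capture systems use more careful filters than this, but the principle is the same: authentic data first, then cleanup, then the existing tools.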

      The next stage is where we emulate the results of creative effort.


        The goal is greater control and room for refinement.

      When they did go motion, they would do one pass, and then that's all they had. It's like the performance data for motion capture: you do it, and that's all you have. That's not efficient in a digital world.

      And it's also not productive, and it doesn't give you much room for creativity.

Now, something that needs to be done but hasn't really gotten very far is the reverse engineering of real-time performances into procedural motion. And that would be really something.

        But that's really hard. I don't know of much happening in that area.

      The next phase, which is now in the research world, is generating the non-deterministic.

        In other words, things that are not known in advance, not fully determined: autonomous character behaviors. You have to have some input from physics, and from animal motion and behavior, which is called ethology.

      So it's starting to get kind of cross-disciplinary again here. And you take into account collision detection and deformation: when something bumps into something else, does it deform?

Multi-layered constraint-based motion.

Emotion-based animation.

Automated learning of efficient locomotion.

Environment sensing.
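As a toy illustration of autonomous behavior with environment sensing, consider an agent that heads for a goal but steers away from whatever its senses report nearby. Everything here is an invented sketch, not a reference to any real system:

```python
import math

class Agent:
    """A minimal autonomous character: it senses nearby obstacles
    and steers away from them while heading toward a goal."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def sense(self, obstacles, radius):
        """Environment sensing: report only obstacles within range."""
        return [(ox, oy) for ox, oy in obstacles
                if math.hypot(ox - self.x, oy - self.y) < radius]

    def step(self, goal, obstacles, speed=0.1, radius=1.0):
        # Default behavior: move toward the goal.
        dx, dy = goal[0] - self.x, goal[1] - self.y
        # Avoidance behavior: each sensed obstacle pushes the
        # agent away from itself, bending the path.
        for ox, oy in self.sense(obstacles, radius):
            dx += self.x - ox
            dy += self.y - oy
        norm = math.hypot(dx, dy) or 1.0
        self.x += speed * dx / norm
        self.y += speed * dy / norm
```

The behavior is not scripted frame by frame; it emerges each step from what the character senses, which is the point of this research phase.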

        What you end up with is a real-time, interactive, and adaptive 3-D world, which can effectively kind of write its own story in a way, if people can interact with it. And that sets the stage for continuous feeds, such as to interactive television.

      There are characters.

        They evolve.

        They interact.

        Viewers interact with them,

        and then it just goes.

      Now, nobody really knows what all the front end for this will look like.

      It's going to be higher-level scripting languages. We need to raise the level of abstraction of the controls over 3-D by quite a bit.
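Since nobody knows what those languages will look like, here is a purely hypothetical sketch of what "raising the level of abstraction" could mean: a high-level statement expands into the low-level channel commands the current tools already understand. Every verb and name here is invented for illustration:

```python
# A high-level script names behaviors, not frames or channels.
SCRIPT = [
    ("walk", "hero", (10, 0)),
    ("look_at", "hero", "door"),
]

def expand(statement):
    """Expand one high-level statement into low-level steps."""
    verb, character, arg = statement
    if verb == "walk":
        # One verb fans out into gait selection plus a move target.
        return [("set_gait", character, "walk"),
                ("move_to", character, arg)]
    if verb == "look_at":
        return [("orient_head", character, arg)]
    raise ValueError(f"unknown verb: {verb}")

low_level = [step for s in SCRIPT for step in expand(s)]
```

The animator works at the level of the script; the expansion into joint-level control happens underneath, which is the abstraction shift the text is pointing at.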