What Is Physically-Based Animation

Physically-Based Animation (PBA) refers to an area of computer graphics in which the aim is to generate physically-plausible animations using Artificial Intelligence (AI). The animations are usually played using a virtual character in a 2D or 3D simulated environment. A nice example of a recent state-of-the-art result using PBA is shown below:

An example of a 3D humanoid character running under heavy perturbations. Physically-based animation enables the character to dynamically adjust its movements to the environment [1].

One of the main differences between PBA and traditional keyframe animation is that PBA can dynamically adjust to changes in the environment. Keyframe animation, on the other hand, is static and non-responsive in nature; it must be handled carefully, or it can easily produce unnatural movements like this:

If not handled carefully, traditional keyframe animation easily produces unnatural movements (footage from Fallout 76).

In this post, I will give a brief overview of the basic concepts in PBA along with the current state of research on this problem.

1. What Are the Common Approaches?

Researchers have been working on PBA for about two decades, and countless methods have been proposed for solving this problem. I would divide these methods into two categories: 1) search-based methods, and 2) reinforcement learning. In this section, I explain the core ideas behind these approaches.

1.1. Search-Based Methods

A classic yet powerful class of approaches for solving PBA is to use search-based methods for optimizing the movements. The basic idea behind these methods is fairly simple: 1) generate a number of action sequences, 2) evaluate them using forward simulation and computing some cost function, and finally, 3) choose the action sequence that minimizes the cost function. A simple illustration of this process is shown below:

This picture demonstrates the basic mechanism of search-based methods using a simple example. Here the aim is to control an object from the left side to the green circle on the right. The optimal trajectory is shown in blue, and the gray lines indicate the randomly generated trajectories. After computing the cost function for all generated trajectories, the trajectory with minimum cost (shown in black) is taken as the solution [2].
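
To make this loop concrete, here is a minimal Python sketch of the generate-evaluate-select idea. The `simulate` and `cost` functions are toy placeholders of my own; a real system would roll out a physics engine and use a task-specific cost:

```python
import numpy as np

def simulate(state, actions):
    """Placeholder forward simulation: roll the state forward under a
    sequence of actions. A real system would step a physics engine."""
    trajectory = [state]
    for action in actions:
        state = state + action  # toy point-mass dynamics
        trajectory.append(state)
    return trajectory

def cost(trajectory, goal):
    """Placeholder cost: distance from the final state to the goal."""
    return np.linalg.norm(trajectory[-1] - goal)

def search_best_actions(state, goal, n_sequences=1000, horizon=20):
    """1) generate action sequences, 2) evaluate them by forward
    simulation, 3) return the sequence with the minimum cost."""
    best_cost, best_actions = np.inf, None
    for _ in range(n_sequences):
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, 2))
        c = cost(simulate(state, actions), goal)
        if c < best_cost:
            best_cost, best_actions = c, actions
    return best_actions

# Hypothetical usage: steer a 2-D point from the origin to (10, 0).
best = search_best_actions(np.zeros(2), np.array([10.0, 0.0]))
```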

So far, a huge number of search-based methods have been proposed for solving PBA. The animation below was obtained using one of the best examples of such methods:

The result of a search-based method, in which offline optimization is used to optimize a parameterized controller that generates the movements [3].

The interesting point in this work is that it does not directly optimize the movements. Instead, it first defines a parameterized controller for synthesizing movements, and then optimizes the parameters of that controller. This enables the character to robustly handle random perturbations in the environment [3].
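
The sketch below illustrates this idea under toy assumptions: a simple evolution strategy (a stand-in for the more sophisticated offline optimization in [3]) tunes the parameters of a linear feedback controller instead of optimizing raw action sequences. The `env_reset` and `env_step` functions are hypothetical placeholders for a physics simulation:

```python
import numpy as np

STATE_DIM, ACTION_DIM, HORIZON = 4, 2, 100

def env_reset():
    """Toy placeholder for resetting the physics simulation."""
    return np.ones(STATE_DIM)

def env_step(state, action):
    """Toy placeholder dynamics and per-step cost."""
    state = state + 0.1 * np.concatenate([action, -action])
    return state, float(np.sum(state ** 2))

def controller(params, state):
    """A linear feedback controller: the optimizer tunes `params`
    rather than individual actions, so the same controller can react
    to perturbations at runtime."""
    W = params.reshape(ACTION_DIM, STATE_DIM)
    return np.tanh(W @ state)

def rollout_cost(params):
    """Total cost of one simulated episode under the controller."""
    state, total = env_reset(), 0.0
    for _ in range(HORIZON):
        state, c = env_step(state, controller(params, state))
        total += c
    return total

def optimize_controller(n_iters=100, pop_size=32, sigma=0.1):
    """Simple evolution strategy: sample parameter perturbations,
    keep the elite quarter, and average them into the new mean."""
    params = np.zeros(ACTION_DIM * STATE_DIM)
    for _ in range(n_iters):
        pop = [params + sigma * np.random.randn(params.size)
               for _ in range(pop_size)]
        costs = [rollout_cost(p) for p in pop]
        elite = np.argsort(costs)[:pop_size // 4]
        params = np.mean([pop[i] for i in elite], axis=0)
    return params
```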

1.2. Reinforcement Learning

Reinforcement Learning (RL) is a hot area of Machine Learning (ML) that studies a computational approach to learning from interaction [4]. In RL, an agent interacts with an environment: at each timestep, it observes the current state and takes an action, after which it receives a scalar reward along with an observation of the new state. The goal is to optimize the agent so that it accumulates the maximum possible reward over time by taking optimal actions. A schematic view of this interaction is shown below:

The agent-environment interaction in reinforcement learning [4].
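
In code, one episode of this loop is short. The sketch below uses the Gymnasium toolkit purely as my own illustration, with a random policy standing in for a learned agent:

```python
import gymnasium as gym

# One episode of the agent-environment loop on a continuous-control task.
env = gym.make("Pendulum-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0
for t in range(200):
    action = env.action_space.sample()  # random stand-in for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward  # the agent's goal is to maximize this sum
    if terminated or truncated:
        break
env.close()
print(f"Accumulated reward: {total_reward:.1f}")
```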

In the past few years, RL has received a lot more attention due to the remarkable results of Deep Reinforcement Learning (DRL) in Atari games [5] and the game of Go [6, 7]. These advances have also inspired several breakthroughs in RL for continuous control. One of the state-of-the-art methods in this category is shown below:

An example of using reinforcement learning to imitate acrobatic movements by watching YouTube videos [8].

The pipeline used in the above work consists of three stages: 1) pose estimation, 2) motion reconstruction, and 3) motion imitation. The input video is first processed by the pose estimation stage, which predicts the pose of the actor in each frame. Next, the motion reconstruction stage consolidates the pose predictions into a reference motion and fixes artifacts that might have been introduced by the pose predictions. Finally, the reference motion is passed to the motion imitation stage, where a simulated character is trained to imitate the motion using RL [8].
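
To make the data flow explicit, here is a rough, runnable sketch of how the three stages fit together. All of the functions below are toy stand-ins written for illustration, not code from [8]:

```python
import numpy as np

def estimate_poses(frames):
    """Stage 1: predict the actor's pose in each video frame.
    (Toy stand-in: a real system runs a pose-estimation network.)"""
    return [np.random.randn(3) * 0.1 for _ in frames]

def reconstruct_motion(poses, window=5):
    """Stage 2: consolidate noisy per-frame predictions into a smooth
    reference motion (here, a simple moving-average filter)."""
    poses = np.stack(poses)
    kernel = np.ones(window) / window
    return np.stack([np.convolve(poses[:, d], kernel, mode="same")
                     for d in range(poses.shape[1])], axis=1)

def imitation_reward(character_pose, reference_pose):
    """Stage 3 (reward term only): during RL training, the simulated
    character is rewarded for tracking the reference motion."""
    return float(np.exp(-np.sum((character_pose - reference_pose) ** 2)))

# Hypothetical usage: frames -> poses -> reference motion -> RL reward.
frames = [None] * 60  # placeholder video frames
reference = reconstruct_motion(estimate_poses(frames))
r = imitation_reward(np.zeros(3), reference[0])
```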

2. Which Games Use Physically-Based Animation?

Due to its high computational cost, PBA is not used extensively in the animation pipelines of video games. However, it is interesting to know that almost any game with an intensive animation system uses at least a few PBA techniques. Off the top of my head, the best examples include FIFA, PES, and Assassin’s Creed.

FIFA is a good example of a big game title that uses PBA in its animation pipeline.

Putting big game titles aside, there are also a few indie games whose animation pipelines are implemented solely using PBA. Among these, QWOP and Toribash are two of the most successful (if you know other good examples worth mentioning here, please let me know). You can find a lot of gameplay videos of these games on YouTube. However, I strongly recommend downloading and trying them yourself so you can feel the power and complexity of PBA. You can see an example movement from Toribash below:

A few games have implemented their whole animation pipeline using physically-based animation (footage from Toribash).

3. What Are the Open Problems?

So far I have only told you the good news. The bad news is that current approaches to PBA still cannot synthesize robust movements at a moderate computational cost, so the field has many open problems. Here are the ones I consider most important:

  1. How can we develop efficient methods for solving PBA?
  2. How can we use PBA in real-time applications and games?
  3. How can we evaluate the quality of an animation (in terms of smoothness, naturalness, etc.)?
  4. How can we use PBA to design novel game mechanics or human-computer interaction interfaces?
  5. How can PBA affect the evolving augmented, mixed, and virtual reality technologies?

4. Conclusion

This post was a brief introduction to PBA. Compared to traditional keyframe animation techniques, PBA has the potential to synthesize movements with more flexibility and diversity. Current approaches to PBA use search-based methods and/or reinforcement learning. Despite the remarkable recent advances in the field, there is still a lot of room for improving the current approaches in terms of computational cost and robustness. That is why PBA has not yet completely found its way into the game development pipeline.

I hope this post helped you catch a glimpse of the physically-based animation problem. Finally, I would love to hear any comments or questions that you might have.

Who Am I

My name is Amin Babadi. Since 2017, I have been a Ph.D. candidate in computer science at Aalto University, Finland. I work under the supervision of Prof. Perttu Hämäläinen, and my current research focuses on developing efficient, creative movement AI for physically-simulated characters in multi-agent settings. In particular, the ultimate goal of my research is to bridge the gap between deep reinforcement learning and online optimization.

Prior to my Ph.D., I had 10 years of experience in the video game industry. Specifically, I worked on several commercial games from various genres, including first-person shooter, two-player football, and classic adventure. In these projects, I was responsible for different programming disciplines including AI, animation, gameplay, and physics.

References

  1. Peng, X. B.; Abbeel, P.; Levine, S. & van de Panne, M., “DeepMimic: Example-guided deep reinforcement learning of physics-based character skills,” ACM Transactions on Graphics, ACM, 2018, 37, 143:1–143:14
  2. Hämäläinen, P.; Rajamäki, J. & Liu, C. K., “Online control of simulated humanoids using particle belief propagation,” ACM Transactions on Graphics, ACM, 2015, 34, 81
  3. Geijtenbeek, T.; van de Panne, M. & van der Stappen, A. F., “Flexible muscle-based locomotion for bipedal creatures,” ACM Transactions on Graphics, ACM SIGGRAPH, 2013, 32
  4. Sutton, R. S. & Barto, A. G., “Reinforcement learning: An introduction,” MIT Press, 2018
  5. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G. & others, “Human-level control through deep reinforcement learning,” Nature, 2015, 518, 529
  6. Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M. & others, “Mastering the game of Go with deep neural networks and tree search,” Nature, 2016, 529, 484–489
  7. Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A. & others, “Mastering the game of Go without human knowledge,” Nature, 2017, 550, 354
  8. Peng, X. B.; Kanazawa, A.; Malik, J.; Abbeel, P. & Levine, S., “SFV: Reinforcement learning of physical skills from videos,” ACM Transactions on Graphics, ACM, 2018, 37