Featured Projects

Impulsion Engine

Impulsion is a software platform developed to explore new concepts in social AI for lifelike interactive characters. We strive to improve virtual characters’ behavior by making them act more realistically in social simulations. The platform is a social AI engine that applies novel solutions to various types of interactive simulations and game environments.

“Body language, personal space, gaze attraction, and territoriality are all part of our daily social life: all are part of what makes us appear mindful and attentive.”

With knowledge from psychology and sociology, we can build lifelike interactive characters who can grasp situations, react to contingencies, and appear aware of their surroundings.

New Frontiers: Socializing Software

Tactical combat is by far the leading application of character AI in today’s gaming industry. Demand for games whose storytelling goes beyond mere gunfire is pushing AI to its limits and expanding its versatility. Creating characters rich enough to engage in social scenarios, where we expect them to appear more lifelike, remains a problem game developers have yet to tackle successfully.

Standing in line without bumping into others, facing the group of peers you are chatting with, sensing another person’s comfort zone, and looking each other in the eye all require a different approach from the one used for tactical combat.

Simulating Environmental Understanding

The eight-bit gaming world of yesterday left characters helplessly running into walls and awkwardly maneuvering around obstacles. The development of path-finding and local avoidance has established techniques that have given rise to a new breed of character, one that no longer runs headfirst into walls but simulates an understanding of space. However, to achieve lifelike character behavior, an understanding of space is insufficient without an understanding of context. Certain locations may have an intended use that goes beyond mere topological shape. One solution would be to semantically tag locations, but what if the environment changes dynamically, such as in a space full of people who are moving around? Theories on human territoriality can help build a system that reasons about spatial affordance (what can be done in certain places or to certain objects), creating characters who can move about their environment more naturally.
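To illustrate this kind of territorial reasoning, the minimal Python sketch below estimates a conversational group’s shared interaction space (its o-space) from the participants’ positions and facings, and flags waypoints that would cut through it. It is only loosely inspired by the territoriality ideas discussed here; the names, parameters, and thresholds are illustrative assumptions, not Impulsion’s actual API.

    import math
    from dataclasses import dataclass

    @dataclass
    class Agent:
        x: float        # position in metres
        y: float
        facing: float   # heading in radians

    def o_space_center(group, reach=1.0):
        # Project each participant forward along its facing and average the
        # results: a rough estimate of the group's shared interaction space.
        xs = [a.x + reach * math.cos(a.facing) for a in group]
        ys = [a.y + reach * math.sin(a.facing) for a in group]
        return sum(xs) / len(xs), sum(ys) / len(ys)

    def violates_territory(point, group, reach=1.0, margin=0.3):
        # True if a planned waypoint would cut through the group's territory,
        # i.e. the dynamic social region a polite character should walk around.
        cx, cy = o_space_center(group, reach)
        radius = max(math.hypot(a.x - cx, a.y - cy) for a in group) + margin
        return math.hypot(point[0] - cx, point[1] - cy) < radius

    # Example: two characters chatting, facing each other about two metres apart.
    group = [Agent(0.0, 0.0, 0.0), Agent(2.0, 0.0, math.pi)]
    print(violates_territory((1.0, 0.2), group))  # True: cuts through the conversation
    print(violates_territory((1.0, 3.0), group))  # False: passes at a polite distance

Because the region is recomputed from the agents themselves rather than from tagged geometry, a path planner can treat it as a soft obstacle that follows people as they move, which is what makes this style of reasoning robust to dynamically changing environments.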

The embedded video demonstrates character AI with social intelligence (a mix of autonomous/dynamic and scripted behavior).

Achievements

  • Collaboration with CCP Games
  • Poster at SIGGRAPH 2012
  • Presented at Game AI Conference 2012

Collaboration

Impulsion grew from the Humanoid Agents in Social Game Environments project at CADIA where the first social engine prototype, called Populus, was built. The success of that prototype paved the way for further development. CADIA continues to research social AI for games and prototype new behaviors.

Contact

Claudio Pedica – claudio at iiim.is

Publications

  • Pedica, C. & H. Vilhjálmsson (2012). Lifelike Virtual Characters using Behavior Trees for Social Territorial Intelligence (Demo/Poster). In Proceedings of ACM SIGGRAPH 2012, Los Angeles, August 5-9.
  • Thórisson, K. R., O. Gislason, G. R. Jonsdottir & H. Th. Thorisson (2010). A Multiparty Multimodal Architecture for Realtime Turntaking. Proceedings of Intelligent Virtual Agents 2010.
  • Pedica, C. & H. Vilhjálmsson (2010). Spontaneous avatar behavior for human territoriality. Applied Artificial Intelligence, 24(6), 575–593.
  • Pedica, C., H. Vilhjálmsson & M. Lárusdóttir (2010). Avatars in conversation: the importance of simulating territorial behavior. Proceedings of the 10th International Conference on Intelligent Virtual Agents, September 20-22, Philadelphia, PA.
  • Pedica, C. & H. Vilhjálmsson (2008). Social perception and steering for online avatars. In IVA ’08: Proceedings of the 8th International Conference on Intelligent Virtual Agents, Springer-Verlag, Berlin, Heidelberg, pp. 104–116.
  • Thórisson, K. R. & G. R. Jonsdottir (2008). A Granular Architecture for Dynamic Realtime Dialogue. Proc. of Intelligent Virtual Agents (IVA), Tokyo, Japan, September 1-3.
  • Jonsdottir, G. R., J. Gratch, E. Fast & K. R. Thórisson (2007). Fluid Semantic Back-Channel Feedback in Dialogue: Challenges & Progress. Proc. of the 7th International Conference on Intelligent Virtual Agents, 154-160, September, Paris, France.

Related Publications

  • Bonaiuto, J. & K. R. Thórisson (2008). Towards a Neurocognitive Model of Realtime Turntaking in Face-to-Face Dialogue. In I. Wachsmuth, M. Lenzen, G. Knoblich (eds.), Embodied Communication in Humans And Machines. U.K.: Oxford University Press.
  • Jonsdottir, G. R., K. R. Thórisson & E. Nivel (2008). Learning Smooth, Human-Like Turntaking in Realtime Dialogue. Proc. of Intelligent Virtual Agents (IVA), Tokyo, Japan, September 1-3.
  • Thórisson, K. R. (2008). Modeling Multimodal Communication as a Complex System. In I. Wachsmuth, M. Lenzen, G. Knoblich (eds.), Springer Lecture Series in Computer Science: Modeling Communication with Robots and Virtual Humans, 143-168. New York: Springer.