NVIDIA introduced production microservices for the NVIDIA Avatar Cloud Engine (ACE) that allow developers of games, tools and middleware to integrate state-of-the-art generative AI models into digital avatars in their games and applications.
The new ACE microservices let developers build interactive avatars using AI models such as NVIDIA Audio2Face (A2F), which creates expressive facial animations from audio sources, and NVIDIA Riva Automatic Speech Recognition (ASR), for building customisable multilingual speech and translation applications using generative AI.
Developers embracing ACE include Charisma.AI, Convai, Inworld, miHoYo, NetEase Games, Ourpalm, Tencent, Ubisoft and UneeQ.
“Generative AI technologies are transforming virtually everything we do, and that also includes game creation and gameplay,” said Keita Iida, vice president of developer relations at NVIDIA. “NVIDIA ACE opens up new possibilities for game developers by populating their worlds with lifelike digital characters while removing the need for pre-scripted dialogue, delivering greater in-game immersion.”
Top Game and Interactive Avatar Developers Embrace NVIDIA ACE
Top game and interactive avatar developers are pioneering ways ACE and generative AI technologies can be used to transform interactions between players and non-playable characters (NPCs) in games and applications.
“This is a milestone moment for AI in games,” said Tencent Games. “NVIDIA ACE and Tencent Games will help lay the foundation that will bring digital avatars with individual, lifelike personalities and interactions to video games.”
NVIDIA ACE Brings Game Characters to Life
NPCs have historically been designed with predetermined responses and facial animations. As a result, player interactions with them tended to be transactional and short-lived, and were skipped by a majority of players.
“Generative AI-powered characters in virtual worlds unlock various use cases and experiences that were previously impossible,” said Purnendu Mukherjee, founder and CEO at Convai. “Convai is leveraging Riva ASR and A2F to enable lifelike NPCs with low-latency response times and high-fidelity natural animation.”
To showcase how ACE can transform NPC interactions, NVIDIA worked with Convai to expand the NVIDIA Kairos demo, which debuted at Computex, adding a number of new features and incorporating ACE microservices.
In the latest version of Kairos, Riva ASR and A2F are used extensively, improving NPC interactivity. Convai’s new framework now allows NPCs to converse among themselves and gives them awareness of objects, enabling them to pick up and deliver items to desired areas. Furthermore, NPCs gain the ability to lead players to objectives and traverse worlds.
The Audio2Face and Riva Automatic Speech Recognition microservices are available now. Interactive avatar developers can incorporate the models individually into their development pipelines.