The Future Of AI For Games

The intersection between using AI to assist game development and using AI to power interactions during play.


Earlier this month, I had the pleasure of attending the inaugural AI and Games Conference at Goldsmiths in London, for which Arm was an associate sponsor.

Hosted by Dr. Tommy Thompson, and borrowing its name from his AI and Games YouTube channel, the day really delivered on the promise of bringing experts and enthusiasts (and subscribers) together for interesting talks on the intersection of artificial intelligence and gaming as well as networking opportunities.

In terms of speakers, the two tracks on offer were roughly split between AI for game development and AI within games. Talks covered a range of techniques: from more classical AI, like Hierarchical Task Network (HTN) Planning and Monte Carlo Tree Search (MCTS), to generative AI with Large Language Models (LLMs) like the ones behind ChatGPT.
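To give a flavour of the more classical side, here is a minimal sketch of the UCB1 selection rule that sits at the heart of MCTS. It is purely illustrative and not taken from any of the talks; the Node structure and the exploration constant are my own assumptions.

```python
import math

# Minimal, illustrative MCTS node. The fields are my own assumptions.
class Node:
    def __init__(self, move=None):
        self.move = move          # the game move this node represents
        self.visits = 0           # how many times this node was explored
        self.total_reward = 0.0   # sum of simulation results through this node
        self.children = []        # expanded child nodes

def ucb1(child, parent_visits, c=1.41):
    """Balance exploitation (average reward) against exploration."""
    if child.visits == 0:
        return float("inf")  # always try unvisited children first
    exploit = child.total_reward / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def select_child(parent):
    """Pick the child that maximises the UCB1 score."""
    return max(parent.children, key=lambda ch: ucb1(ch, parent.visits))

if __name__ == "__main__":
    root = Node()
    root.visits = 9
    for move, reward in [("jump", 1.0), ("shoot", 2.0), ("hide", 1.5)]:
        child = Node(move=move)
        child.visits = 3
        child.total_reward = reward
        root.children.append(child)
    print(select_child(root).move)  # "shoot" has the best score here
```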

If you were not at the sold-out event and feel like you missed out on some great content, you will be pleased to know that recordings will be appearing on the YouTube channel over the coming months.

In the meantime, I would like to use this blog post to highlight and explore some of the key points raised in the panel discussion entitled “The Future of AI For Games.”

Sharing the microphone for this discussion (pictured in the photo from left to right, with Dr. Tommy Thompson at the end) were:

The relationship between emerging AI tools and creators

With Unity’s users being predominantly game developers, Martina remarked that one of the company’s goals is to lower the hurdles of complexity and the difficulty of conveying a creative idea through technology.

She shared how working with the Unity Muse assistant really helps with this, and that at times it can feel “like an actual relationship where you go through ups and downs.” As AI advances over the longer term, we can expect “more of a hand-holding and warm cuddly relationship.”

Metaphorically, at first, but perhaps one day literally?

Protecting our players

When creating and testing toxicity detection filters, Andrei suggested we could use AI to generate toxic content instead of waiting for actual toxicity to happen.

I personally think that toxic tanker has sailed and there is plenty of undesirable content out there already, but I do like the idea of using AI for generating test inputs to get more coverage.
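As a very rough sketch of that idea, the snippet below generates synthetic toxic-looking test messages and measures how many a filter catches. Both generate_test_messages and the keyword-based toxicity_filter are placeholders I have invented for illustration; in practice the generator would be an LLM prompted to produce adversarial phrasings, and the filter would be whatever moderation model you actually ship.

```python
import random

# Placeholder generator: in practice this would be an LLM prompted to
# produce adversarial, obfuscated, or slang-heavy toxic phrasings.
TEMPLATES = ["you are such a {}", "go away you {}", "{}!!!", "what a {} move"]
INSULTS = ["n00b", "l0ser", "tr@sh", "bot"]

def generate_test_messages(n, seed=0):
    rng = random.Random(seed)
    return [rng.choice(TEMPLATES).format(rng.choice(INSULTS)) for _ in range(n)]

# Placeholder filter: stands in for the real moderation model under test.
BLOCKLIST = {"noob", "loser", "trash", "bot"}

def toxicity_filter(message):
    # Naive normalisation to catch simple character substitutions.
    normalised = message.lower().replace("0", "o").replace("@", "a")
    return any(word in normalised for word in BLOCKLIST)

if __name__ == "__main__":
    messages = generate_test_messages(200)
    caught = sum(toxicity_filter(m) for m in messages)
    print(f"filter caught {caught}/{len(messages)} synthetic toxic messages")
```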

Yassine raised an interesting point about generating animation from user-captured video: a particular movement might be deemed offensive in some cultures but not others, making it hard to moderate with algorithms alone. He also noted that some games deliberately want to let players do whatever they want.

Martina has been thinking about this too and gave the example of someone building a first-person shooter who wants to have a bomb in the game. Should the assistant help build that, complete with instructions for how to do it? At that point we move from a question of player safety to one of safety for the wider public.

The intersection between AI for game development and AI in games

Verbal interaction (with voice) is something Roberto sees as very important, both during development and when interacting within the game. This could mean giving instructions and words of encouragement to the assistant mentioned earlier, or it could mean speaking with NPCs. In both situations, the AI could be running on the device. And making games within games has been a thing for decades.

Who else can remember making their own courses in Kikstart 2 on the Spectrum in the eighties? I think it would have been awesome to just tell the game what to build for me. Add a steep up-ramp there, some flames in the gap, then a telephone box to land on and then perhaps make my own revving sounds to control the speed of the bike when taking on my creation.

Seriously though, the more we want to do in-game, and the more we want AI involved in doing it, the more we will need the option for that processing to happen at the edge (on-device) rather than in the cloud. Conversations do not feel very natural when there is a delay waiting for a server to respond, and there are potentially massive cost implications for game developers or publishers. By letting users do the processing on their own devices, you shift the burden to them (and their battery or charging cable), but you also gain the possibility of fluent interaction and more privacy.

That is not to say all processing should happen there, of course. In the words of Andrei, we need to think about “how much resolution do I actually need” and switch between device, server, or even somewhere in between, to access different-sized models, for example.
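To make that “device or server” switch concrete, here is a minimal sketch of a router that picks a small on-device model for short, latency-sensitive requests and falls back to a larger cloud model otherwise. The thresholds and the run_on_device / run_in_cloud functions are assumptions of mine, standing in for whatever inference backends a real game would use.

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    latency_budget_ms: int    # how quickly the player needs a response
    needs_long_context: bool  # e.g. quest history or long dialogue memory

# Placeholder backends: stand-ins for a real on-device runtime and a cloud API.
def run_on_device(prompt: str) -> str:
    return f"[small local model] {prompt[:40]}..."

def run_in_cloud(prompt: str) -> str:
    return f"[large cloud model] {prompt[:40]}..."

def route(request: Request) -> str:
    """Prefer the edge for quick, simple exchanges; use the cloud for the rest."""
    if request.latency_budget_ms < 300 and not request.needs_long_context:
        return run_on_device(request.prompt)
    return run_in_cloud(request.prompt)

if __name__ == "__main__":
    bark = Request("Guard reacts to the player sprinting past", 150, False)
    lore = Request("Summarise the last three quests for this NPC", 2000, True)
    print(route(bark))  # handled on-device for low latency
    print(route(lore))  # sent to the cloud for the bigger model
```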

Summary

I hope this post has given you a taste of the topics that were discussed, and that you are as encouraged as I am by the growing interest from game engines and cloud providers in bringing GenAI to the game development pipeline and into games themselves.

I’m already looking forward to the return of this conference next year and seeing how far things have progressed!

In the meantime, I invite you to join the conversation and share your thoughts on the future of AI in gaming. How do you see AI shaping the games of tomorrow? What challenges and opportunities do you foresee?

