There was a great session this morning on why less is more when it comes to game AI (artificial intelligence).
The session was presented by Kimberly Voll | Independent Designer/Developer, Zany T Games
As usual, I’ll start by quoting the GDC summary, followed by my input and takeaways.
“The best AI agents do not need to be brilliant, they need to be believable. As AI developers, our goal is only for players to believe behaviour is intelligent, rather than filling in all the blanks: “less is more”. I will provide concrete techniques for designing effective AI systems. I’ll talk about my process for creating Stanley, the AI in ROCKETSROCKETROCKETS, and how that can be applied to other games, as well as look at other AI examples from my own work and other games to help illustrate the techniques.” – gdconf.com
Kimberly Voll, known online as “Zany Tomato”, is all leveled up in the education department. She holds advanced degrees in computer science and human behavior. She’s been programming since the ’80s, and loves exploring human behavior so she can mirror and interact with it via game AI. Her most recent game is “RocketsRocketsRockets“, available for pre-release on Steam.
In her experience, when developing game AI, the brain of the player is as much a factor as any program logic that governs the software’s behavior. She points out that our brains assume complexity: as humans, we perceive non-human objects and beings as having human qualities. It’s natural for us to assume that the world around us is complex and thoughtful. As game developers, we can play off that.
I’ll give an example from my own experience with AI to illustrate how this works.
I developed a waypoint path system for my enemies that works on a grid. When an enemy is pursuing the player and the player turns a corner, the enemy loses the player’s position. The enemy then paths itself to the player’s last known position.
If the enemy reaches that point and regains line of sight with the player, it will continue the pursuit. Otherwise it will pause a moment and return to its normal path.
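That pursuit logic can be sketched as a tiny state machine. This is a minimal sketch in Python; the names (`Enemy`, `update`, the state strings) are my own illustration, not code from my actual game:

```python
# Hypothetical sketch of "chase the last known position" pursuit logic.
PATROL, PURSUE, SEARCH = "patrol", "pursue", "search"

class Enemy:
    def __init__(self, pos):
        self.pos = pos          # current grid cell, e.g. (x, y)
        self.state = PATROL
        self.last_known = None  # player's last seen position

    def update(self, player_pos, can_see):
        if can_see:
            # Player is visible: pursue and refresh the last known position.
            self.state = PURSUE
            self.last_known = player_pos
        elif self.state == PURSUE:
            # Just lost sight: keep pathing toward where the player was.
            self.state = SEARCH
        if self.state == SEARCH and self.pos == self.last_known:
            # Reached the last known spot and still no player: give up.
            self.state = PATROL
            self.last_known = None
        return self.state
```

Each frame the pathing code simply moves the enemy toward `last_known` while it’s in the search state; the “pause a moment of looking around” is just a timer before flipping back to patrol.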
One of the problems I encountered with this method is that multiple enemy paths will inevitably intersect. What do I do when enemies collide with each other? By default the enemies twitch in place until one is pushed aside far enough to continue. From the player’s perspective, the enemies just look broken.
My solution was simply to have the program pause one enemy’s movement when an intersection is detected. One stops to let the other move past before it continues. This solution also presents an opportunity to let the player’s brain become part of the AI, but more on that in a moment.
If I’m detecting the intersection of two enemies, then I can do whatever I want when that event occurs. Instead of instantly choosing one enemy to pause and one to continue, I pause them both for a few seconds. During this time I run an animation on the models that looks like they’re conversing. Maybe I even fire off some dialogue audio. Now it looks like they’ve stopped to chat.
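In code, the “stop and chat” resolution is only a few lines. A hypothetical sketch (the `play_animation` hook and `paused_for` timer are stand-ins for whatever your engine provides):

```python
import random

def resolve_intersection(enemy_a, enemy_b, chat_seconds=3.0):
    """When two enemy paths cross, pause both and play a 'chat'
    animation instead of letting them jitter against each other."""
    for e in (enemy_a, enemy_b):
        e.paused_for = chat_seconds   # movement suspended for the chat
        e.play_animation("chat")      # engine hook; fire dialogue audio here too
    # After the chat, pick one enemy at random to yield a moment longer,
    # so the other can pass through the shared cell first.
    yielder = random.choice((enemy_a, enemy_b))
    yielder.paused_for += 1.0
    return yielder
```

The random pick matters: if the same enemy always yielded, the repetition would eventually show through the illusion.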
All of this is very simple from the programmer’s perspective. Now let’s think about what this looks like in the mind of the player.
The player is spotted and the enemy pursues. The player turns and runs around a corner. Fearful that the enemy will find him/her, the player takes cover behind some barrels and peeks around. Just as the player suspected, the enemy turns the corner, but now can’t see the player. The player feels clever for hiding to evade. After a moment of looking around, the enemy heads back. The player decides to follow behind the enemy. The player then sees another enemy walking towards the one he/she is following. The two enemies stop and chat. “What are they chatting about?”, the player wonders. The player decides to leave the area, but in doing so, doesn’t realize they’ve been spotted. The two enemies then come around a corner to surprise, and defeat, the player.
From the player’s perspective, something very real just happened. Those enemies got together to figure out the player’s position, and then formed a plan.
Nope, not really. It was just a couple of clever lines of code. The enemies pursue the player, or the player’s last known position if line-of-sight is lost. The enemies also pause and play a talking animation before continuing when intersecting each other. There you go. Every Tom Clancy game you’ve ever played. 😉 (I do love TC games)
Bringing this conversation full circle, Kimberly did a great job of communicating the use of simple AI. Here is a checklist she shared as a guide for building AI. The key is to start simple, and only build it smarter if it adds to the player experience.
Here’s the list:
- Watch people play
- How would they expect another human to play/not play
- Start simple
- Identify appropriate/inappropriate behaviors for the AI to have
- Randomize the intelligence
- If something was awesome once, is it awesome twice?
- Hide AI repetition in natural repetition
- Once AI does something stupid it’s hard for the player to get over
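“Randomize the intelligence” can be as cheap as jittering reaction times, so identical logic doesn’t read as robotic repetition. A hypothetical helper (the function name and numbers are my own, not from the talk):

```python
import random

def reaction_delay(base=0.4, jitter=0.3):
    """Return a small randomized delay (in seconds) before an enemy
    reacts to a stimulus, e.g. spotting the player. Varying this per
    event hides the fact that every enemy runs the same logic."""
    return base + random.uniform(0.0, jitter)
```

A fixed 0.4-second reaction is a pattern players will learn; a 0.4–0.7-second reaction feels like individual enemies noticing at their own pace.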
I’ll end with a comment on that last point. If a game developer spends intense time and brainpower crafting the most intelligent response an AI can have to a particular situation, but it breaks in another, the illusion is lost. The more complex the AI, the more likely it is to misbehave in some specific situation. From that point forward the player will think your game is dumb, because they’ve had an inconsistent experience with its AI.
Thanks for reading, and keep following along for more posts from GDC 2015!