TwinHits’ comment on my previous post perhaps had an unstated question behind it: isn’t it time to stop tearing down other people’s AI tests and come up with something creative? And the answer would be, yes. . .in a minute.
Just to be clear, I think that both the kind of Turing Test represented by the Loebner Prize and the Botprize started out on the right track but are confusing two issues: one of them useful, one of them less so. Turing’s original question was twofold: can a machine think? And if so, how would we know? His hypothesis was that if a machine could offer responses that were indistinguishable from those of a human, it could be said to think. Since then, most Turing Tests, such as the prize contests I have discussed, have taken it as axiomatic that these two things are connected. However, I don’t think they necessarily are. Moreover, the emphasis in these tests is always on the AI itself, on its level of intelligence. Which seems to be the blindingly obvious point of the whole exercise. But so far the blindingly obvious isn’t yielding very interesting results.
The “intelligence” issue may be interesting from the pure research perspective–although as I pointed out, if you were really interested in evaluating that aspect the entire contest would be run in a less tritely comparative fashion. The end result is that these prizes now come across as little more than cheerleading exercises for our own human awesomeness: we are so smart, creative, flexible, adaptable, sophisticated and cunning that no AI can yet fool us. Even if by some miracle an AI did manage to fool people sufficiently to pass one of these tests, that still wouldn’t be that useful a result for the rest of us. By and large people do not interact with machines, much less their overall environment, in the form of a rigidly controlled testing procedure. Therefore gamers and game designers should be interested in the other side of this question: not the capabilities of the AI itself, but the ability of the AI to fool us. In other words, less emphasis on the intelligence, more on the artificial (hence the name of this blog!).
The ability of the AI to fool us into thinking that it possesses some human characteristics is going to be based in part upon the inherent capabilities of the AI, naturally. But my argument is that it has much more to do, ultimately, with the design of the environment in which the AI is to operate, and the corresponding latitude afforded the player. Especially for gaming purposes what counts is not how smart the AI really is but how smart it appears to be. That perception is heavily shaped by the context of the game in general and that of the player in particular. It is possible for a well-designed game to “fool” the player into feeling as if they have encountered some smart AI even if the pure technical capabilities of the AI may be relatively rudimentary.
At this point, then, it might be useful to start identifying some examples of AI design strategies in games that are either particularly bad or particularly effective. I’m going to start with a couple of my favorites; you’ll notice that in several of these examples some of the good and bad aspects I’m describing can be attributed to game design issues in general as much as the AI design specifically–but that’s my point.
Dumb Difficulty Substitutes for Smarts: One of my least favorite approaches to creating a challenging game is where the “difficulty” settings simply boost the stats of the AI (and/or correspondingly reduce those of the player). So the enemy doesn’t become smarter; it just becomes physically tougher and more accurate, its weapons deal more damage, and/or there are more of them. This approach has been around virtually as long as electronic games themselves, and in small doses and in an appropriate environment it can provide a fun challenge. But it has become the unimaginative default for game designers. And sometimes it can be so badly implemented that it produces some unexpectedly hilarious side-effects that completely destroy a player’s immersion in the game.
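As a minimal sketch of the pattern (in Python, with entirely hypothetical names and numbers, not drawn from any actual game’s code): the difficulty setting multiplies the stats, while the decision-making logic is identical at every level.

```python
# Hypothetical illustration of "dumb difficulty": only the numbers
# change with the setting; the AI's decision logic never does.
DIFFICULTY_MULTIPLIERS = {
    "easy":   {"health": 0.75, "damage": 0.75, "accuracy": 0.60},
    "normal": {"health": 1.00, "damage": 1.00, "accuracy": 0.75},
    "hard":   {"health": 1.50, "damage": 1.50, "accuracy": 0.90},
}

class Enemy:
    def __init__(self, difficulty: str):
        m = DIFFICULTY_MULTIPLIERS[difficulty]
        self.health = 100 * m["health"]      # tougher...
        self.damage = 20 * m["damage"]       # ...hits harder...
        self.accuracy = m["accuracy"]        # ...more accurate.

    def choose_action(self, player_visible: bool) -> str:
        # Identical at every difficulty: the enemy never gets
        # "smarter", only harder to kill.
        return "shoot" if player_visible else "patrol"
```

The tell is that `choose_action` takes no notice of the difficulty at all; everything the player experiences as “harder” is a stat multiplier.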
One example is Call of Duty: World at War. As with all the other titles in the series, the game features painstaking attention to historical detail as it recreates WWII battlefields in both the Pacific and on the Eastern Front. The level design, sound design, and weaponry all make for a pretty immersive experience. . .until you crank it up to the highest difficulty level. The actual quality and tactics of the AI don’t change significantly at any difficulty level. But at the highest level, the Japanese AI suddenly starts up a passionate love affair with the grenade.
Picture this. A scenario has you as a marine fighting to clear a Japanese-held island late in the war. The defenders have been bombed, shelled, and strafed to buggery. Their supply lines have been cut, they’ve been reduced to eating rodents and the more substantial examples of the local insect population. They are so low on ammunition that they frequently resort to Banzai charges. But crank the difficulty up to max, and suddenly they discover a bottomless supply crate of grenades. Which they proceed to rain down on you with all the precision and frequency of an automated launcher. My “Sod this for a joke” moment came when I watched no fewer than six grenades land in a neat circle around me, after I had dodged no fewer than ten in the previous two minutes. This is not “creating a challenge”; it is covering up the limited abilities of your AI, and it is a fundamentally unimaginative game design strategy (see “Change the Player,” below).
The One-Trick Pony: This is where AI entities have a single mode of operation that they deploy every time you meet them. Sometimes this mode is shared by many entities that are supposed to be functionally distinct. This again tends to be a game design default, but the example that springs immediately to mind is Doom 3. Yes, I know id Software is responsible for all this: Wolfenstein 3D and Doom helped to get me hooked on the whole games thing in the first place. And yes, I know their focus is really on creating great multiplayer combat. But their core design strategy has never changed since: put all your effort into high-end graphics at the expense of narrative and challenging AI. Doom 3 is no exception.
While the game does a great job riffing on (or ripping off, depending on your point of view) the original Doom, the AI is amongst the least challenging and least interesting out there. Most AI entities pretty much have one form of attack and one attack only. After the 300th time that an imp jumped out from the shadows with blinding superhuman speed and then simply stood there ripping at me while I poured lead into it, I recognized the game for what it was. Not smart. Not challenging. Simply a chew toy for the slavering OCD crowd. Never finished it. Glad I didn’t pay too much for it (thank you Steam!).
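To make the pattern concrete, here is a sketch in Python (hypothetical names throughout, not based on id Software’s actual code) of a one-trick enemy next to a modest alternative, where each enemy type draws from a weighted repertoire so repeated encounters at least vary:

```python
import random

# The "one-trick pony": each enemy type owns exactly one attack,
# so the 1st and the 300th encounter play out identically.
ONE_TRICK = {
    "imp": "lunge_and_claw",
    "zombie": "shamble_and_swipe",
}

def one_trick_attack(kind: str) -> str:
    return ONE_TRICK[kind]

# A modest improvement: a weighted repertoire per enemy type, so
# repeated encounters produce different behavior.
REPERTOIRE = {
    "imp": [("lunge_and_claw", 0.5),
            ("fireball", 0.3),
            ("retreat_to_shadow", 0.2)],
}

def varied_attack(kind: str, rng: random.Random) -> str:
    moves, weights = zip(*REPERTOIRE[kind])
    return rng.choices(moves, weights=weights, k=1)[0]
```

Even this tiny change doesn’t make the AI smarter in any real sense, but it attacks the predictability that makes the one-trick pony so deadening.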
Limit the opportunities for the AI to be stupid: One of the most satisfying game AIs I’ve encountered was that in the original F.E.A.R. The firefights always felt tense and organic, with the enemy soldiers maneuvering to try and get better shots on me, panicking when I had killed too many of them, taking cover and refusing to come out, using the odd grenade at a strategically appropriate moment. But the genius of the game was that the AI was made to appear smarter by being given a very constrained environment in which to operate. Most of the combat in the game takes place indoors in very close quarters, with limited sight lines. First of all, the AI may have been behaving in some pretty questionable ways, for all I know. While I was cowering behind a piece of furniture they may have been spending all their time running headlong into walls or playing “pull my finger.” But the limited view also limited the chance that you would actually witness the kind of AI behavior that might crack the immersion for you. This strategy was enhanced by the fact that so much of your environment could be destroyed; even if you could get a clear line of sight to your enemy, your view was often filled with smoke, plaster dust and swirling pieces of destroyed furniture. Secondly, the tight quarters mean, in effect, fewer opportunities for the AI to screw up. Not that I want to pretend that issues like pathfinding in a confined space don’t pose a significant challenge. But this is one game where a very nice balance of AI, environmental factors, and limited player abilities all coincide to make the AI appear relatively smart.
Unfortunately, Monolith moved away from this design in subsequent games and resorted to the Dumb Difficulty move (see above); enemy AI didn’t change in the sequels, they just became more accurate and more resilient: not smarter, just tougher.
Change the Player not the AI: What are some alternatives to the Dumb Difficulty move? While that move is unimaginative, it is responding to a couple of real issues: the need to promote replayability and players’ desire to challenge themselves. An effective response to this dilemma that doesn’t simply involve turning your AI into super-soldiers is a key part of the Thief series of games. In these games, changing the difficulty level doesn’t change your enemies at all; it changes the nature of the tasks you have to accomplish in each level and the way you must accomplish them. The amount of loot you need to steal goes up, additional objectives are added, and at the highest difficulty level you are not allowed to kill any NPCs and sometimes you aren’t even allowed to be seen by anyone. Voila. The game is now challenging and you haven’t had to resort to populating your entire game with walking tanks masquerading as humans. This is a great example of how your perception of the game’s difficulty, and even of the enemies you meet, can be shaped by a change in the way you are forced to relate to your entire environment.
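The approach can be sketched as a difficulty table that rewrites the player’s objectives and constraints while leaving the enemies alone (Python, with hypothetical rule names and loot values loosely inspired by, but not taken from, the Thief games):

```python
# Hypothetical sketch of Thief-style difficulty: the setting changes
# the player's goals and constraints, never the enemies' stats or AI.
DIFFICULTY_RULES = {
    "normal": {"loot_goal": 500,  "extra_objectives": set(),
               "no_kills": False, "no_detection": False},
    "hard":   {"loot_goal": 1000, "extra_objectives": {"find_map"},
               "no_kills": False, "no_detection": False},
    "expert": {"loot_goal": 1500, "extra_objectives": {"find_map", "rescue_prisoner"},
               "no_kills": True,  "no_detection": True},
}

def mission_complete(rules, loot: int, objectives_done: set,
                     kills: int, times_seen: int) -> bool:
    """Check success against the rules for the chosen difficulty."""
    if loot < rules["loot_goal"]:
        return False
    if not rules["extra_objectives"] <= objectives_done:
        return False
    if rules["no_kills"] and kills > 0:
        return False            # ghosting constraint violated
    if rules["no_detection"] and times_seen > 0:
        return False
    return True
```

Note that nothing in this table touches the guards: the same patrol routes and senses confront the player at every setting, but on expert the player must relate to them entirely differently.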
I’m interested in other ideas people might have for smart or stupid AI and/or design strategies in games.