Situational awareness is a military term that also comes up a lot in discussions of combat tactics and in various military gaming simulations (my experience with the term comes from many years playing online aerial combat simulations). The term encompasses the information processing demands faced by individuals and groups as they participate in complex, stressful, and rapidly changing combat situations. Essentially, SA involves treating the battlefield (both virtual and actual) as a dynamic environment where dispositional assessment (what is the location and status of friendly and enemy units?) is coupled with short-term and long-term predictive behavior (where will friendly and enemy units be in 30 seconds? What is that enemy likely to do if I perform this action? What are my escape routes?). SA is especially challenging in a flight context, which requires players to act and extend their attention and anticipation through three dimensions. Furthermore, the speed of aerial combat, even in propeller-driven aircraft, ensures that the change-of-state forcing ongoing updates to a pilot’s SA can be highly variable. For two aircraft engaged in the same plane (turning to try to get on one another’s tail, for example) the fight may unfold almost in slow motion. The situation with two aircraft going head to head at over 600 scale miles per hour is very different. Moreover, a fight is usually a mix of these states: a pilot may be forced to concentrate on the slow-motion fight in which they are engaged while other aircraft around them present a dizzying and rapidly changing set of possible threats and opportunities.
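To make the two halves of SA concrete, here is a minimal sketch in Python of dispositional assessment coupled with naive dead-reckoning prediction. Every name here (Contact, predict_position, closing_threat) and the 30-second horizon are my own illustrative assumptions, not taken from any actual simulator:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """Dispositional assessment: where a unit is and what it is doing now."""
    friendly: bool
    position: tuple  # (x, y, altitude) in scale miles
    velocity: tuple  # (vx, vy, v_alt) in scale miles per second

def predict_position(c: Contact, seconds: float) -> tuple:
    """Predictive half of SA: naive dead reckoning. Assume the contact
    holds its current course and speed for the next few seconds."""
    return tuple(p + v * seconds for p, v in zip(c.position, c.velocity))

def closing_threat(own: Contact, other: Contact, horizon: float = 30.0) -> bool:
    """Will this contact be closer to us at the horizon than it is now?
    A crude stand-in for 'where will that enemy be in 30 seconds?'"""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    now = dist(own.position, other.position)
    later = dist(predict_position(own, horizon), predict_position(other, horizon))
    return (not other.friendly) and later < now
```

Real pilots (and good sim players) obviously do something far richer than dead reckoning, which is exactly why keeping this loop updated under fire is so demanding.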
In a gaming context SA is intimately involved with a player’s perception of the sophistication and complexity of the game’s AI. SA is both predictive and iterative (requiring a repetitive process of evaluation and re-evaluation): the more an AI frustrates that predictive process, in that sense, the greater the chance that the player will attribute human-like characteristics to it. If, on the other hand, AI enemies prove to be utterly predictable in their individual actions, and reliably similar in their overall patterns of behavior (i.e. amenable to “rinse and repeat” kill strategies), then the AI will be perceived as artificial, true, but also dumb. Therefore, one of the most important and influential ways of fooling players into attributing complex, human-like behavior to AI entities is to influence their perception of the environment.
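The difference is easy to see in code. Below is a hypothetical sketch (the maneuver names and weights are invented for illustration) contrasting an evasion routine the player can solve once with one that denies them a single reusable counter:

```python
import random

def predictable_evade(threat_bearing: float) -> str:
    # "Rinse and repeat": the AI answers every threat the same way,
    # so the player only has to solve it once.
    return "break_left"

def varied_evade(threat_bearing: float, altitude: float) -> str:
    # Context-weighted randomness denies the player a single reusable
    # counter. Maneuvers and weights are made up for this example.
    options = {"break_left": 3, "break_right": 3, "dive": 2, "climb": 1}
    if altitude < 2000:  # too low to trade altitude for speed
        del options["dive"]
    moves, weights = zip(*options.items())
    return random.choices(moves, weights=weights)[0]
```

Neither routine is remotely intelligent; the second merely resists the player's iterative re-evaluation long enough to feel like it might be.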
To put it simply, attributing a human-like complexity to an AI opponent depends not simply upon the behavior of the AI itself, but upon the player’s perception of the complexity of their environment. I have pointed out in other posts that there has been a lot of bullshit distributed quite liberally concerning the supposedly precious attributes of human intelligence. However, this same issue of environmental complexity also plays into our perception of human intelligence more than we might think (or like to acknowledge). When our interactions with our fellow human beings aren’t being shaped by a massive set of assumptions, our perception of the nature and sophistication of their communication is largely an information processing challenge. To be sure, the amount of information we get when interacting with a fellow human being is extensive, and much of it is processed without our necessarily being aware of it: body posture, gestures big and small, facial expressions, even less tangible elements such as smell, tone, the tone behind someone’s tone, and so on.
It is worth considering how individuals get away with some form of trickery in situations of either face-to-face or (perhaps more useful for our purposes) mediated interaction: outright lying, misdirection, strategic omission, and all the rest. The chances of putting something over on an individual are greatly increased if that individual’s ability to assimilate information is impaired. This impairment can happen if a) the individual is overloaded with the sheer quantity of information, b) they are receiving information via multiple channels, each of which requires its own set of analytical and response procedures (the multi-tasking phenomenon), or c) the amount of information is so limited that the individual is forced to act largely out of ignorance. Politicians exploit one or indeed all of these strategies all the time. Little kids learn these kinds of manipulative strategies in the womb. (Come on, you must have figured out pretty early on that you had a better chance of being allowed to do something a little sketchy if you asked one of your parents while they were distracted by other things, or if you happened to leave out a few crucial details: “I just want to go over to Steve’s place and meet some friends. . . one of whom will have brought his parents’ Ferrari and is going to drive us all up to Santa Barbara where there is this kick ass Rave/DIY tattoo party going down.”)
To be sure, there are other factors that play into the likelihood that someone will be able to deceive someone else. The deceiver could be, well, just really skilled at the dark arts of deception. The person at whom the deception is aimed could be a moron. From a game design point of view, however, we can’t consider either of these factors as having any bearing on the matter. As I’ve argued in previous posts, the quest to create authentically humanly intelligent AI has been a quixotic failure to this point, and will probably remain so for the foreseeable future. Nor do good games generally result from the assumption that the player is a moron. It may well be a statistically safe assumption, given how generously stupidity is distributed in the general population, but if you design a game around that belief it tends to produce really bad games. Or a bestseller by id software. Whichever. So what we’re left with, from a game design point of view, is manipulating the environmental information available to the player. Make no mistake, the goal here is deception. Designers are trying to perpetrate a (healthy) deception on players by getting them to accept a radically limited AI capability as a sophisticated, adaptable, capable, almost human-like opponent. It is a complex game of “pay no attention to the man behind the curtain” where success depends on keeping the player too busy to part the curtain, or unable to locate the curtain in the first place.
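As a closing sketch, here is one hypothetical way the three impairments from earlier (overload, multi-channel splitting, scarcity) could become design knobs that shape what the player’s sensors report, without making the AI itself one bit smarter. The function and field names are invented for illustration:

```python
import random

def report_contacts(true_contacts: list, mode: str):
    """Shape the player's picture of the world without touching the AI.

    true_contacts: ground-truth records, e.g. {"id": "bandit_1", "range": 3.2}.
    Everything here is a toy model, not any engine's real API."""
    if not true_contacts:
        return true_contacts
    if mode == "overload":
        # (a) Swamp the player: pad real contacts with low-confidence ghosts.
        ghosts = [{"id": f"unknown_{i}", "confidence": 0.3} for i in range(8)]
        return true_contacts + ghosts
    if mode == "multichannel":
        # (b) Split the picture across channels that each demand their own
        # analytical and response procedures.
        return {
            "radar": [c for c in true_contacts if c["range"] > 5],
            "visual": [c for c in true_contacts if c["range"] <= 5],
            "radio": ["Bandits, two o'clock high!"],
        }
    if mode == "scarcity":
        # (c) Starve the player: report only a random subset of the truth.
        return random.sample(true_contacts, k=max(1, len(true_contacts) // 3))
    return true_contacts
```

Any one of these knobs, tuned well, keeps the player too busy (or too under-informed) to go looking for the curtain at all.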