Wednesday, February 17, 2010

Common Ecology DOES NOT Quantify Human Insurgency

Lately much ado has been made of the findings of Sean Gourley and his crew regarding the power law relationships they have found in insurgency-based conflict. For some quick background, go here: http://seangourley.com/ and watch the 7-minute TED video.

Let me be frank. This is another prime example of academics armed with mathematical and statistical techniques running amok with statistical inference and a naïve belief that it can predict the future.

First, let’s get some perspective. The discovery of power law relationships in conflict is not new. Lewis Fry Richardson discovered a power law relationship between intensity of conflict and the frequency of its occurrence as early as the 1940s. That discovery has been a result in search of a theory ever since. So far, no one has found a satisfying explanation for why the relationship exists, but it has continued to be one of the most robust findings in conflict literature.

Along come Gourley et al., and suddenly the finding is new again. His group applied the idea to insurgencies to see if the relationship holds there as well, and sure enough, it does. They then took the research a little further down the field and discovered that a slope coefficient of -2.5 seems to hold as a common value across all tested insurgencies. On its own, this is an interesting finding.
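For those who want to see the mechanics, here is a minimal sketch of how such an exponent gets estimated. Everything below is an assumption for illustration: the data are synthetic draws from a power law, and the estimators are textbook methods, not Gourley's actual pipeline.

```python
import numpy as np

# Synthetic stand-in for conflict data: casualties per attack, drawn from
# a power law with exponent 2.5 (the value Gourley's group reports).
# Illustrative only; the real analyses use reported event datasets.
rng = np.random.default_rng(0)
alpha, x_min = 2.5, 1.0
u = rng.uniform(size=10_000)
sizes = x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))  # inverse-CDF sampling

# Method 1: log-log regression on a log-binned histogram. Common in the
# literature, though known to be a biased estimator.
bins = np.logspace(0.0, np.log10(sizes.max()), 30)
density, edges = np.histogram(sizes, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])            # geometric bin centers
keep = density > 0
slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(density[keep]), 1)
print(f"log-log regression slope: {slope:.2f}")      # should land near -2.5

# Method 2: maximum-likelihood (Hill) estimate, the statistically
# preferred approach.
alpha_hat = 1.0 + len(sizes) / np.sum(np.log(sizes / x_min))
print(f"MLE exponent estimate: -{alpha_hat:.2f}")
```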

Wired magazine has published some criticisms of the findings of Gourley’s group, and these criticisms center primarily on the quality of the data they used. I don’t find these criticisms to be particularly insightful, mainly because just about any dataset can accurately be subjected to the same criticism. In the vernacular, it’s all crap, but it’s the crap that we have. To really indict the data, one would have to demonstrate that it has a particular bias one way or the other, and that is a challenging task.

No, where Gourley and crew fly off the rails is in the inferences they make from the finding. On the website linked above, have a look at the 14 key features that define a successful insurgency. You don’t really have to read past the first one to see that the train derailed itself before it even left the station. Can you say Mao? How about the Tamil Tigers? Shining Path? The “Many-body” feature is an exception to the history of insurgency, not a feature of it.

This sort of inference exemplifies the danger of completely decontextualizing the math from the reality. But it also amply demonstrates the weakness of using descriptive tools to try to predict the future, as so far all of the predictions this group has made have failed to pan out (see the video for an admission thereof).

Power law relationships are descriptive, not causal. They don’t tell us anything other than what an equilibrium condition may look like. And that’s really the strength of the work that Gourley has done. If the -2.5 slope coefficient truly is a robust finding, it can provide us with a metric against which we can judge the success or failure of particular policy actions. It can also serve as a reality check for game or simulation runs, provided we keep in mind the descriptive nature of the math.
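As a sketch of what such a reality check might look like, consider the following. The function name, tolerance, and cutoff are my own assumptions, not anything Gourley's group has published:

```python
import numpy as np

def insurgency_exponent_check(event_sizes, target=2.5, tol=0.3, x_min=1.0):
    """Descriptive sanity check for a simulation run: estimate the
    power-law exponent of attack sizes (Hill/MLE estimator) and flag
    whether it falls near the -2.5 benchmark. Passing says the run's
    size distribution looks plausible; it says nothing about whether
    the causal mechanics inside the simulation are right."""
    x = np.asarray(event_sizes, dtype=float)
    x = x[x >= x_min]            # the exponent is only defined above a cutoff
    alpha_hat = 1.0 + len(x) / np.sum(np.log(x / x_min))
    return alpha_hat, abs(alpha_hat - target) <= tol

# Usage against one run's output (hypothetical variable name):
# alpha_hat, plausible = insurgency_exponent_check(casualties_per_event)
```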

If we can take findings like this one and then contextualize them in terms of other models such as Violent System Theory or other constructs, we might make some headway in understanding how we can interdict a hostile environment successfully. But the inferences drawn by Gourley and his cohorts are not only wrong but dangerous, as they stand a good chance of getting American soldiers killed if improperly applied in reality.

Social science academia needs a good dose of humility concerning its own evaluation of the usefulness of mathematics and quantitative tools where human behavior is concerned. If academics like Gourley continue to be taken at their word without frequent and lethal doses of skepticism about the applicability of the tools used to draw inferences, the lesson in humility will be learned at very high cost in human lives.

Tuesday, February 2, 2010

Just What are Wargames Good For?

Recently I attended a roundtable discussion on wargaming at one of our national war colleges. During the discussion, a distinguished practitioner of our art mentioned his conviction that wargames were, in fact, good predictive tools. This comment was quite controversial, and it ought to be. Not just in wargaming circles but in the OR world in general, much ado is made about the ability to predict the future. The notion is cast in various terms and syntaxes, most frequently masquerading as anticipatory analysis or behavior.

What’s more, the ability to predict the future is a stated goal of many federal business opportunities (see almost any recent SBIR or STTR solicitation), not to mention various programs already in place in the armed forces (for instance, see Air Force Research Lab’s Focused Long-Term Challenges). As a result, much effort and expense is being put into the notion that somehow there must exist some way to predict what our enemies are going to do, and thus be able to circumvent their actions. Oh what a tangled web we weave.

When we look to both qualitative and quantitative points of view and techniques for insight into how to anticipate the behaviors of adversaries, the level of complexity rapidly outstrips our capacity to account for it. Simplifications usually rely on the description of trends or on the subjectivity of the subject matter expert. The critical assumption we have taken for granted is that in order to understand what our adversary is going to do, we must understand his culture, his motivations, his environmental influences, and so forth. What we find with this approach is that the problem rapidly becomes intractable.

There are two governing issues. The first I call faith in the one-to-one map; the second is the fallacy of classical determinism. Faith in the one-to-one map is simply the belief that the closer a model gets to reality, ostensibly through the inclusion of as many governing variables and interactions as possible, the more accurate its predictions will be. In truth, that correlation rarely holds, and in practice the approach is simply ridiculous: the amount and accuracy of data required to make it feasible doesn’t exist, and is unlikely ever to.

But even if we were able to gather all the necessary data accurately and correctly put together all of the interactions in the system, and we could then run experiments with our one-to-one mapping of the world, we still would not be able to accurately predict adversarial behaviors. Why? Because the underlying assumption of the approach is that the universe behaves according to the tenets of classical determinism. And the problem with classical determinism is a very simple one: it assumes away random evolutionary variation and the existence of creativity. It also ignores such metaphorical but very real notions as Heisenberg’s Uncertainty Principle and the Lucas Critique.

The nut of the argument: the moment free will enters the equation, deterministic approaches become untenable. Our techniques are governed by ANOVA and its relatives, while the world of social interaction, or society, is governed by discrete events that do not fall within the assumptive confines of our scientific notion of trend.

This problem is well illustrated by Nassim Nicholas Taleb in his book The Black Swan. Taleb refers to it as the ludic fallacy, summarized as “the misuse of games to model real-life situations.” He characterizes the fallacy as mistaking the map for the reality.

This is Taleb’s central argument and is a rebuttal of predictive mathematical models, as well as an attack on the idea of applying statistical models in complex domains. According to Taleb, statistics only work in casinos or places in which the odds are visible and defined. This conclusion rests upon the following three points.

• It is impossible to be in possession of all the information.
• Very small unknown variations in the data can have a huge impact (the butterfly effect; a toy demonstration follows this list).
• Theories/models based on empirical data are flawed, as events that have not taken place before cannot be accounted for.
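Here is that toy demonstration of the second bullet, using the logistic map, a standard chaotic system. The parameter and starting values are arbitrary choices of mine:

```python
# Two logistic-map trajectories whose starting points differ by one part
# in a billion. In the chaotic regime (r = 3.9) the gap grows until the
# trajectories are completely uncorrelated within a few dozen steps.
r = 3.9
x_a, x_b = 0.500000000, 0.500000001

for step in range(1, 51):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: {x_a:.6f} vs {x_b:.6f}  (gap {abs(x_a - x_b):.1e})")
```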

Taleb is highly critical of the notion that the unexpected can be predicted by extrapolating from variations in statistics based on past observations, especially when these statistics are presumed to represent samples from a bell-shaped curve. The point is easy to demonstrate: unlikely events occur significantly more frequently than the tails of the bell curve would indicate, and the falsification holds particularly well in the realm of social science. He goes on to claim that better descriptive tools include power laws and fractal geometry.
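The arithmetic behind that demonstration is straightforward. The sketch below compares the tail of a bell curve to the tail of a power law with Gourley's exponent; matching the two tails at two standard deviations is an arbitrary choice of mine, purely for illustration:

```python
import math

def gauss_tail(k):
    """P(X > k) for a standard normal: the bell curve's upper tail."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

alpha = 2.5
c = gauss_tail(2.0) * 2.0 ** (alpha - 1.0)   # scale the tails to agree at k = 2

for k in (2, 4, 6, 10):
    g = gauss_tail(k)
    p = c * k ** (-(alpha - 1.0))            # power-law tail: P(X > k) ~ k^-(alpha-1)
    print(f"k = {k:2d}: Gaussian {g:.2e}, power law {p:.2e}, ratio {p / g:.1e}")
```

By ten standard deviations the power law assigns the event a probability some twenty orders of magnitude higher than the Gaussian does, which is exactly the gap Taleb is pointing at.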

Taleb’s idea that power laws and fractal geometry provide better descriptive tools may hold some promise for discovering new approaches to the problem, but only if we start to better understand what is actually possible in the realm of the predictive. One place to start might be to recognize that understanding our own vulnerabilities may be the best predictor of enemy behavior we will ever have. Wargames can certainly help us with that, but we have a lot of ill-conceived notions to overcome.