Hard Thoughts About Security, Sciences, and the Humanities

Image of the 9-11 attacks from space, taken by NASA. (Available from the Wikipedia entry on the 9-11 attacks.)

In the wake of another shooting of unarmed American servicemen, the Navy (according to NBC) “plans to station armed guards at all of its reserve centers across the country.” That might be a good idea, but it falls far short of what most Americans have been calling for as an apparently common-sense reaction to attacks on servicemen and women on their bases: let them carry guns. I mean, these guys are trained to handle firearms, right? What could be more obvious than giving a gun to a soldier or marine?

Yeah, it’s not actually that obvious. And it’s not just politics standing in the way:

Negligent discharges: One subject the military really doesn’t like to talk about (Foreign Policy)

Here’s an amazing number that I had never seen before: Since the beginning of the U.S. operation in Iraq [through May 2011], more than 90 U.S. military personnel have been killed there by negligent weapons discharges.

‘Disturbing trend’ seen in negligent discharges of weapons in Afghanistan (Stars and Stripes)

In the past 18 months, troops in Afghanistan have accidentally killed themselves or others at least six times and wounded nearly two dozen more troops through unsafe weapons handling, according to Army statistics released to Stars and Stripes.


There are other reasons for not issuing weapons to on-base personnel (the logistical headache is immense, especially since bases have to be locked down whenever a weapon is misplaced), but the big one is simple: handing out guns is liable to end up killing more people than the terrorists ever could.

Terrorist attacks are scary, but in many cases the irrational reaction to scary things is more dangerous than the thing people are afraid of. Fear might not be the only thing we have to fear, but it’s definitely near the top of the list. Another example: more folks probably died in car crashes on trips they took to avoid flying after 9-11 than died in the 9-11 attacks themselves.

In the months after the 2001 terror attacks, passenger miles on the main US airlines fell by between 12% and 20%, while road use jumped. The change is widely believed to have been caused by concerned passengers opting to drive rather than fly. Travelling long distances by car is more dangerous than travelling the same distance by plane. Measuring the exact effect is complex because there is no way of knowing for sure what the trends in road travel would have been had 9/11 not happened. However, Professor Gerd Gigerenzer, a German academic specialising in risk, has estimated that an extra 1,595 Americans died in car accidents in the year after the attacks – indirect victims of the tragedy.

That was just the first 12 months after the attack. If the trend continued for another few years–even at a reduced rate–it could easily be the case that the number surpassed the 9/11 death toll.
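To see how quickly that arithmetic works out, here is a rough back-of-envelope sketch. The first-year figure of 1,595 is Gigerenzer’s estimate quoted above, and roughly 2,977 is the commonly cited 9/11 death toll; the 50%-per-year decay in the effect is purely a hypothetical assumption for illustration, not a measured trend.

```python
# Back-of-envelope: cumulative extra road deaths vs. the 9/11 death toll.
# The first-year figure (1,595) is Gigerenzer's estimate; the yearly
# decay rate is a purely hypothetical assumption for illustration.
NINE_ELEVEN_DEATHS = 2977   # commonly cited 9/11 death toll
first_year_extra = 1595     # Gigerenzer's estimated extra road deaths, year one
decay = 0.5                 # hypothetical: the effect halves each year

total = 0.0
extra = first_year_extra
for year in range(1, 6):    # five years after the attacks
    total += extra
    extra *= decay

print(round(total))         # ~3,090: already above the 9/11 toll
```

Even with the effect cut in half every year, the cumulative count passes the 9/11 death toll within a few years under this assumption.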

As human beings we like to pretend that we don’t put a price tag on human life, but that’s not true. We do. All the time. We just don’t actually look at it. In my systems engineering courses, for example, we learned various ways to extrapolate a price value for a human life from indirect decisions. Simplistic example: suppose there’s a dangerous portion of a highway with no physical barrier between opposing lanes, and every year 5 people die in accidents there. Installing a concrete barrier would cost $100,000 and lower the toll to 4 deaths per year. If the barrier doesn’t get installed then, willingly or not, we’re saying that a human life is worth less than $100,000 in this case.
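The implied-value calculation in the barrier example is just a division, but writing it out makes the logic explicit. This is a minimal sketch using the hypothetical numbers from the paragraph above (it ignores discounting, barrier lifetime, and maintenance costs):

```python
# Implied value-per-life from an infrastructure decision, using the
# hypothetical highway-barrier numbers from the text above.
def implied_value_per_life(cost: float, lives_saved_per_year: float) -> float:
    """Cost per statistical life saved (ignores discounting and lifetime)."""
    return cost / lives_saved_per_year

barrier_cost = 100_000   # one-time cost of the concrete barrier
lives_saved = 5 - 4      # annual deaths drop from 5 to 4

threshold = implied_value_per_life(barrier_cost, lives_saved)
print(f"${threshold:,.0f} per life saved")  # $100,000 per life saved
# Declining to build implicitly values a life at less than this number.
```

Run the same division across many real decisions and you get the wildly inconsistent implied values the next paragraph describes.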

Of course you can’t actually derive a “real” value of human life that way, but that’s one of the most interesting findings: apply this kind of analysis across a wide range of examples–from road safety to asbestos removal–and you see that when the threat isn’t scary (as with traffic deaths) the implied value of a human life is very low. But when it is scary–as with asbestos–we will often decide as a society to spend millions of dollars or more per life saved.

It’s not just about money, of course. I’m only using that as an example of the fact that–even when we don’t like to admit it–we have to make these kinds of trade-offs. They are unavoidable. This is why, every time I hear someone say something like, “We have to do whatever it takes to save even one life,” I have to stifle an urge to smack them. Anyone saying that is a fool or a liar. In either case, the last person who should be in charge of deciding what we’re willing to spend to save a life is the kind of person who pretends we don’t have to make the decision at all.

And money isn’t the only thing at stake. How many of our civil liberties and how much of our culture of openness have we already sacrificed in the name of preventing terrorist attacks? What are we getting for those sacrifices? Not much, most estimates seem to say, but the real answer is: no one knows. No one knows ’cause we’re not even supposed to ask the question. We’re not supposed to admit that there’s a tradeoff. That there’s a cost.

This kind of emotional decision-making is a double-edged disaster. We spend billions on scary things that aren’t that dangerous, and then refuse to spend smaller sums on things that could save large numbers of lives. I was in Hungary for the last two weeks and, in trying to explain why most of America doesn’t have effective public transportation networks, I got to explaining our culture of cars. I pointed out that, because transportation to school and sporting events and other activities is so complicated and time-expensive, we continue to let kids start driving at 16 in large part as a way to offload the burden from their parents. My Hungarian friend–in Hungary the minimum driving age is 18, and lots of people don’t get licenses until much later (if at all)–asked if the 16-year-old drivers were good drivers. Of course they are not, I said. They have very little training, very little experience, and are dangerously immature. Doesn’t that result in danger? Well, yes it does. Off the top of my head, there are about 30,000 driving-related fatalities in the US every year. A lot of those don’t have anything to do with teenage drivers (drunk driving accounts for a huge portion), but there’s no doubt that thousands of kids are killed or seriously injured every single year. What would it cost to save them? Who knows. Where’s the rhetoric about, “If we can save even one life…”? Nowhere. Because it’s not scary.

I put a lot of emphasis–I’ve done it in this post and we do it in many of our blogs at Difficult Run–on the kind of quantitative analysis that you get from economics (my background) or engineering (Bryan’s) or business (Walker’s) or computer science (Ro’s). Sometimes I even go out of my way to take a swipe at the humanities–especially modern art and academia. But I understand very well that these are not fundamentally quantitative questions. Neither economics, nor engineering, nor business, nor computer science can answer questions about the tradeoffs we have to make between dollars or hours or civil liberties on the one hand and lives on the other. These are fundamentally philosophical and moral questions, and we have to seek philosophical and moral answers. And, like all philosophical and moral questions, they will probably never have a clear, objective, final answer.

But seeking those answers is worth it. It’s worth it from a practical standpoint because the kind of emotionally-driven policy that arises in the absence of clear-eyed analysis is Pareto inefficient. Sorry for the econ jargon, but it’s an important term. If a situation is Pareto efficient, it means that you can’t make one person better off without taking something from someone else. Pareto efficiency isn’t necessarily a good place to be; it could be very unfair, for example. If you give $10 to Tom and $90 to Sue, that’s Pareto efficient, but it’s not fair. But the one thing Pareto efficiency does get you is no waste. If you give $10 to Tom and $10 to Sue and then light the other $80 on fire, that’s Pareto inefficient. So Pareto efficiency shouldn’t be a final goal, but it should be a bare minimum. And right now, there’s no doubt that our patchwork response to security is far, far from Pareto efficient.

Simple example of Pareto efficiency from the Wikipedia page. All the red dots are Pareto efficient because there’s no waste: you’re getting all you can out of Item 1 and Item 2. The only way to get more of Item 1 in that case is to give up some of Item 2 (or vice versa). The gray dots are Pareto inefficient: you can get more of Item 1 without sacrificing Item 2 (or vice versa). If you imagine the two items are “Safety” and “Civil Liberties,” you can see that picking which of the red points to aim for is difficult, but picking *ANY* of the gray points is insanity.
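The dominance test behind that picture is easy to make concrete. In this sketch (the coordinates are made up for illustration), a bundle is Pareto efficient exactly when no other bundle offers at least as much of both items and strictly more of one:

```python
# Pareto efficiency among (item1, item2) bundles: a point is dominated if
# some other point is >= on both coordinates and different (hence > on one).
def dominates(a, b):
    """True if bundle a Pareto-dominates bundle b."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def pareto_efficient(points):
    """Return the bundles not dominated by any other bundle."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (Safety, Civil Liberties) bundles:
bundles = [(1, 9), (4, 7), (7, 4), (9, 1), (3, 3), (5, 5)]
print(pareto_efficient(bundles))  # the "red dots": no waste on either axis
```

Here (3, 3) is a gray dot: (4, 7) beats it on both axes, so choosing it throws away both safety and liberty at once. The rest form the frontier along which the genuinely hard trade-offs live.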

Seeking the answers is also worthwhile from a philosophical standpoint. Socrates said the unexamined life is not worth living. I believe that’s because if you don’t examine your life it’s not really your life. You’re just acting out the social conditioning you’ve been raised with. You’re not an independent agent in that case. You’re just a conduit through which cause and effect flow. Once you examine your life–once you adopt certain principles and attitudes and goals based on your own deliberation and values–you start to truly live. And this is true even if you don’t actually change very many of your decisions or actions. You might take a hard look and decide that the values and goals you ended up with from your parents and society are actually fairly reasonable and keep things more or less as-is, but–even in that case–there’s been a tremendously important shift because now they are your values and your goals.

So, when I write posts expressing cynicism about modern art or academic philosophy1, it’s not because I think that art or philosophy are dispensable. It’s because I think that they are indispensable, but that (1) the modern incarnations have often lost their way and become empty shells and (2) they become monstrous in the absence of a commitment to including hard data where applicable.

From my perspective, it doesn’t take a lot to remind an economist that art isn’t accounted for in the GDP figures. Physicists ignore air resistance in an awful lot of their models, but it’s not like they actually get confused and forget that it exists. Economists ignore lots of human foibles in their models for the same reason, and they are just as unlikely to mistake their simplified models for the real thing. In fact, I would argue that very few people are more aware of human foibles than economists, precisely because they are so routinely reminded of the incredible gap between their simple models and messy reality. Thus we get books like Nudge, The Myth of the Rational Voter, and Predictably Irrational: investigations, written by economists, into how economic models of human nature fail.

On the other hand, I have routinely had to sit through painfully ignorant scientific or economic diatribes by humanities scholars who literally don’t have the first clue about what they are talking about. There’s a reason Marx is not taken seriously as an economist by economists, and yet you will still find plenty of Marxists in English departments who either don’t know how or don’t care to separate his philosophical stances (which continue to be relevant and interesting) from his economic theories (which are about as relevant to modern economic policymaking as Copernicus’ model of the solar system is to getting an astronaut to the moon2).

In simple terms: I know lots of economists and engineers and scientists who are conversant with, for example, pragmatism, but I don’t know of any humanities professors who could give you a cogent explanation of, say, marginalism.

Maybe that assessment is off base. It could be.

But the point–and this is true regardless of whether the humanities or the sciences are in deeper trouble today–is that we need an approach that embraces both and rejects fear-based decision making. We need folks who are conversant in the basics of statistics and math, who have an intuitive desire to ground their analysis in hard data, and who are then willing to use that foundation for moral and philosophical arguments about how to set policy: open-eyed analysis rather than emotionally-driven instinct.

Is that asking a lot? Maybe. But come on, people. How much time do we spend watching cat videos or reflexively sharing political memes that assume the other side is all composed of evil morons?

We can do better than we are doing.


2 thoughts on “Hard Thoughts About Security, Sciences, and the Humanities”

  1. Concerning scary vs. non-scary threats. As far as I can tell, the difference between the two is the measure of direct control we have over the occurrence. For that reason, terrorist attacks are scary (even though they kill fewer people than traffic accidents do) and negligent discharges are not, as they come from mishandling firearms. And nobody thinks they’ll be the ones to screw up.

  2. I agree with virtually all of that. As you might imagine, listening to non-philosophers who think they understand the philosophy they’re talking about well enough to apply lessons from their own fields to it quite often reveals painful ignorance, as well. Conversely, many of the memorable talks I’ve seen in Philosophy departments involve the application (perhaps misapplication, admittedly) of new information from psychology or physics to long-debated philosophical problems. So, while there are certainly many poor uses of data in philosophy, and many who feel that the value of prominent thinkers of the past is worth defending against the tide of appreciation for whatever the trends of the day happen to be, the view that philosophers need to take data seriously is a big deal in many departments. You’ve more allies than you might realize.
