Hold Up Your Light

You’ve probably seen something like this meme in your own social media feeds. 

I’m gonna do two things to this meme. First: debunk it. Not because it’s all that notable, but because it’s a pretty typical example of something scary and nasty in our society. And that’s what we’re going to get to second: zooming out from this particular specimen to the whole species.

This meme has the appearance of being some kind of insight or realization into American politics in the context of an important current event (the pandemic), but all of that is just a front. There is no analysis and there is no insight. It’s just a pretext to deliver the punchline: conservatives are selfish and bad. 

You can think of the pseudo-argument as being like the outer coating on a virus. The sole purpose is to penetrate the cell membrane to deliver a payload. It’s a means to an end, nothing more. 

Which means the meme, if you ignore the candy coating, is just a cleverly packaged insult. 

You see, conservatives don’t object to pandemic regulations because they would rather watch their neighbors die than shoulder a trivial inconvenience. They object to pandemic regulations (when they do; I think the existence of objections is exaggerated) because Americans in general and conservatives in particular have an anti-authoritarian streak a mile wide. Anti-authoritarianism is part of who we are. It’s not always reasonable or mature, but then again, it’s not a bad reflex to have, all things considered.

One of the really clever things about the packaging around this insult is that it’s kind of self-fulfilling. It accuses conservatives of being stubborn while it also insults them. What happens to people who are already being a little stubborn if you start insulting them? In most cases, they get more stubborn. Which means every time a conservative gets mad about this meme, a liberal spreading it can think, “Yeah, see? I knew I was right.”

Oh, and if incidentally it happens to actually discourage mask use? Oh well. That’s just collateral damage. Because people who spread memes like this care more about winning political battles than epidemiological ones. 

Liberals who share this meme are guaranteed to get what they really want: that little frisson of superiority. Because they care. They are willing to sacrifice. They are reasonable. So reasonable that they are happy to titillate their own feeling of superiority even if it has the accidental side effect of, you know, undermining compliance with those rules they care so much about. 

I’m being a little cynical here, but only a little. This meme is just one example of countless millions that all have the same basic function: stir controversy. And yes, there are conservative analogs to this liberal meme that do the exact same thing. I don’t see as many of them because I’m quicker to mute fellow conservatives who aggravate me than liberals. 

How did we get here?

You can blame the Russians, if you like. The KGB meddled with American politics as much as they could for decades before the fall of the USSR, and Putin was around for that. Why would the FSB (its contemporary successor) have given up the old hobby? But the KGB wasn’t ever any good at it, and I’m skeptical that the FSB has cracked the code. I’m sure their efforts don’t help, but I also don’t think they’re largely to blame. 

We’re doing this to ourselves.

The Internet runs on ads, and that means the currency of the Internet is attention. You are not the customer. You are the commodity. That’s not just true of Facebook and it’s not just a slogan. It’s the underlying reality of the Internet, and it sets the incentives that every content producer has to contend with if they want to survive.

The way to harvest attention is through engagement. Every content producer out there wants to hijack your attention by getting you engaged in what they’re telling you. There are a lot of ways to do this. Clickbait headlines hook your curiosity, attractive models wearing very little snag your libido, and so on. But the king of engagement seems to be outrage, and there’s an insidious reason why.

Other attention grabbers work on only a select audience at a time. Setting aside bisexuals, attractive male models will grab one half of the audience and attractive female models the other, but you have to pick one or the other. 

But outrage lets you engage two audiences with one piece of content. That’s what a meme like this one does, and it’s why it’s so successful. It infuriates conservatives while at the same time titillating liberals. (Again: I could just as easily find a conservative meme that does the opposite.) 

When you realize that this meme is actually targeting conservatives and liberals, you also realize that the logical deficiency of the argument isn’t a bug. It’s a feature. It’s just another provocation, the way that some memes intentionally misspell words just to squeeze out a few more interactions, a few more clicks, a few more shares. If you react to this meme with an angry rant, you’re still reacting to this meme. That means you’ve already lost, because you’ve given away your attention. 

A lot of the most dangerous things in our environment aren’t trying to hurt us. Disease and natural disasters don’t have any intentions. And even the evils we do to each other are often byproducts of misaligned incentives. There just aren’t that many people out there who really like hurting other people. Most of us don’t enjoy that at all. So the conventional image of evil–mustache-twirling super-villains who want to murder and torture–is kind of a distraction. The real damage isn’t going to come from the tiny population of people who want to cause harm. It’s going to come from the much, much, much larger population of people who don’t have any particular desire to do harm, but who aren’t really that concerned with avoiding it, either. These people will wreck the world faster than anyone else, precisely because none of them is doing that much damage on their own and none of them is motivated by malice. That makes it easier for them to rationalize their individual contribution to an environment that, in the aggregate, becomes extremely toxic. 

At this point, I’d really, really like everyone reading this to take a break and read Scott Alexander’s short story, “Sort by Controversial”. Go ahead. I’ll wait.

Back? OK, good, let’s wrap this up. The meme above is a scissor (that’s from Alexander’s story, if you thought you could skip reading it). The meme works by presenting liberals with an obviously true statement and conservatives with an obviously false statement. For liberals: You should tolerate minor inconveniences to save your neighbors. For conservatives: You should do whatever the government tells you to do without question.

That’s the actual mechanism behind scissors. It’s why half the people think it’s obviously true and the other half think it’s obviously false. They’re not actually reacting to the same issue. But they are reacting to the same meme. And so they fight, and–since they both know their position is obvious–the disagreement rapidly devolves. 

The reality is that most people agree on most issues. You can’t really find a scissor where half the population thinks one thing and half the population thinks the other because there’s too much overlap. But you can present two halves of the population with subtly different messages at the same time such that one half viscerally hates what they hear and the other half passionately loves what they hear, and–as often as not–they won’t talk to each other long enough to realize that they’re not actually fighting over the same proposition. 

This is how you destroy a society.

The truth is that it would be better, in a lot of ways, if there were someone out there who was doing this to us. If it were the FSB or China or terrorists or even a scary AI (like a nerdier version of Skynet) there would be some chance they could be opposed and–better still–a common foe to unite against.

But there isn’t. Not really. There’s no conspiracy. There’s no enemy. There’s just perverse incentives and human nature. There’s just us. We’re doing this to ourselves.

That doesn’t necessarily mean we’re doomed, but it does mean there’s no easy or quick solution. I don’t have any brilliant ideas at all other than some basic ones. Start off with: do no harm. Don’t share memes like this. To be on the safe side, maybe just don’t share political memes at all. I’m not saying we should have a law. Just that, individually and of our own free will, we should collectively maybe not do that.

As a followup: talk to people you disagree with. You don’t have to do it all the time, but look for opportunities to disagree with people in ways that are reasonable and compassionate. When you do get into fights–and you will–try to reach out afterwards and patch up relationships. Try to build and maintain bridges. 

Also: Resist the urge to adopt a warfare mentality. War is a common metaphor–and there’s a reason it works–but if you buy into that way of thinking it’s really hard not to get sucked into a cycle of endless mutual radicalization. If you want a Christian way of thinking about it, go with Ephesians 6:12:

For our struggle is not against enemies of blood and flesh, but against the rulers, against the authorities, against the cosmic powers of this present darkness, against the spiritual forces of evil in the heavenly places.

There are enemies, but the people in your social network are not them. Not even when they’re wrong. Those people are your brothers and your sisters. You want to win them over, not win over them.

Lastly: cultivate all your in-person friendships. Especially the random ones. The coworkers you didn’t pick? The family members you didn’t get to vote on? The neighbors who happen to live next door to you? Pay attention to those little relationships. They are important because they’re random. When you only build relationships with people who share your interests and perspectives, you’re missing out on one of the most fundamental and essential aspects of human nature: you can relate to anyone. Building relationships with people who just happen to be in your life is probably the single most important way we can repair our society, because that’s what society is. It’s not the collection of people we chose that defines our social networks; it’s the extent to which we can form attachments to people we didn’t choose. 

What are the politics of your coworkers and family and neighbors? Who cares. Don’t let politics define all your relationships, positive or negative. Find space outside politics, and cherish it. 

Times are dark. They may yet get darker, and none of us can change that individually.

But by looking for the good in the people who are randomly in your life, you can hold up a light.

So do it.

Why I’m An AI Skeptic

There are lots of people who are convinced that we’re a few short years away from economic apocalypse as robots take over all of our jobs. Here’s an example from Business Insider:

Top computer scientists in the US warned over the weekend that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies.

These fears are based on hype. The capabilities of real-world AI-like systems are far, far from what non-experts expect of them, and the gap between where we are and where people expect to be is vast and–in the short-term at least–mostly insurmountable. 

Let’s take a look at where the hype comes from, why it’s wrong, and what to expect instead. For starters, we’ll take all those voice-controlled devices (Alexa, Siri, Google Assistant) and put them in their proper context.

Voice Controls Are a Misleading Gimmick

Prologue: Voice Controls are Frustrating

A little while back I was changing my daughter’s diaper and thought, hey: my hands are occupied but I’d like to listen to my audiobook. I said, “Alexa, resume playing Ghost Rider on Audible.” Sure enough: Alexa not only started playing my audiobook, but the track picked up exactly where I’d left off on my iPhone a few hours previously. Neat!

There was one problem: I listen to my audiobooks at double speed, and Alexa was playing it at normal speed. So I said, “Alexa, double playback speed.” Uh-oh. Not only did Alexa not increase the playback speed, but it did that annoying thing where it starts prattling on endlessly about irrelevant search results that have nothing to do with your request. I tried five or six different varieties of the command and none of them worked, so I finally said, “Alexa, shut up.” 

This is my most common command to Alexa. And also Siri. And also the Google Assistant. I hate them all.

They’re supposed to make life easier but, as a general rule, they do the exact opposite. When we got our new TV I connected it to Alexa because: why not? It was kind of neat to turn it on using a voice command, but it really wasn’t that useful, because voice commands didn’t work for things like switching video inputs (so you still had to find the remote anyway) and because the voice command to turn it off never worked, even when the volume was pretty low. 

Then one day the TV stopped working with Alexa. Why? Who knows. I have half-heartedly tried to fix it six or seven times over the last year to no avail. I spent more time setting up and unsuccessfully debugging the connection than I ever saved. 

This isn’t a one-off exception; it’s the rule. Same thing happened with a security camera I use as a baby monitor. For a few weeks it worked with Alexa until it didn’t. I got that one working again, but then it broke again and I gave up. Watching on the Alexa screen wasn’t ever really more useful than watching on my phone anyway.  

So what’s up? Why is all this nifty voice-activated stuff so disappointing?

If you’re like me, you were probably really excited by all this voice-activation stuff when it first started to come out because it reminded you of Star Trek: The Next Generation. And if you’re like me, you also got really annoyed and jaded after actually trying to use some of this stuff when you realized it’s all basically an inconvenient, expensive, privacy-smashing gimmick.  

Before we get into that, let me give y’all one absolutely vital caveat. The one true and good application of voice control technology is accessibility. For folks who are blind or can’t use keyboards or mice or other standard input devices, this technology is not a gimmick at all. It’s potentially life-transforming. I don’t want any of my cynicism to take away from that really, really important exception.

But that’s not how this stuff is being packaged and marketed to the broad audience, and it’s that–the explicit and implicit promises and all the predictions people build on top of them–that I want to address.

CLI vs. GUI

To put voice command gimmicks in their proper context, you have to go back to the beginning of popular user interfaces, and the first of those was the CLI: Command Line Interface. A CLI is a screen, a keyboard, and a system that allows you to type commands and see feedback. If you’re tech savvy then you’ve used the command line (AKA terminal) on Mac or Unix machines. If you’re not, then you’ve probably still seen the Windows command prompt at some point. All of these are different kinds of CLI. 

In the early days of the PC (note: I’m not going back to the ancient days of punch cards, etc.) the CLI was all you had. Eventually this changed with the advent of the GUI: graphical user interface.

The GUI required new technology (the mouse), better hardware (to handle the graphics), and also a whole new way of thinking about how the user interacts with the computer. Instead of thinking about commands, the GUI emphasizes objects. In particular, the GUI has used a kind of visual metaphor from the very beginning. The most common of these are icons, but it goes deeper than that. Buttons to click, a “desktop” as a flat surface to organize things, etc. 

Even though you can actually do a lot of the same things in either a CLI or a GUI (like moving or renaming files), the whole interaction paradigm is different. You have concepts like clicking, double-clicking, right-clicking, dragging-and-dropping in the GUI that just don’t have any analog in the CLI.

It’s easy to think of the GUI as superior to the CLI since it came later and is what most people use most of the time, but that’s not really the case. Some things are much better suited to a GUI, including some really obvious ones like photo and video editing. But there are still plenty of tasks that make more sense in a CLI, especially related to installing and maintaining computer systems. 

The biggest difference between a GUI and a CLI is feedback. When you interact with a GUI you get constant, immediate feedback to all of your actions. This in turn aids in discoverability. What this means is that you really don’t need much training to use a GUI. By moving the mouse around on the screen, you can fairly easily see what commands are available, for example. This means you don’t need to memorize how to execute tasks in a GUI. You can memorize the shortcuts for copy and paste, but you can also click on “Edit” and find them there. (And if you forget they’re under the edit menu, you can click File, View, etc. until you find them.)

The feedback and discoverability of the GUI is what has made it the dominant interaction paradigm. It’s much easier to get started and much more forgiving of memory lapses. 

Enter the VUI

When you see commercials of attractive, well-dressed people interacting with voice assistants, the most impressive thing is that they use normal-sounding commands. The interactions sound conversational. This is what sets the (false) expectation that interacting with Siri is going to be like interacting with the computer on board the Enterprise (NCC 1701-D). That way lies frustration and madness, however. A better way to think of voice control is as a third user interface paradigm, the VUI or voice user interface.

There is one really cool aspect of a VUI, and that’s the ability of the computer to transcribe spoken words to written text. That’s the magic. 

However, once you account for that you realize that the rest of the VUI experience is basically a CLI… without a screen. Which means: without feedback and discoverability.

Those two traits that make the GUI so successful for everyday life are conspicuously absent from a VUI. Just like when interacting with a CLI, using a VUI successfully means that you have to memorize a bunch of commands and then invoke them just so. There is a little more leeway with a VUI than a CLI, but not much. And that leeway is at least partially offset by the fact that when you type in a command at the terminal, you can pause and re-read it to see if you got it all right before you hit enter and commit. You can’t even do that with a VUI. Once you open your  mouth and start talking, your commands are being executed (or, more often than not: failing to execute) on the fly. 
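
If it helps to picture what “a CLI without a screen” means in practice, here’s a toy sketch in Python. None of this is Alexa’s or Siri’s actual code, and the command names and canned responses are made up; it’s just an illustration of the interaction model: match the user’s words against a fixed list of known commands, with a little wiggle room, and fall back to useless search results when nothing matches.

```python
# A toy model of the VUI interaction loop. Not real Alexa/Siri code; purely
# illustrative. The command phrases and canned responses are invented.
COMMANDS = {
    "what time is it": lambda: print("It's 3:00 PM."),            # pretend handler
    "resume playing my audiobook": lambda: print("Resuming..."),  # pretend handler
}

def handle(utterance: str) -> None:
    action = COMMANDS.get(utterance.lower().strip())
    if action:
        action()  # recognized command: run it
    else:
        # The VUI equivalent of "command not found": prattle about search results.
        print(f"Here's what I found on the web for {utterance!r}...")

handle("What time is it")         # matches a memorized phrasing: works
handle("Double playback speed")   # anything off-script: fails, just like my Alexa did
```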

This is all bad enough, but in addition to basically being 1970s tech (except for the transcription part), the VUI faces the additional hurdle of being held up against an unrealistic expectation because it sounds like natural speech. 

No one sits down in front of a terminal window and expects to be able to type in a sentence or two of plain English and get the computer to do their bidding. Try asking Bash what time it is by typing “what time is it” at the prompt: it doesn’t go well. All you’re likely to get back is a curt “command not found,” because the command Bash actually understands is date.

Even non-technical folks understand that you have to have a whole skillset to be able to interact with a computer using the CLI. That’s why the command line is so intimidating for so many folks.

But the thing is, if you ask Siri (or whatever), “What time is it?” you’ll get an answer. This gives the impression that–unlike a CLI–interacting with a VUI won’t require any special training. Which is to say: that a VUI is intelligent enough to understand you.

It’s not, and it doesn’t. 

A VUI is much closer to a CLI than a GUI, and our expectations for it should be set at the 1970s level instead of, like with a GUI, more around the 1990s. Aside from the transcription side of things, and with a few exceptions for special cases, a VUI is a big step backwards in usability. 

AI vs. Machine Learning

Machine Learning Algorithms are Glorified Excel Trendlines

When we zoom out to get a larger view of the tech landscape, we find basically the same thing: mismatched expectations and gimmicks that can fool people into thinking our technology is much more advanced than it really is.

As one example of this, consider the field of machine learning, which is yet another giant buzzword. Ostensibly, machine learning is a subset of artificial intelligence (the Grand High Tech Buzzword). Specifically, it’s the part related to learning. 

This is another misleading concept, though. The word “learning” carries an awful lot of hidden baggage. A better way to think of machine learning is just: statistics. 

If you’ve worked with Excel at all, you probably know that you can insert trendlines into charts. Without going into too much detail, an Excel trendline is an application of the simplest and most commonly used form of statistical analysis: ordinary least-squares regression. There are tons of guides out there that explain the concept; my point is just that nobody thinks the ability to click “show trendline” on an Excel chart means the computer is “learning” anything. There’s no “artificial intelligence” at play here, just a fairly simple set of steps to solve a minimization problem. 
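
If it helps to see how little magic is involved, here’s a minimal sketch in Python (with made-up numbers standing in for whatever is on your Excel chart) of the calculation behind a linear trendline: choose the slope and intercept that minimize the sum of squared errors, which has a simple closed-form answer.

```python
# Ordinary least-squares trendline, computed by hand and then checked against
# a library routine. The data points are made up for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # e.g. month number
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # e.g. sales that month

# Closed-form OLS solution: slope = cov(x, y) / var(x), intercept from the means.
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
print(f"trendline: y = {slope:.3f}x + {intercept:.3f}")

# Same answer from numpy's built-in least-squares polynomial fit (degree 1).
print(np.polyfit(x, y, deg=1))             # [slope, intercept]
```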

Although the bundle of algorithms available to data scientists doing machine learning is much broader and more interesting, they’re the same kind of thing. Random forests, support vector machines, naive Bayes classifiers: they’re all optimization problems, fundamentally the same as OLS regression (or other, slightly fancier statistical techniques like logistic regression).
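
To make that concrete, here’s a hedged sketch using scikit-learn (a standard Python machine learning library) and a few rows of toy data. A random forest, a support vector machine, a naive Bayes classifier, and a logistic regression all get driven through the exact same fit-then-predict routine; the only thing that changes is which optimization problem gets solved under the hood.

```python
# Four "machine learning" models, one workflow: fit parameters to data by
# solving an optimization problem, then predict. Toy data, illustrative only.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X = [[0, 0], [1, 1], [2, 2], [3, 3]]   # made-up features
y = [0, 0, 1, 1]                       # made-up labels

for model in (RandomForestClassifier(), SVC(), GaussianNB(), LogisticRegression()):
    model.fit(X, y)                                  # the "learning" step
    print(type(model).__name__, model.predict([[1.5, 1.5]]))
```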

As with voice-controlled devices, you’ll understand the underlying tech a lot better if you replace the cool, fancy expectations (like the Enterprise’s computer) with a much more realistic example (a command prompt). Same thing here. Don’t believe the machine learning hype. We’re talking about adding trendlines to Excel charts. Yeah, it’s fancier than that, but that example will give you the right intuition about the kind of activity that’s going on.

Last thing: don’t take all this knocking on machine learning the wrong way. I love me some machine learning. No, really, I do. As statistical tools the algorithms are great and certainly much more capable than an Excel trendline. This is just about getting your intuition a little more in line with what they are in a philosophical sense.

Are Robots Coming to Take Your Job?

So we’ve laid some groundwork by explaining how voice control services and machine learning aren’t as cool as the hype would lead you to believe. Now it’s time to get to the main event and address the questions I started this post with: are we on the cusp of real AI that can replace you and take your job?

You could definitely be forgiven for thinking the answer is an obvious “yes”. After all, it was a really big deal when Deep Blue beat Garry Kasparov in 1997, and since then there’s been a litany of John Henry moments. So-called AI has won at Go and Jeopardy, for example. Impressive, right? Not really.

First, let me ask you this. If someone said that a computer beat the reigning world champion of competitive memorization… would you care? Like, at all? 

Because yes, competitive memorization (aka memory sport) is a thing. Players compete to see how fast they can memorize the sequence of a randomly shuffled deck of cards, for example. Thirteen seconds is a really good time. If someone bothered to build a computer to beat that (something any tinkerer could do in a long weekend with no more specialized equipment than a smartphone) we wouldn’t be impressed. We’d yawn. 

Memorizing the order of a deck of cards is a few bytes of data. Not really impressive for computers that store data by the terabyte and measure read and write speeds in gigabytes per second. Even the visual recognition part–while certainly tougher–is basically a solved problem. 

With a game like chess–where the rules are perfectly deterministic and the playspace is limited–it’s just not surprising or interesting for computers to beat humans. In one important sense of the word, chess is just a grandiose version of Tic-Tac-Toe. What I mean is that there are only a finite number of moves to make in either Tic-Tac-Toe or chess. The number of moves in Tic-Tac-Toe is very small, and so it is an easily solved game. That’s the basic plot of WarGames and the reason nobody enjoys playing Tic-Tac-Toe after they learn the optimal strategy when they’re like seven years old. Chess is not solved yet, but that’s just because the number of moves is much larger. It’s only a matter of time until we brute-force the solution to chess. Given all this, it’s not surprising that computers do well at chess: it is the kind of thing computers are good at. Just like memorization is the kind of thing computers are good at.
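
To make “solved” concrete, here’s a rough Python sketch (a toy, nothing like a real chess engine) that brute-forces tic-tac-toe with minimax search: try every legal move, assume both players keep playing perfectly, and score the result. The whole game tree is only a few thousand distinct positions, so it finishes instantly; chess is the same idea applied to a game tree far too large to enumerate.

```python
# Exhaustively "solving" tic-tac-toe with minimax. Boards are 9-character
# strings of "X", "O", and " " (spaces for empty squares).
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of the position for X under perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    outcomes = [value(board[:i] + player + board[i + 1:], nxt)
                for i, cell in enumerate(board) if cell == " "]
    return max(outcomes) if player == "X" else min(outcomes)

# With both sides playing perfectly, the game is a draw: prints 0.
print(value(" " * 9, "X"))
```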

Now, the success of computers at playing Go is much more impressive. This is a case where the one aspect of artificial intelligence with any genuine promise–machine learning–really comes to the fore. Machine learning is overhyped, but it’s not just hype. 

On top of successfully learning to play Go better than a human, machine learning was also used to dramatically increase the power of automated language translation. So there’s some exciting stuff happening here, but Go is still a nice, clean system with orderly rules that is amenable to automation in ways that real life–or even other games, like Starcraft–is not.

So let’s talk about Starcraft for a moment. I recently read a PC Magazine article that does a great job of providing a real-life example: it covers the controversy over an AI that managed to defeat top-ranked human players in Starcraft II. Basically, a team created an AI (AlphaStar) to beat world-class Starcraft II players. Since Starcraft is a much more complex game (dozens of unit types, real-time interaction, etc.) this sounds really impressive. The problem is: they cheated. 

When a human plays Starcraft part of what they’re doing is looking at the screen and interpreting what they see. This is hard. So AlphaStar skipped it. Instead of building a system so that they could point a camera at the screen and use visual recognition to identify the units and terrain, they (1) built AlphaStar to only play on one map over and over again so the terrain never changed and (2) tapped into the game’s internal data to directly get at the exact location of all the units. Not only does this bypass the tricky visual recognition and interpretation problem, it also meant that AlphaStar always knew where every single unit was at every single point in time (while human players can only see what’s on the screen and have to scroll around the map). 

You could argue that Deep Blue didn’t use visual recognition either. The moves were fed into the computer directly. The difference is that a human chess player works from exactly the same information (the positions of all the pieces are fully known to both sides), so the playing field was even. Not so with AlphaStar. 

That’s why the “victory” of AlphaStar over world-class Starcraft players was so controversial. The deck was stacked. The AI could see the entire board at once (something the game itself doesn’t allow; that’s a restriction of the way the game is played, not just of human capacity), and it only managed the feat by playing on one map over and over again. If you moved AlphaStar to a different map, world-class players could have easily beaten it. Practically anyone could have easily beaten it.  

So here’s the common theme between voice commands and AlphaStar: as soon as you take one step off the beaten path, they break. Just like a CLI, a VUI (like Alexa or Siri) breaks as soon as you enter a command it doesn’t perfectly expect. And AlphaStar goes from world-class pro to bumbling child if you swap from a level it’s been trained on to one it hasn’t. 

The thing to realize is that this limitation isn’t just about how these programs perform today. It’s about the fundamental expectations we should have for them, ever.

Easy Problems and Hard Problems

This leads me to the underlying reason for all the hype around AI. It’s very, very difficult for non-experts to tell the difference between problems that are trivial and problems that are basically impossible. 

For a good overview of the concept, check out Range by David Epstein. He breaks the world into “kind problems” and “wicked problems”. Kind problems are problems like chess or playing Starcraft again and again on the same level with direct access to unit location. Wicked problems are problems like winning a live debate or playing Starcraft on a level you’ve never seen before, maybe with some new units added in for good measure.

If your job involves kind problems–if it’s repeatable with simple rules for success and failure–then a robot might steal your job. But if your job involves wicked problems–if you have to figure out a new approach to a novel situation on a regular basis–then your job is safe now and for the foreseeable future.

This doesn’t mean nobody should be worried. The story of technological progress has largely been one of automation. We used to need 95% or more of the human population to grow food just so we’d have enough to eat. Thanks to automation and labor-augmentation, that proportion is down to the single digits. Every other job that exists, other than subsistence farming, exists because of advances to farming technology (and other labor-saving advances). In the long run: that’s great!

In the short run, it can be traumatic both individually and collectively. If you’ve invested decades of your life getting good at one of the tasks that robots can do, then it’s devastating to suddenly be told your skills–all that effort and expertise–are obsolete. And when this happens to large numbers of people, the result is societal instability.

So it’s not that the problem doesn’t exist. It’s more that it’s not a new problem, and it’s one we should manage as opposed to “solve”. The reason for that is that the only way to solve the problem would be to halt forward progress. And, unless you think going back to subsistence farming or hunter-gathering sounds like a good idea (and nobody really believes that, no matter what they say), we should look forward with optimism for the future developments that will free up more and more of our time and energy for work that isn’t automatable. 

But we do need to manage that progress to mitigate the personal and social costs of modernization. Because there are costs, and even if they are ultimately outweighed by the benefits, that doesn’t mean they just disappear.

I Want to be Right

Not long ago I was in a Facebook debate and my interlocutor accused me of just wanting to be right. 

Interesting accusation.

Of course I want to be right. Why else would we be having this argument? But, you see, he wasn’t accusing me of wanting to be right but of wanting to appear right. Those are two very different things. One of them is just about the best reason for debate and argument you can have. The other is just about the worst. 

Anyone who has spent a lot of time arguing on the Internet has asked themselves what the point of it all is. The most prominent theory is the spectator theory: you will never convince your opponent but you might convince the folks watching. There’s merit to that, but it also rests on a questionable assumption, which is that the default purpose is to win the argument by persuading the other person and (when that fails) we need to find some alternative. OK, but I question whether we’ve landed on the right alternative.

I don’t think the primary importance of a debate is persuading spectators. The most important person for you to persuade in a debate is yourself.

It’s a truism these days that nobody changes their mind, and we all like to one-up each other with increasingly cynical takes on human irrationality and intractability. The list of cognitive biases on Wikipedia is getting so long that you start to wonder how humans manage to reason at all. Moral relativism and radical non-judgmentalism are grist for yet more “you won’t believe this” headlines, and of course there’s the holy grail of misanthropic cynicism: the argumentative theory. As Haidt summarizes one scholarly article on it:

Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments. That’s why they call it The Argumentative Theory of Reasoning. So, as they put it, “The evidence reviewed here shows not only that reasoning falls quite short of reliably delivering rational beliefs and rational decisions. It may even be, in a variety of cases, detrimental to rationality. Reasoning can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions. This explains the confirmation bias, motivated reasoning, and reason-based choice, among other things.”

Jonathan Haidt in “The Righteous Mind”

Reasoning was not designed to pursue truth.

Well, there you have it. Might as well just admit that Randall Munroe was right and all pack it in, then, right?

Not so fast.

This whole line of research has run away with itself. We’ve sped right past the point of dispassionate analysis and deep into sensationalization territory. Case in point: the backfire effect. 

According to RationalWiki “the effect is claimed to be that when, in the face of contradictory evidence, established beliefs do not change but actually get stronger.” The article goes on:

The backfire effect is an effect that was originally proposed by Brendan Nyhan and Jason Reifler in 2010 based on their research of a single survey item among conservatives… The effect was subsequently confirmed by other studies.

Entry on RationalWiki

If you’ve heard of it, it might be from a popular post by The Oatmeal. Take a minute to check it out. (I even linked to the clean version without all the profanity.)


Wow. Humans are so irrational that not only can you not convince them with facts, but if you present facts they believe the wrong stuff even more.

Of course, it’s not really “humans” that are this bad at reasoning. It’s some humans. The original research was based on conservatives and the implicit subtext behind articles like the one on RationalWiki is that they are helplessly mired in irrational biases but we know how to conquer our biases, or at the very least make some small headway that separates us from the inferior masses. (Failing that, at least we’re raising awareness!) But I digress.

The important thing isn’t that this cynicism is always covertly at least a little one-sided; it’s that the original study has been really hard to replicate. From an article on Mashable:

[W]hat you should keep in mind while reading the cartoon is that the backfire effect can be hard to replicate in rigorous research. So hard, in fact, that a large-scale, peer-reviewed study presented last August at the American Political Science Association’s annual conference couldn’t reproduce the findings of the high-profile 2010 study that documented backfire effect.

Uh oh. Looks like the replication crisis–which has been just one part of the larger we-can’t-really-know-anything fad–has turned to bite the hand that feeds it. 

This whole post (the one I’m writing right now) is a bit weird for me, because when I started blogging my central focus was epistemic humility. And it’s still my driving concern. If I have a philosophical core, that’s it. And epistemic humility is all about the limits of what we (individually and collectively) can know. So, I never pictured myself being the one standing up and saying, “Hey, guys, you’ve taken this epistemic humility thing too far.” 

But that’s exactly what I’m saying.

Epistemic humility was never supposed to be a kind of “we can never know the truth for absolute certain so may as well give up” fatalism. Not for me, anyway. It was supposed to be about being humble in our pursuit of truth. Not in saying that the pursuit was doomed to fail so why bother trying.

I think even a lot of the doomsayers would agree with that. I quoted Jonathan Haidt on the argumentative theory earlier, and he’s one of my favorite writers. I’m pretty sure he’s not an epistemological nihilist. RationalWiki may get a little carried away with stuff like the backfire effect (they gave no notice on their site that other studies have failed to replicate the effect), but evidently they think there’s some benefit to telling people about it. Else, why bother having a wiki at all?

Taken to its extreme, epistemic humility is just as self-defeating as subjectivism. Subjectivism–the idea that truth is ultimately relative–is incoherent because if you say “all truth is relative” you’ve just made an objective claim. That’s the short version. For the longer version, read Thomas Nagel’s The Last Word.

The same goes for all this breathless humans-are-incapable-of-changing-their-minds stuff. Nobody who does all the hard work of researching and writing and teaching can honestly believe that in their bones. At least, not if you think (as I do) that a person’s actions are the best measure of their actual beliefs, rather than their own (unreliable) self-assessments.

Here’s the thing: if you agree with the basic contours of epistemic humility–with most of the cognitive biases and even the argumentative hypothesis–you end up at a place where you think human belief is a reward-based activity like any other. We are not truth-seeking machines that automatically and objectively crunch sensory data to manufacture beliefs that are as true as possible given the input. Instead, we have instrumental beliefs. Beliefs that serve a purpose. A lot of the time that purpose is “make me feel good” as in “rationalize what I want to do already” or “help me fit in with this social clique”.

I know all this stuff, and my reaction is: so what?

So what if human belief is instrumental? Because you know what, you can choose to evaluate your beliefs by things like “does it match the evidence?” or “is it coherent with my other beliefs?” Even if all belief is ultimately instrumental, we still have the freedom to choose to make truth the metric of our beliefs. (Or, since we don’t have access to truth, surrogates like “conformance with evidence” and “logical consistency”.)

Now, this doesn’t make all those cognitive biases just go away. This doesn’t disprove the argumentative theory. Let’s say it’s true. Let’s say we evolved the capacity to reason to make convincing (rather than true) arguments. OK. Again I ask: so what? Who cares why we evolved the capacity, now that we have it we get to decide what to do with it. I’m pretty sure we did not evolve opposable thumbs for the purpose of texting on touch-screen phones. Yet here we are and they seem adequate to the task. 

What I’m saying is this: epistemic humility and the associated body of research tell us that humans don’t have to conform their beliefs to truth, that we are incapable of conforming our beliefs perfectly to truth, and that it’s hard to conform our beliefs even mostly to truth. OK. But nowhere is it written that we can make no progress at all. Nowhere is it written that we cannot try or that–when we try earnestly–we are doomed to make absolutely no headway at all.

I want to be right. And I’m not apologizing for that. 

So how do Internet arguments come into this? One way that we become right–individually and collectively–is by fighting over things. It’s pretty similar to the theory behind our adversarial criminal justice system. Folks who grow up in common law countries (of which the US is one) might not realize that’s not the way all criminal justice systems work. The other major alternative is the inquisitorial system (which is used in countries like France and Italy).

In an inquisitorial system, the court is the one that conducts the investigation. In an adversarial system the court is supposed to be neutral territory where two opposing camps–the prosecution and the defense–lay out their case. That’s where the “adversarial” part comes in: the prosecutors and defenders are the adversaries. In theory, the truth arises from the conflict between the two sides. The court establishes rules of fair play (sharing evidence, not lying) and–within those bounds–the prosecutors’ and defenders’ job is not to present the truest argument but the best argument for their respective side. 

The analogy is not a perfect one, of course. For one thing, we also have a presumption of innocence in the criminal justice system because we’re not evaluating ideas, we’re evaluating people. That presumption of innocence is crucial in a real criminal justice system, but it has no exact analogue in the court of ideas.

For another thing, we have a judge to oversee trials and enforce the rules. There’s no impartial judge when you have a debate with randos on the Internet. This is unfortunate, because it means that if we don’t police ourselves in our debates, then the whole process breaks down. There is no recourse.

When I say I want to be right, what am I saying, in this context? I’m saying that I want to know more at the end of a debate than I did at the start. That’s the goal. 

People like to say you never change anyone’s mind in a debate. What they really mean is that you never reverse someone’s mind in a debate. And, while that’s not literally true, it’s pretty close. It’s really, really rare for someone to go into a single debate as pro-life (or whatever) and come out as pro-choice (or whatever). I have never seen someone make a swing that dramatic in a single debate, and I’ve certainly never made one myself.

But it would be absurd to say that I never “changed my mind” because of the debates I’ve had about abortion. I’ve changed my mind hundreds of times. I’ve abandoned bad arguments and adopted or invented new ones. I’ve learned all kinds of facts about law and history and biology that I didn’t know before. I’ve even changed my position many times. Just because the positions were different variations within the theme of pro-life doesn’t mean I’ve never “changed my mind”. If you expect people to walk in with one big, complex set of ideas that are roughly aligned with a position (pro-life, pro-gun) and then walk out of a single conversation with a whole new set of ideas that are aligned under the opposite position (pro-choice, anti-gun), then you’re setting the bar way too high.

But all of this only works if the folks having the argument follow the rules. And–without a judge to enforce them–that’s hard.

This is where the other kind of wanting to “be right” comes in. One of the most common things I see in a debate (whether I’m having it or not) is that folks want to avoid having to admit they were wrong.

First, let me state emphatically that if you want to avoid admitting you were wrong you don’t actually care about being right in the sense that I mean it. Learning where you are wrong is just about the only way to become right! People who really want to “be right” embrace being wrong every time it happens because those are the stepping stones to truth. Every time you learn a belief or a position you took was wrong, you’re taking a step closer to being right.

But–going back to those folks who want to avoid appearing wrong–they don’t actually want to be right. They just want to appear right. They’re not worried about truth. They’re worried about prestige. Or ego. Or something else.

If you don’t care about being right and you only care about appearing right, then you don’t care about truth either. And these folks are toxic to the whole project of adversarial truth-seeking. Because they break the rules. 

What are the rules? Basic stuff like don’t lie, debate the issue not the person, etc. Maybe I’ll come up with a list. There’s a whole set of behaviors that can make your argument appear stronger while in fact all you’re doing is peeing in the pool for everyone who cares about truth. 

If you care about being right, then you will give your side of the debate your utmost. You’ll present the best evidence, use the tightest arguments, and throw in some rhetorical flourishes for good measure. But if you care about being right, then you will not break the rules to advance your argument (No lying!) and you also won’t just abandon your argument in midstream to switch to a new one that seems more promising. Anyone who does that–who swaps their claims mid-stream whenever they see one that shows a more promising temporary advantage–isn’t actually trying to be right. They’re trying to appear right. 

They’re not having an argument or a debate. They’re fighting for prestige or protecting their ego or doing something else that looks like an argument but isn’t actually one. 

I wrote this partially to vent. Partially to organize my feelings. But also to encourage folks not to give up hope, because if you believe that nobody cares about truth and changing minds is impossible then it becomes a self-fulfilling prophecy.

And you want to know the real danger of relativism and post-modernism and any other truth-averse ideology? Once truth is off the table as the goal, the only thing remaining is power.

As long as people believe in truth, there is a fundamentally cooperative aspect to all arguments. Even if you passionately think someone is wrong, if you both believe in truth then there is a sense in which you’re playing the same game. There are rules. And, more than rules, there’s a common last resort you’re both appealing to. No matter how messy it gets and despite the fact that nobody ever has direct, flawless access to truth, even the bitterest ideological opponents have that shred of common ground: they both think they are right, which means they both think “being right” is something you can, and should, strive for.

But if you set that aside, then you sever the last thread between opponents and become nothing but enemies. If truth is not a viable recourse, all that is left is power. You have to destroy your opponent. Metaphorically at first. Literally if that fails. Nowhere does it say on the packaging of relativism “May lead to animosity and violence”. It’s supposed to do the opposite. It’s advertised as leading to tolerance and non-judgmentalism, but by taking truth off the table it does the opposite.

Humans are going to disagree. That’s inevitable. We will come into conflict. With truth as an option, there is no guarantee that the conflict will be non-violent, but non-violence is always an option. It can even be a conflict that exists in an environment of friendship, respect, and love. It’s possible for people who like and admire each other to have deep disagreements and to discuss them sharply but in a context of that mutual friendship. It’s not easy, but it’s possible. 

Take truth off the table, and that option disappears. This doesn’t mean we go straight from relativism to mutual annihilation, but it does mean the only thing left is radical partisanship where each side views the other as an alien “other”. Maybe that leads to violence, maybe not. But it can’t lead to friendship, love, and unity in the midst of disagreement.

So I’ll say it one more time: I want to be right.

I hope you do, too.

If that’s the case, then there’s a good chance we’ll get into some thundering arguments. We’ll say things we regret and offend each other. Nobody is a perfect, rational machine. Biases don’t go away and ego doesn’t disappear just because we are searching for truth. So we’ll make mistakes and, hopefully, we’ll also apologize and find common ground. We’ll change each other’s minds and teach each other things and grudgingly earn each other’s respect. Maybe we’ll learn to be friends long before we ever agree on anything.

Because if I care about being right and you care about being right, then we already have something deep inside of us that’s the same. And even if we disagree about every single other thing, we’ll always have that.

In Favor of Real Meritocracy

The meritocracy has come in for a lot of criticism recently, basically in the form of two arguments. 

There’s a book by Daniel Markovits called The Meritocracy Trap that basically argues that meritocracy makes everyone miserable and unequal by creating this horrific grind to get into the most elite colleges and then, after you get your elite degree, to grind away working 60-100 hours a week to maintain your position at the top of the corporate hierarchy. 

There was also a very interesting column by Ross Douthat that makes a separate but related point. According to Douthat, the WASP-y elite that dominated American society up until the mid-20th century decided to “dissolve their own aristocracy” in favor of a meritocracy, but the meritocracy didn’t work out as planned because it sucks talent away from small locales (killing off the diverse regional cultures that we used to have) and because:

the meritocratic elite inevitably tends back toward aristocracy, because any definition of “merit” you choose will be easier for the children of these self-segregated meritocrats to achieve.

What Markovits and Douthat both admit without really admitting it is one simple fact: the meritocracy isn’t meritocratic.

Just to be clear, I’ll adopt Wikipedia’s definition of a meritocracy for this post:

Meritocracy is a political system in which economic goods and/or political power are vested in individual people on the basis of talent, effort, and achievement, rather than wealth or social class. Advancement in such a system is based on performance, as measured through examination or demonstrated achievement.

When people talk about meritocracy today, they’re almost always referring to the Ivy League and then–working forward and backward–to the kinds of feeder schools and programs that prepare kids to make it into the Ivy League and the types of high-powered jobs (and the culture surrounding them) that Ivy League students go on to after they graduate. 

My basic point is a pretty simple one: there’s nothing meritocratic about the Ivy League. The old WASP-y elite did not, as Douthat put it, “dissolve”. It just went into hiding. Americans like to pretend that we’re a classless society, but it’s a fiction. We do have class. And the nexus for class in the United States is the Ivy League. 

If Ivy League admission were really meritocratic, it would be based as much as possible on objective admission criteria. This is hard to do, because even when you pick something that is in a sense objective–like SAT scores–you can’t overcome the fact that wealthy parents can and will hire tutors to train their kids to artificially inflate their scores relative to the scores an equally bright, hard-working lower-class student can attain without all the expensive tutoring and practice tests. 

Still, that’s nothing compared to the way that everything else that goes into college admissions–especially the litany of awards, clubs, and activities–tilts the game in favor of kids with parents who (1) know the unspoken rules of the game and (2) have cash to burn playing it. An expression I’ve heard before is that the Ivy League is basically a privilege-laundering racket. It has a facade of being meritocratic, but the game is rigged so that all it really does is perpetuate social class. “Legacy” admissions are just the tip of the iceberg in that regard.

What’s even more outrageous than the fiction of meritocratic admission to the Ivy League (or other elite, private schools) is the equally absurd fiction that students with Ivy League degrees have learned some objectively quantifiable skillset that students from, say, state schools have not. There’s no evidence for this. 

So students from outside the social elite face double discrimination: first, because they don’t have an equal chance to get into the Ivy League and second, because then they can’t compete with Ivy League graduates on the job market. It doesn’t matter how hard you work or how much you learn, your State U degree is never going to stand out on a resume the way Harvard or Yale does.

There’s nothing meritocratic about that. And that’s the point. The Ivy League-based meritocracy is a lie.

So I empathize with criticisms of American meritocracy, but it’s not actually a meritocracy they’re criticizing. It’s a sham meritocracy that is, in fact, just a covert class system. 

The problem is that if we blame the meritocracy and seek to circumvent it, we’re actually going to make things worse. I saw a WaPo headline that said “No one likes the SAT. It’s still the fairest thing about admissions.” And that’s basically what I’m saying: “objective” scores can be gamed, but not nearly as much as the qualitative stuff. If you got rid of the SAT in college admissions you would make it less meritocratic and also less fair. At least with the SAT someone from outside the elite social classes has a chance to compete. Without that? Forget it.

Ideally, we should work to make our system a little more meritocratic by downplaying prestige signals like Ivy League degrees and emphasizing objective measurements more. But we’re never going to eradicate class entirely, and we shouldn’t go to radical measures to attempt it. Pretty soon, the medicine ends up worse than the disease if we go that route. That’s why you end up with absurd, totalitarian arguments that parents shouldn’t read to their children and that having an intact, loving, biological family is cheating. That way lies madness.

We should also stop pretending that our society is fully meritocratic. It’s not. And the denial is perverse. This is where Douthat was right on target:

[E]ven as it restratifies society, the meritocratic order also insists that everything its high-achievers have is justly earned… This spirit discourages inherited responsibility and cultural stewardship; it brushes away the disciplines of duty; it makes the past seem irrelevant, because everyone is supposed to come from the same nowhere and rule based on technique alone. As a consequence, meritocrats are often educated to be bad leaders, and bad people…

Like Douthat, I’m not calling for a return to WASP-y domination. (Also like Douthat, I’d be excluded from that club.) A diverse elite is better than a monocultural elite. But there’s one vital thing that the WASPy elite had going for it that any elite (and there’s always an elite) should reclaim:

the WASPs had at least one clear advantage over their presently-floundering successors: They knew who and what they were.

Some Thoughts on the Tolkien Movie

I could have sworn there was a quick image of a cross in one scene, but I couldn’t find it online. So here’s a generic screen shot from the movie instead.

I saw Tolkien last week, and I really enjoyed it. This was surprising to me, because religion was absolutely essential to J. R. R. Tolkien’s life, to his motivations for inventing Middle Earth and all that went with it, and to the themes and characters of all the works he wrote in Middle Earth. Hollywood, on the other hand, is utterly incapable of handling religion seriously. So, how did Tolkien manage to be a good film anyway?

By basically ignoring religion.

Don’t get me wrong. They do mention that he’s Catholic, depict his relationship with the priest who was his caretaker after his mother died, and talk about the tension when Tolkien–a Catholic–wanted to pursue a relationship with Edith, who wasn’t Catholic.

You might think that’s religion, but it’s not, any more than Romeo and Juliet coming from different houses was about religion. His Catholicism is treated as a kind of immutable faction that he was born into and so is stuck with.

Now, if the movie tried to explain anything deep about Tolkien’s character or his work while omitting religion, it would have failed utterly. It succeeds because–after redacting religion from Tolkien’s life–it also studiously avoids trying to say anything deep about his life or his life’s work.

As far as this film is concerned, all you need to understand how Tolkien’s life led him to create Middle Earth is a series of simplistic and primarily visual references. Tolkien left behind a boyhood home of rolling green hills. That’s the Shire. Once, he saw the shadows of bare tree branches on the ceiling of his childhood room at night. That’s the Ents.

And of course there’s World War I. German flamethrowers attacking a British trench became the Balrog. Shattered and broken human corpses mired in a denuded wasteland reduced to mud and water-filled craters became the Dead Marshes. And a kind of generic sense of impending, invisible doom became a dragon and also Sauron’s all-seeing eye.

As for the most famous aspect of Tolkien’s writing–the fact that he invented entire languages–that’s basically written off as a kind of personal obsession. Some people juggle geese. What are you going to do?

None of this is wrong, and that’s why the movie is so enjoyable. It’s fun to see the visual references, even if they are a bit heavy-handed. The rise-from-ashes, boyhood camaraderie and romantic plotlines are all moving. But for the most part the movie avoids all the really deep stuff and just tells a light, superficial story about Tolkien’s circle of friends growing up. And I’m fine with all of that.

Not everything has to go deep, and a movie is far from the best way to investigate what Tolkien meant by “subcreation” or a “secondary world” and all the theology that goes with that, anyway. Even if Hollywood could do religion. Which, seeing as how they can’t, just makes me grateful that in this film they didn’t try. That saves it from ruin and makes it a perfectly fun movie that every fan of J. R. R. Tolkien should see.

How and Why to Rate Books and Things

Here’s the image that inspired this post:


Now, there’s an awful lot of political catnip in that post, but I’m actually going to ignore it. So, if you want to hate on Captain Marvel or defend Captain Marvel: this is not the post for you. I want to talk about an apolitical disagreement I have with this perspective.

The underlying idea of this argument is that you should rate a movie based on how good or bad it is in some objective, cosmic sense. Or at least based on something other than how you felt about the movie. In this particular case, you should rate the movie based on some political ideal or in such a way as to promote the common good. Or something. No, you shouldn’t. All of these approaches are bad ideas.

That's not how this works

The correct way to rate a movie–or a book, or a restaurant, etc.–is to just give the rating that best reflects how much joy it brought you. That’s it!

Let’s see if I can convince you.

To begin with, I’m not saying that such a thing as objective quality doesn’t exist. I think it probably does. No one can really tell where subjective taste ends and objective quality begins, but I’m pretty sure that “chocolate or vanilla” is a matter of purely personal preference but “gives you food poisoning or does not” is a matter of objective quality.

So I’m not trying to tell you that you should use your subjective reactions because that’s all there is to go on. I think it’s quite possible to watch a movie and think to yourself, “This wasn’t for me because I don’t like period romances (personal taste), but I can recognize that the script, directing, and acting were all excellent (objective quality) so I’m going to give it 5-stars.”

It’s possible. A lot of people even think there’s some ethical obligation to do just that. As though personal preferences and biases were always something to hide and be ashamed of. None of that is true.

The superficial reason I think it’s a bad idea has to do with what I think ratings are for. The purpose of a rating–and by a rating I mean a single, numeric score that you give to a movie or a book, like 8 out of 10 or 5 stars–is to help other people find works that they will enjoy and avoid works that they won’t. Or, since the data can be used that way too, to help people deliberately seek out works that will challenge them–works they might not even like–and maybe pass on a book that would be too familiar. You can do all kinds of things with ratings. But only if the ratings are simple and honest. Only if the ratings encode good data.

The ideal scenario is a bunch of people leaving simple, numeric ratings for a bunch of works. This isn’t Utopia, it’s Goodreads. (Or any of a number of similar sites.) What you can then do is load up your list of works that you’ve liked / disliked / not cared about and find other people out there who have similar tastes. They’ve liked a lot of the books you’ve liked, they’ve disliked a lot of the books you’ve disliked, and they’ve felt meh about a lot of the books you’ve felt meh about. Now, if this person has read a book you haven’t read and they gave it 5-stars: BAM! You’ve quite possibly found your next great read.

You can do this manually yourself. In fact, it’s what all of us instinctively do when we start talking to people about movies. We compare notes. If we have a lot in common, we ask that person for recommendations. It’s what we do in face-to-face interactions. When we use big data sets and machine learning algorithms to automate the process, we call the result a recommender system. (What I’m describing is the collaborative filtering approach as opposed to content-based filtering, which also has its place.)
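If you want to see the bones of the idea, here’s a minimal sketch of user-based collaborative filtering in Python. Everything in it–the readers, the titles, the ratings–is invented for illustration; real recommender systems use much bigger matrices and fancier math, but the logic is the same: find the people whose ratings look like yours and borrow their opinions about the books you haven’t read.

    # Toy user-based collaborative filter. The readers, titles, and ratings
    # below are all made up for illustration.
    from math import sqrt

    ratings = {
        "alice": {"Dune": 5, "Gone Girl": 2, "Hyperion": 5},
        "bob":   {"Dune": 4, "Gone Girl": 1, "Hyperion": 5, "In the Woods": 2},
        "carol": {"Dune": 2, "Gone Girl": 5, "In the Woods": 4},
    }

    def similarity(a, b):
        """Cosine similarity over the books both readers have rated."""
        shared = set(ratings[a]) & set(ratings[b])
        if not shared:
            return 0.0
        dot = sum(ratings[a][t] * ratings[b][t] for t in shared)
        norm_a = sqrt(sum(ratings[a][t] ** 2 for t in shared))
        norm_b = sqrt(sum(ratings[b][t] ** 2 for t in shared))
        return dot / (norm_a * norm_b)

    def recommend(reader):
        """Rank unread books by how highly similar readers rated them."""
        scores = {}
        for other in ratings:
            if other == reader:
                continue
            w = similarity(reader, other)
            for title, stars in ratings[other].items():
                if title not in ratings[reader]:
                    scores[title] = scores.get(title, 0.0) + w * stars
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("alice"))  # books Alice hasn't read, best guesses first

That’s all a recommender system really is underneath: note-comparing at scale. But notice that the whole thing only works if the numbers in that ratings table are honest reports of how much each reader enjoyed each book.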

This matters a lot to me for the simple reason that I don’t like much of what I read. So, it’s kind of a topic that’s near and dear to my heart. 5-star books are rare for me. Most of what I read is probably 3-stars. A lot of it is 1-star or 2-star. In a sea of entertainment, I’m thirsty. I don’t have any show that I enjoy watching right now. I’m reading a few really solid series, but they come out at a rate of 1 or 2 books a year, and I read more like 120 books a year. The promise of really deep collaborative filtering is really appealing if it means I can find more of the books I’d actually love.

But if you try to be a good citizen and rate books based on what you think their objective quality is, the whole system breaks down.

Imagine a bunch of sci-fi fans and a bunch of mystery fans that each read a mix of both genres. The sci-fi fans enjoy the sci-fi books more (and the mystery fans enjoy the mystery books more), but they try to be objective in their ratings. The result of this is that the two groups disappear from the data. You can no longer go in and find the group that aligns with your interests and then weight their recommendations more heavily. Instead of having one clear population that gives high marks to the sci-fi stuff and another that gives high marks to the mystery stuff, you just have one amorphous group that gives high (or maybe medium) marks to everything.

How is this helpful? It is not. Not as much as it could be, anyway.

In theoretical terms, you have to understand that your subjective reaction to a work is complex. It incorporates the objective quality of the work, your subjective taste, and then an entire universe of random chance. Maybe you were angry going into the theater, and so the comedy didn’t work for you the way it would normally have worked. Maybe you just found out you got a raise, and everything was ten times funnier than it might otherwise have been. This is statistical noise, but it’s unbiased noise. This means that it basically goes away if you have a large enough sample.
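Here’s a toy way to see that, with numbers invented on the spot: pretend a book has some “true” quality and every reader’s rating is that quality plus a mood swing that averages out to zero. With a handful of raters the average bounces around; with thousands it settles right back onto the true value.

    # Toy demonstration that unbiased noise washes out as raters pile up.
    # The "true quality" of 4.0 and the mood spread are invented for illustration.
    import random

    random.seed(1)
    true_quality = 4.0

    def one_rating():
        mood = random.gauss(0, 1.0)  # good day / bad day; averages to zero
        return true_quality + mood

    for n in (10, 100, 10_000):
        avg = sum(one_rating() for _ in range(n)) / n
        print(f"{n:>6} raters -> average {avg:.2f}")

Biased error is different: if everyone nudges their number up or down for reasons that have nothing to do with their enjoyment, no amount of averaging will wash it out.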

On the other hand, if you try to fish out the objective components of a work from the stew of subjective and circumstantial components, you’re almost guaranteed to get it wrong. You don’t know yourself very well. You don’t know for yourself where your objective assessment ends and your subjective taste begins. You don’t know for yourself what unconscious factors were at play when you read that book at that time of your life. You can’t disentangle the objective from the subjective, and if you try you’re just going to end up introducing biased error into the equation. (In the Captain Marvel example above, you’re explicitly introducing political assessments into your judgment of the movie. That’s silly, regardless of whether your politics make you inclined to like it or hate it.)

What does this all mean? It means that it’s not important to rate things objectively (you can’t, and you’ll just mess it up), but it is helpful to rate things frequently. The more people we have rating things in a way that can be sorted and organized, the more use everyone can get from those ratings. In this sense, ratings have positive externalities.

Now, some caveats:

Ratings vs. Reviews

A rating (in my terminology; I don’t claim this is the Absolute True Definition) is a single, numeric score. A review is a mini-essay where you get to explain your rating. The review is the place where you should try to disentangle the objective from the subjective. You’ll still fail, of course, but (1) it won’t dirty the data and (2) your failure to be objective can still be interesting and even illuminating. Reviews–the poor man’s version of criticism–are a different beast and play by different rules.

So: don’t think hard about your ratings. Just give a number and move on.

Do think hard about your reviews (if you have time!). Make them thoughtful and introspective and personal.

Misuse of the Data

There is a peril to everyone giving simplistic ratings, which is that publishers (movie studios, book publishers, whatever) will be tempted to try and reverse-engineer guaranteed money makers.

Yeah, that’s a problem, but it’s not like they’re not doing that anyway. The reason that movie studios keep making sequels, reboots, and remakes is that they are already over-relying on ratings. But they don’t rely on Goodreads or Rotten Tomatoes. They rely on money.

This is imperfect, too, given the different timing of digital vs. physical media channels and so on, but the point is that adding your honest ratings to Goodreads isn’t going to make traditional publishing any more likely to try and republish last year’s cult hit. They’re going to do that anyway, and they already have better data (for their purposes) than you can give them.

Ratings vs. Journalism

My advice applies to entertainment. I’m not saying that you should rate everything in every domain without worrying about objectivity. This should go without saying but, just in case, I said it.

You shouldn’t apply this reasoning to journalism because one vital function of journalism for society is to provide a common pool of facts that everyone can then debate about. One reason our society is so sadly warped and full of hatred is that we’ve lost that kind of journalism.

Of course, it’s probably impossible to be perfectly objective. The term is meaningless. Human beings do not passively receive input from our senses. Every aspect of learning–from decoding sounds into speech to the way vision works–is an active endeavor that depends on biases and assumptions.

When we say we want journalists to be objective, what we really mean is that (1) we want them to stick to objectively verifiable facts (or at least not do violence to them) and (2) we would like them to embody, insofar as possible, the common biases of the society they’re reporting to. There was a time when we, as Americans, knew that we had certain values in common. I believe that for the most part we still do. We’re suckers for underdogs, we value individualism, we revere hard work, and we are optimistic and energetic. A journalistic establishment that embraces those values is probably one that will serve us well (although I haven’t thought about it that hard, and it still has to follow rule #1 about getting the facts right). That’s bias, but it’s a bias that is positive: a bias towards truth, justice, and the American way.

What we can’t afford, but we unfortunately have to live with, is journalism that takes sides within the boundaries of our society.

Strategic Voting

There are some places other than entertainment where this logic does hold, however, and one of them is voting. One of the problems with American voting is that we use majority-take-all voting, which is like the horse-and-buggy era of voting technology. Majority-take-all voting is arguably even worse for us than the two-party system it produces, because it encourages strategic voting.

Just like rating Captain Marvel higher or lower because your politics make you want it to succeed or fail, strategic voting is where you vote for the candidate that you think can win rather than the candidate that you actually like the most.

There are alternatives that (mostly) eliminate this problem, the most well-known of which is instant-runoff voting. Instead of voting for just one candidate, you rank the candidates in the order that you prefer them. This means that you can vote for your favorite candidate first even if he or she is a longshot. If they don’t win, no problem. Your vote isn’t thrown away. In essence, it’s automatically moved to your second-favorite candidate. You don’t actually need to have multiple run-off elections. You just vote once with your full list of preferences and then it’s as if you were having a bunch of runoffs.
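If the mechanics sound fiddly, they aren’t. Here’s a bare-bones sketch of the counting in Python, with made-up candidates and ballots; each ballot is just one voter’s ranking, favorite first, and the last-place candidate gets eliminated over and over until someone has a majority.

    # Minimal instant-runoff count. The candidates and ballots are invented.
    from collections import Counter

    def instant_runoff(ballots):
        """Each ballot is a list of candidates, favorite first."""
        candidates = {c for ballot in ballots for c in ballot}
        while True:
            # Count each ballot for its highest-ranked surviving candidate.
            tally = Counter(
                next(c for c in ballot if c in candidates)
                for ballot in ballots
                if any(c in candidates for c in ballot)
            )
            leader, votes = tally.most_common(1)[0]
            if votes * 2 > sum(tally.values()):
                return leader  # majority of the remaining ballots
            candidates.remove(min(tally, key=tally.get))  # drop last place, recount

    ballots = (
        [["Frontrunner", "Compromise", "Longshot"]] * 4
        + [["Compromise", "Frontrunner", "Longshot"]] * 3
        + [["Longshot", "Compromise", "Frontrunner"]] * 2
    )
    # Longshot is eliminated first; those two ballots transfer to Compromise,
    # who then holds 5 of 9 votes and wins.
    print(instant_runoff(ballots))

A real implementation has to handle ties and exhausted ballots more carefully, but that’s the whole trick: nobody’s honest first choice is a wasted vote.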

There are other important reasons why I think it’s better to vote for simple, subjective evaluations of the state of the country instead of trying to figure out who has the best policy choices, but I’ll leave that discussion for another day.

Limitations

The idea of simple, subjective ratings is not a cure-all. As I noted above, it’s not appropriate for all scenarios (like journalism). It’s also not infinitely powerful. The more people you have and the more things they rate (especially when lots of diverse people are rating the same thing), the better. If you have 1,000 people, maybe you can detect who likes what genre. If you have 10,000 people, maybe you can also detect sub-genres. If you have 100,000 people, maybe you can detect sub-genres and other characteristics, like literary style.

But no matter how many people you have, you’re never going to be able to pick up every possible relevant factor in the data because there are too many and we don’t even know what they are. And, even if you could, that still wouldn’t make predictions perfect because people are weird. Our tastes aren’t just a list of items (spaceships: yes, dragons: no). They are interactive. You might really like spaceships in the context of gritty action movies and hate spaceships in your romance movies. And you might be the only person with that tic. (OK, that tic would probably be pretty common, but you can think of others that are less so.)

This is a feature, not a bug. If it were possible to build a perfect recommendation engine, it would also be possible to build (at least in theory) an algorithm to generate optimal content. I can’t think of anything more hideous or dystopian. At least, not as far as artistic content goes.

I’d like a better set of data because I know that there are an awful lot of books out there right now that I would love to read. And I can’t find them. I’d like better guidance.

But I wouldn’t ever want to turn over my reading entirely to a prediction algorithm, no matter how good it is. Or at least, not a deterministic one. I prefer my search algorithms to have some randomness built in, like simulated annealing.

I’d say about 1/3rd of what I read is fiction I expect to like, about 1/3rd is non-fiction I expect to like, and 1/3rd is random stuff. That random stuff is so important. It helps me find stuff that no prediction algorithm could ever help me find.

It also helps the system overall, because it means I’m not trapped in a little clique with other people who are all reading the same books. Reading outside your comfort zone–and rating what you read there–is a way to build bridges between fandoms.

So, yeah. This approach is limited. And that’s OK. The solution is to periodically shake things up a bit. So those are my rules: read a lot, rate everything you read as simply and subjectively as you can, and make sure that you’re reading some random stuff every now and then to keep yourself out of a rut and to build bridges to people with different tastes than your own.

Google, the Gender Pay Gap, and Markets

So you’ve probably seen this article making the rounds: Google Finds It’s Underpaying Many Men as It Addresses Wage Equity. It’s not hard to see why. The idea that a socially-aware megacorp tried to equalize women’s pay and ended up handing out raises to men is not only intrinsically funny, but offers a dose of schadenfreude for all the folks who still think James Damore was fundamentally right about the tech giants’ ideological echo chamber. Fair enough. But I want to talk about something different: the real reason I’m deeply skeptical of the whole idea of a gender pay gap.

The first thing to realize is that the entire concept of a pay gap is actually philosophically tricky to define. From the NYT article:

When Google conducted a study recently to determine whether the company was underpaying women and members of minority groups, it found, to the surprise of just about everyone, that men were paid less money than women for doing similar work.

OK, but how does Google define “similar work”? Probably–I’m guessing, but a guess is good enough in this case–by looking at stuff like job title. Do you think everyone who works at your company with the same job title as you is working as hard / getting as much done as you do? No? Then this isn’t a very good basis for assessing “similar work,” is it?

In fact, the problem is really bad because–even if a company paid men and women with the same job title equally (in this case, Google appears to have paid women more)–it could still discriminate at an earlier stage in the process. Thus (another quote from the NYT article):

Critics said the results of the pay study could give a false impression. Company officials acknowledged that it did not address whether women were hired at a lower pay grade than men with similar qualifications.

In other words, maybe Google pays senior developers the same (or even pays female senior developers more), but at the same time it also stacks the deck against new hires so that female applicants are more likely to get hired as regular developers and then men are more likely to get hired as senior developers. In that case, it could be true that Google is biased towards paying women more within one job title, but also that it’s biased towards paying women less overall.

Not so simple, eh?

Now, I don’t actually know if Google used job title to define “similar work,” and I’ll make the bold claim that I don’t really care if they did or not. The reason is that there is no good way to measure how much work a person does. If they used job title, then that’s a bad proxy. But if they used something else, then I am confident that they used another bad proxy. Because there’s absolutely no practical way that Google could have spent the time and resources required to actually assess all of their workers. There’s a name for this in economics, for the idea that it’s basically impossible to measure how much work an employee is doing. It’s called the principal-agent problem.

And, believe it or not, that’s actually the easy part. Even if you could accurately, easily, and cheaply quantify how much work your employees do (you can’t), there’s still no accepted methodology for assessing how much value that work contributed to the company. If you’re the sales guy who closes a deal that earns your company $1,000,000 in revenue, you might think the answer is simple: your effort just got the company a cool million. But you didn’t do that alone. You were selling a product that you didn’t make, for one thing. So the designers, the marketing guys, and the folks on the assembly line building the widgets all need a cut. How do you attribute the value you made–$1,000,000–among all the complex, networked, interconnected contributors? Good luck with that.

So far, all I’ve really said is that trying to detect a wage gap is going to be really, really hard because assessing “similar work” is basically impossible. But there’s good news! If you understand the way markets work, you will understand that you have very, very good reason to be skeptical that men and women are really being paid different amounts for similar work.

Now, before I explain this, let me just point out that there are a lot of people who will tell you that economic models of markets are over-simplified, flawed, and misleading. They’re right, but those criticisms don’t really apply here. There’s this whole controversial literature over concepts like the efficient market hypothesis that, luckily, we don’t need to get into here and now. In a nutshell, economists like to pretend (for the sake of tractable theories) that humans are perfectly rational and statistical geniuses who take all possible information into account when making purchasing decisions. If that were true, then things like market bubbles would (probably) not be possible. (It depends on the specifics of your model.) So let me just say: yeah, I concede all that. Precise, mathematical models of markets are basically all wrong. We can quibble about whether they are “perpetual motion machine”-wrong or just “spherical chicken”-wrong, but whatever.

Here’s the point: in a market (even a fairly messed-up, realistic one) you’ve got a lot of companies who are all competing. Although there’s a lot going on, one vital way that one of these companies can get a leg up over its competitors is if it finds a way to offer the same good or the same service for less cost. This isn’t rocket science, this is really, really obvious. If company A and company B are both selling more or less interchangeable widgets, but company A can make them for $1.00 / each and company B can make them for $0.90 / each, then company B has a huge advantage.

So here’s the thing: if there were any real indication that you could hire a woman, pay her 70% of what you pay a man, and get “similar work”, then what you’re saying is that there’s an easy, obvious way to go out there and make your widgets for $0.70 when everyone else has to pay $1.00 to make theirs.

We don’t need to take any derivatives here. We don’t need advanced theory. We don’t need to assume that human beings are perfectly rational, hyper-calculating machines. We just have to assume that companies generally want to find ways to reduce the cost of the goods and/or services they sell. If that humble, uncontroversial assumption is true, then any perceptible evidence of a real gender pay gap would immediately be identified and exploited by the market.

If anyone could find a real gender pay gap, it would be the mother of all arbitrage opportunities. And look, folks, if there’s one thing that every red-blooded capitalist wants to find, it’s an arbitrage opportunity. This isn’t hypothetical, by the way. You look at an industry like currency trading, and companies invest huge amounts of money hiring geniuses, buying them super-computers, and paying for access to network cables that give them millisecond advantages so that they can find and identify arbitrage opportunities before the market erases them.

Because that’s what markets do. They look for chances to make free money and then they exploit them until they disappear. If you find out that you can trade your dollars for yen, your yen for rubles, your rubles for pesos, and then your pesos back to dollars and end up with more than you started with: that’s arbitrage. And you will immediately pump as much money as you can into running through that cycle. As a result, the prices will go up and the arbitrage opportunity will close. This is what markets do.
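To put rough numbers on it, here’s a back-of-the-envelope check in Python. Every exchange rate in it is invented; the only point is that if the product of the rates around the loop comes out above 1, the loop is free money, and traders will pile into it until the rates move and it isn’t.

    # Toy arbitrage-loop check. Every exchange rate here is invented.
    rates = {
        ("USD", "JPY"): 150.0,
        ("JPY", "RUB"): 0.62,
        ("RUB", "MXN"): 0.19,
        ("MXN", "USD"): 0.059,
    }

    def cycle_multiplier(path):
        """Dollars you end up with per dollar sent around the loop."""
        amount = 1.0
        for leg in zip(path, path[1:]):
            amount *= rates[leg]
        return amount

    loop = ["USD", "JPY", "RUB", "MXN", "USD"]
    multiplier = cycle_multiplier(loop)
    print(f"$1.00 around the loop -> ${multiplier:.4f}")
    print("arbitrage!" if multiplier > 1 else "no free lunch")

In real currency markets a gap like that would be a fraction of a penny and gone in milliseconds, which is exactly the point: the market finds it and closes it.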

And so if there is a way out there to hire women to do men’s work for 70% (or whatever) of their pay, companies would do that instantly. And the result? Well, the first company would offer women $0.70 on the dollar, but then a competitor would offer them $0.71, and then another competitor would offer them $0.72… and pretty soon no more arbitrage.

So what’s my point?

Trying to find out if there actually is a real wage gap is very, very hard because measuring “similar work” is difficult. But, if there is ever a whiff of a reliable, objective, solid gender pay gap it will disappear as quickly as it is spotted as the market rushes to exploit the arbitrage opportunity.

Here’s what it all comes down to: if you believe in the gender pay gap, you believe that a bunch of cold-blooded, selfish capitalists are staring at a pile of money left on the table, and not one of them is trying to get their hands on it. This isn’t a completely open-and-shut case, but it’s a very, very strongly suggestive argument that capitalism and wage inequality–of any kind: gender-based, race-based, sexual orientation-based, etc.–are fundamentally incompatible in the long run. It doesn’t mean that we shouldn’t have laws against discrimination, because individual business owners might make stupid, bigoted decisions and we might decide not to wait around to let the market fix them. But it does mean that the idea of a real, persistent, ongoing gender pay gap is like UFOs or Bigfoot or–even rarer than anything else–a free lunch.

It’s just probably not there.

Thoughts on Patriarchy Chicken

A friend of mine posted an article on Facebook about a fun commuting game: patriarchy chicken. The idea is that you (a woman, of course) go about your commute as you ordinarily would do with one exception: you stop giving way to men. Because, you see,

Men have been socialised, for their entire lives, to take up space. Men who would never express these thoughts out loud have nevertheless been brought up to believe that their right to occupy space takes precedent over anyone else’s right to be there. They spread their legs on tubes and trains, they bellow across coffee shops and guffaw in pubs, and they never, ever give way.

New Statesman

The more I thought about this claim, the less sense it made. The article is written from the perspective of a Londoner, and so right away you have to ask: what’s the extent of this space-hogging socialization? Is it everywhere? Just the West? Just the Anglosphere? I mean, if this is really a thing, then let’s take this seriously, right? Where are the cross-national studies? (Really, if you have any, send them my way.) It’s just odd that these new terms–mansplaining, manspreading, etc.–crop up and then become part of the accepted wisdom with basically no analysis at all. Poof! They’re part of our (socially-constructed) reality.

Here’s the thing, though: I was certainly not socialized “to take up space”. As a man (that gives me some insight into how men are socialized, right?), my socialization included at least a couple of points contrary to the “take up space” model.

  1. Never intimidate. Men are not only generally physically stronger, but we (as a sex) are also overwhelmingly responsible for basically all violent crime. Which means that, as a man, you can intimidate women without even realizing that you’re doing it. As a result, men (me, for one) have to constantly monitor their physical proximity relative to other people to ensure that a woman never feels in any way threatened. And look, there’s no way to go into detail on this without sounding at least a little crazy, but what I’m about to describe are largely unconscious rules that a lot of men (like me) follow every day. We’ll use elevator etiquette as our basic example. If you find yourself riding alone in the elevator with a woman you don’t know or have just met (at a work conference, for example) you don’t stand too close, don’t stand between her and the door, and don’t stand between her and the buttons. You allow for brief eye contact and a casual smile / head nod initially to show that you’re socialized and non-threatening, but then you generally leave her alone. If possible, you select your floor first (especially if it’s a hotel elevator) because if you happen to be on the same floor you don’t want to give the impression that you’re following her. Also, if you do end up going to the same floor, you ensure adequate distance so that she has her own personal space. These rules aren’t hard and fast. They’re just part of the everyday, ongoing monitoring that many men do to ensure that they don’t accidentally come across as threatening to anyone around them.
  2. Always serve. This is trickier now than it used to be. If you go overboard trying to play-act like a 15th-century knight, you’re just going to annoy people and make them uncomfortable, which isn’t truly gentlemanly. The default rule is to be polite to everyone, with ties always going to the woman. In other words–as it applies to patriarchy chicken–you always give way to a woman.

That’s how I was socialized. It’s not that I’m just ignorant of the “take up space” socialization; I was raised–in many ways–in the opposite school. And look, I’m not alone here. Most of the men I call friends act the same way. We don’t have to talk about it. We know. Because we apply the rules not only to ourselves individually, but also to ourselves in a group. If one man can be intimidating by accident, a group of two or three men has to be even more careful to avoid making anyone else uncomfortable.

Again, we don’t talk about it. It’s just a basic social rule that all guys know. Like the rule that you never, ever use a urinal adjacent to someone who’s already peeing if there are other free spots available. Never in my life growing up was I told that. It’s just basic man-code. So is giving way to women. Interested in more stuff like this? Check out the Art of Manliness website.

Alright, so if I–and a lot of men like me–have been socialized not to get in a woman’s way or expect her to move for us, then what gives? Is Charlotte Riley (who wrote the New Statesman article) lying? Hallucinating? No, I’m sure she’s not.

Here’s the thing: if every man out there expected women to get out of their way, then patriarchy chicken wouldn’t produce sporadic run-ins; it would produce a never-ending chain reaction of collisions. All it really takes is a small percentage of men–say, 5%, off the top of my head–who expect women to get out of their way for the behavior to be really, really noticeable. And here’s the thing: such men exist. We call them jerks. (If we’re being polite.) And they probably don’t see themselves as men who expect women to defer to them spatially. They see themselves as important people who expect everyone else to get out of their way.

In my experience, there are basically zero social justice concerns that can’t be reformulated without the political lens and be just as valid. This is just another example of that. Instead of unsubstantiated conspiracy theories about male socialization that strain credulity, why not go with the simpler approach: some people are jerks?

And, hey, look: if you want to play “jerk chicken” (which sounds delicious) instead of patriarchy chicken, great! Go for it. I’m not telling Riley–or anyone else–to do anything any differently. You be you. I just think it’s kind of sad that, left to their own devices, people seem so eager to adopt what are basically the social science version of conspiracy theories. It’s like choosing to live your life in as dark and depressing a light as possible. Yeah, you can go around thinking that all (most?) men secretly hate you and want to oppress you… but, in the absence of really strong data, why would you want to? It just seems sad.

That’s how most conspiracies work, though. They are fundamentally un-empowering. Nobody is empowered by the idea that aliens can swoop down in a UFO whenever they want, kidnap them, probe them, and then release them to a world that treats the story with derision. That’s not empowering! Nobody is empowered by the idea that we’re all just pawns of mysterious forces like the Illuminati. Conspiracy theories are basically an exercise in cashing in real control (agency over your actions and attitude and beliefs) for fake control (made-up explanations that remove the uncertainty and ambiguity of life). This trade-off doesn’t make a lot of sense when you put it in those terms, but that’s really what’s going on with conspiracy theories. People would rather be impotent in a world that makes sense than potent in a world that doesn’t.

The whole “Men have been socialised, for their entire lives, to take up space.” thing is not exactly the same, but it’s pretty close. Which makes it understandable, but still sad.

Radical Ideology as Stupidity Enabler

I just finished reading A River in Darkness, the autobiography of a Korean-Japanese man who escaped from North Korea. It’s a tragic and engrossing read, but one detail that stuck out was the way that North Korean bureaucrats forced North Korean farmers to grow rice in ways that even a kid from urban Japan who had never studied agriculture knew were incorrect. (Basically, they planted the rice much too close together.)

It reminded me of another book, The Three-Body Problem, which depicted some of the real-life events of the Cultural Revolution in China, in particular “struggle sessions” in which Chinese professors were publicly humiliated and tortured by a mob in part for refusing to recant scientific principles that had been deemed incompatible with political doctrine.

Panchen Lama during the struggle session in 1964 (Wikipedia)

The communists in North Korea and China were in good company. Matt Ridley, in The Origins of Virtue, recounts the Soviet Union’s own peculiar war on science:

Trofim Lysenko argued, and those who gainsaid him were shot, that wheat could be made more frost-hardy not by selection but by experience. Millions died hungry to prove him wrong. The inheritance of acquired characteristics remained in official doctrine of Soviet biology until 1964.

So in North Korea they insisted on disregarding ancient agricultural knowledge because the Party knew best, up to and including triggering massive starvation. In China they executed, exiled, and fired an entire generation of trained scientists because the Party knew best. And in the Soviet Union they insisted on trying to create frost-resistant wheat by freezing the seeds first and created even more massive starvation. Genetics, quantum mechanics, and common sense: why did the Party think they knew so much?

Let me tell you what got me thinking about this. A friend of mine posted a link to this article from Duke University’s The Chronicle detailing that a graduate program director who urged foreign students studying at Duke to speak English has been forced to step down as a result of her advice. Now, I don’t have enough information about the outrage du jour to have a strong opinion about it. As a matter of basic ethics and common sense, it’s rude and counterproductive to go to a foreign country to study and work and then hang around other people speaking your own language instead of adopting the language of the country you’ve moved to. Of course there are exceptions and I don’t generally think it’s a good idea to enforce every aspect of etiquette and common sense with formal policies, but that’s not really the point. I don’t want to take a strong position on the Duke case because I don’t know or care that much about it.

On the other hand, my friend who posted the article knew everything there is to know about it. I will not quote from the post (it was not shared publicly), but she interpreted everything through the standard lens of racism / colonialism / privilege / etc., and as a result she had zero doubts about anything. She spoke with absolute confidence and black-and-white judgment. Then all of her like-minded friends piled on, congratulating her. She knew and they knew that there was one and only one explanation, one and only one answer, and that it was obvious.

I tried to engage in some discussion, leading with a simple question: have you ever lived in a foreign country and did you insist on speaking your language there? Do you even speak a foreign language? She hasn’t, so she couldn’t, and she doesn’t. (I have, I could but I did not, and I do.) Instead of considering that her view might be wrong, however, she just called for another friend to come in because they were a specialist in linguistic imperialism. So, as far as I know, this friend also has zero relevant experience but has a bigger ideological toolbox to whack people over the head with. Other commenters–even when they were polite–were just as clueless, sharing stories about growing up in bilingual homes or teaching English as a second language at the elementary school level. What do either of these things–interesting as they may be in themselves–have to do with speaking English in a graduate program? Not a single thing.

There are two things going on.

First, radical ideologies are incredibly dangerous things because they enable stupidity on a massive scale. People embrace radical ideologies because they are powerful explanation-machines. Life confronts all of us with ambiguity, complexity, and uncertainty. Also, disappointment and difficulty. Radical ideologies are a perfect antidote to the ambiguity, complexity, and uncertainty. They are, functionally speaking, fulfilling the same role that conspiracy theories do. They don’t improve your life, they aren’t meaningfully accurate, but they make your life explicable. They turn all of the randomness into order. This doesn’t actually make your situation objectively better, but it makes it feel better.

This can be relatively harmless. Radical ideology, conspiracy theories, and superstition have harmless manifestations where they don’t really do anything except waste time in exchange for a false feeling of control. Sure, you might be throwing away money to get your palm read, but it’s not really hurting anyone, right?

Sure, but things get dicier when your kooky explanation-machine happens to target, say, vaccines. Or all of modern psychiatry. Or, heck, modern medicine from start to finish. Even in these cases, the damage is limited to mostly yourself and, in particularly tragic cases, maybe your kids.

But when the explanation-machine that you’ve adopted is a political ideology, we go through a kind of phase-change and things get much, much worse.

Unlike micro explanation machines–superstition and conspiracy theories, for example–political ideologies are macro explanation machines. They have two functions. The first is the same as micro explanation machines: to quickly and easily make your life experiences intelligible. But they don’t stop there. They have a second function, and that function is to accumulate power. And that’s where things go off the rails and we get industrial-scale stupidity enabling.

To illustrate this, we have to understand why it was that Marxists in North Korea planted rice too close together, or Marxists in China executed physicists, or Marxists in the USSR kept using pseudo-science to try and grow frost-resistant wheat. You see, it wasn’t just some kind of weird accident that happened to be harmful, in the way that some people cling to harmless conspiracy theories like Bigfoot and others cling to harmful ones like the anti-vax crowd. Nope, the Marxists in North Korea, China, and the USSR were following a script laid down intentionally and inevitably by Lenin and Stalin.

Here’s philosopher Steven L. Goldman’s recounting:

This imperialism of the scientific world view—that there is such an imperialism—has a kind of, let’s call it, acute support that one doesn’t ordinarily encounter from an odd quarter, and that is from V.I. Lenin and Joseph Stalin. Before he was preoccupied with becoming the head of the government of the Union of Soviet Socialist Republics, Lenin wrote a book called Materialism and Empirio-Criticism in which he harshly criticized Ernst Mach’s philosophy of science, and other philosophies of science influenced by Mach, that denied that the object of scientific knowledge was reality—that denied that scientific knowledge was knowledge of what is real and what is true.

Lenin strongly defended a traditional—not a revolutionary—but a traditional conception of scientific knowledge, because otherwise Marxism itself becomes merely convention. In order to protect the truth of Marxist philosophy of history and of society—in order to protect the idea that Marxist scientific materialism is “True” with a capital “T,” Lenin attacked these phenomenalistic theories, these conventionalistic theories—that we have seen defended by not just Mach, but also by Pierre Duhem, Heinrich Hertz, Henri Poincare, at about the same time that Lenin was writing Materialism and Empirio-Criticism.

Stalin in the 1930s made it clear that the theory of relativity and quantum theory, with its probability distributions as descriptions of nature—”merely” probabilistic descriptions of nature—”merely” I always say in quotation marks—that these were unacceptable in a Communist environment. Again, this is for the same reasons, because Marxist scientific materialism must be true. So, scientific theories that settle for probabilities and that are relative, are misunderstanding that special and general theories of relativity are in fact absolute and deterministic theories.


The willful stupidity of Marxist-Leninist ideology is not an accidental byproduct. It is a direct consequence of the fact that radical political ideologies are not content to be one explanation-machine among many but–as organized political movements in a battle for power–have to fight to be the explanation machine. This leads directly towards conflict between Marxist-Leninist ideology and any other contender, including both science and religion.

When these macro explanation machines aren’t killing millions of people, the absurdity can be hilarious. Here’s Goldman again:

A curious thing happened, namely that Russian physicists of the 1930s, 1940s and even into the 1950s, in books that they published on relativity and quantum theory, had to have a preface in which they made it clear that this book was not about reality—that these theories were not true, but they were very interesting and useful. It was okay to explore, but of course they’re not true because if they were, they would contradict Marxist scientific materialism.

This is quite funny because back in the 16th century, when Copernicus’ On the Revolution of the Heavenly Spheres was published in 1543, it was accompanied—unbeknownst to Copernicus, who was dying at the time—that the man who saw it through publication was a Protestant named Andreas Osiander—who stuck in a preface in order to protect Copernicus, because he knew that the Church would be upset if this theory of the heavens were taken literally. We know Galileo was in trouble for that. We talked a lot about that. So Osiander stuck in a preface saying, “You know, I don’t mean that the Earth really moves, but if you assume that it does, then look how nice and less complicated astronomy is.”

Now, I’m a religious person. I don’t think there’s any unavoidable conflict between religion and science. But when religion becomes a political ideology–as it was in the days of Copernicus and Galileo–then it is functionally equivalent to any other macro explanation machine (like Marxist-Leninism) and you will get the same absurd results (and, more often than not, the same horrific death tolls).

So here’s what I’ve learned. Human evil is never dangerous when it’s obvious. All of the great evils that we recognize today–fascism, slavery, Marxist-Leninism–were attractive in their day. And not to cackling, sinister villains rubbing their hands together with glee at the thought of inflicting evil misery on the world. Ordinary people thought that each of these monstrous evils was reasonable and, in many cases, even preferable.

If you roll that logic forward, it implies that the greatest evils of our time will be non-obvious. The movement that, 40 or 50 years from now, we will revile and disavow is a movement that seems respectable and even attractive to many decent and intelligent people today. It is a macro explanation engine that appeals to people individually because it brings order to their personal narratives and–because it is functioning in the political realm–it is a macro explanation engine that will seek to crowd out all competitors and will therefore be hostile not only to alternative political ideologies but also to micro explanation engines that function in totally disparate realms like religion and science.

And, precisely because it seeks to undermine all other explanation engines even when operating in domains where it has zero utility or applicability, it will be most easily recognized in one way: as a massive enabler of stupidity.

Because that’s what happens when you have a mighty hammer. You start to see nothing but nails.

Gillette, Culture, and Class

I didn’t really want to write about the infamous Gillette #MeToo advertisement, because I didn’t really care about it. I still don’t, personally. But then a Glenn Reynolds piece at USA Today showed me the controversy in a new light.

In his article, Reynolds made the observation that:

… in America class warfare is usually disguised as cultural warfare. But underneath the surface, talk is a battle between the New Class and what used to be the middle class.

This is definitely something that I’ve noticed. It was one of the things I learned when I did the research for the most frequently-read post here at Difficult Run: When Social Justice Isn’t about Justice. And it totally fits with the fallout I’ve seen on my Facebook feed. Pretty much every single person I’ve seen angry or opposed to the ad comes from a blue-collar and/or rural background. Why are they mad? Not because they object to the message of the ad per se (who objects to “hey, stop bullying”?) but because they know they are being talked down to, patronized, and scolded. And they’re right.

And all the folks I’ve seen making fun of them and mocking anyone who has a problem with the ad? College educated, often with a graduate degree, and frequently working as professional intellectuals. They see it as a culture war issue instead of a class war issue because that’s one of the most important functions of social justice activism: to cloak class interest in progressive ideals.

Also, the silliness of anti-capitalists celebrating ad campaigns, no matter how superficially idealistic, is pretty amusing.

But, while we’re admiring the nimble messaging of capitalism, here’s a message that might actually contribute to men being good men.