Tell Your Children When You’re Wrong

Man kisses baby he's holding. Photo by @kellysikkema via Unsplash.

In one of the most poignant scenes from The Adam Project, Ryan Reynolds's character meets Jennifer Garner's character by chance in a bar. Garner is Reynolds's mom, but she doesn't recognize him because he has time-traveled into the past. He overhears her talking to the bartender about the difficulties she's having with her son (Reynolds's younger self) in the wake of her husband's death. Reynolds interjects to comfort his mom (which is tricky, since he's a total stranger, as far as she knows). If this is hard to follow, just watch the clip. It's less than three minutes long.

The line that really stuck with me from this was Reynolds telling Garner, “the problem with acting like you have it all together is he believes it.”

When I talked about this scene with Liz and Carl and Ben for a recent episode of Pop Culture on the Apricot Tree, I realized that there’s an important connection between this line and apologizing to our kids. And it’s probably not the reason you’re thinking.

The reason you're probably thinking of has to do with demonstrating to our kids how they should react to making a bad choice. This is also a good reason to apologize to your children. Admitting mistakes is hard. Guilt doesn't feel good, and it takes practice to handle it positively. You don't want to allow healthy guilt to turn into unhealthy shame (going from "I did bad" to "I am bad"). And you don't want to allow the pain of guilt to cause you to lash out even more, like a wounded animal defending itself when there's no threat. So yeah, apologize to your kids for your mistakes so they know how to apologize for theirs.

And, while we’re on the subject, don’t assume that apologizing to your kids will undermine your authority as a parent. It is important for parents to have authority. Your job is to keep your kids safe and teach them. This only works if you’re willing to set rules they don’t want to follow and teach lessons they don’t want to learn. That requires authority. But the authority should come from the fact that you love them and know more than they do. Or, in the words of the Lord to Joseph Smith, authority should only be maintained “by persuasion, by long-suffering, by gentleness and meekness, and by love unfeigned; By kindness, and pure knowledge” (D&C 121:41-42). If your authority comes from kindness and knowledge, then it will never be threatened by apologizing in those cases when you get it wrong.

In fact, this takes us to that second reason. The one I don’t think as many folks have thought of. And it’s this: admitting that you made a mistake is an important way of showing your kids how hard you’re trying and, by extension, demonstrating your love for them.

The only way to never make a mistake is to never push yourself. Only by operating well within your capabilities can you have a flawless record over the long run. Think about an elite performer like an Olympic gymnast or figure skater. They are always chasing perfection, rarely finding it, and that's in events that are very short (from a few seconds to a few minutes). If you saw a gymnast doing a flawless, 30-minute routine every day for a month, you would know that that routine was very, very easy (for them, at least).

You can't always tell whether someone makes mistakes because they're at the limits of their capacity or because they're just careless. But if someone never makes mistakes, then you know that whatever they are doing isn't much of a challenge.

So what does that tell your kid if you never apologize? In effect, it tells them that you have it all together. That, for you at least, parenting is easy.

That’s not the worst message in the world, but it’s not a great one either. Not only is it setting them up for a world of hurt when they become parents one day, but you’re missing an opportunity to tell them the truth: that parenting is the hardest thing you’ve ever done, and you’re doing it for them.

Don’t take this too far. You don’t want to weigh your kids down with every worry and fear and stress you have. You don’t want to tell them every day how hard your job is. That’s a weight no child should carry. But it’s OK to let them know—sparingly, from time to time—that being a parent is really hard. What better, healthier way to convey that than to frankly admit when you make a mistake?

“The problem with acting like you have it all together is he believes it. Maybe he needs to know that you don’t. It’s OK if you don’t.”

Anti-Provincial Provincialism and Fighting Monsters

Part 1: Anti-American Americanism

I ran across a humorous meme in a Facebook group that got me thinking about anti-provincial provincialism. Well, not the meme, but a response to it. Here’s the original meme:

Now check out this (anonymized) response to it (and my response to them):

What has to happen, I wondered, for someone to assert that English is the most common language “only in the United States”? Well, they have to be operating from a kind of anti-Americanism so potent it has managed to swing all the way back around to an extreme America-centric view. After all, this person was so eager to take the US down a peg (I am assuming) that they managed to inadvertently erase the entire Anglosphere. The only people who exclude entire categories of countries from consideration are rabid America Firsters and rabid America Lasters. The commonality? They're both only thinking about America.

It is a strange feature of our times that so many folks seem to become the thing they claim to oppose. The horseshoe theory is having its day.

The conversation got even stranger when someone else showed up to tell me that I’d misread the Wikipedia article that I’d linked. Full disclosure: I did double check this before I linked to it, but I still had an “uh oh” moment when I read their comment. Wouldn’t be the first time I totally misread a table, even when specifically checking my work. Here’s the comment:

Thankfully, dear reader, I did not have to type out the mea culpa I was already composing in my mind. Here’s the data (and a link):

My critic had decided to focus only on the first language (L1) category. The original point about “most commonly spoken” made no such distinction. So why rely on it? Same reason, I surmise, as the “only in the US” line of argument: to reflexively oppose anything with the appearance of American jingoism.

Because we can all see that's the subtext here, right? To claim that English is the “most common language” when it is also the language (most) Americans speak is to appear to be making some kind of rah-rah 'Murica statement. Except that… what happens if it's just objectively true?

And it is objectively true. English has the greatest number of total speakers in the world by a wide margin. Even more tellingly, the number of English L2 speakers outnumbers Chinese L2 speakers by more than 5-to-1. This means that when someone chooses a language to study, they pick English 5 times more often than Chinese. No matter how you slice it, the fact that English is the most common language is just a fact about the reality we currently inhabit.

Not only that, but the connection of this fact to American chauvinism is historically ignorant. Not only is this a discussion about the English language and not the American one, but the linguistic prevalence of English predates the rise of America as a great power. If you think millions of Indians conduct business in English because of America, then you need to open a history book. The Brits started this state of affairs back when the sun really did never set on their empire. We just inherited it.

I wonder if there’s something about opposing something thoughtlessly that causes you to eventually, ultimately, become that thing. Maybe Nietzsche’s aphorism doesn’t just sound cool. Maybe there’s really something to it.

This image doesn't include enough of the quote, which is: “Beware that, when fighting monsters, you yourself do not become a monster… for when you gaze long into the abyss, the abyss gazes also into you.” But it's cute. Original from Instagram.

Part 2: Anti-Provincial Provincialism

My dad taught me the phrase “anti-provincial provincialism” when I was a kid. We were talking about the tendency of some Latter-day Saint academics to over-correct for the provincialism of their less-educated Latter-day Saint community and in the process recreate a variety of the provincialism they were running away from. Let me fill this in a bit.

First, a lot of Latter-day Saints can be provincial.

This shouldn’t shock anyone. Latter-day Saint culture is tight-knit and uniform. For teenagers when I was growing up, you had:

  • Three hours of Church on Sunday
  • About an hour of early-morning seminary in the church building before school Monday – Friday
  • Some kind of 1-2 hour youth activity in the church building on Wednesday evenings

This is plenty of time to tightly assimilate and indoctrinate the rising generation, and for the most part this is a good thing. I am a strong believer in liberalism, which sort of secularizes the public square to accommodate different religious traditions. This secularization isn't anti-religious; it is what enables those religions to thrive by carving out their own spaces to flourish. State religions have a lot of power, but this makes them corrupt and anemic in terms of real devotion. Pluralism is good for all traditions.

But a consequence of the tight-knit culture is that Latter-day Saints can grow up unable to clearly differentiate between general cultural touchstones (Latter-day Saints love Disney and The Princess Bride, but so do lots of people) and unique cultural touchstones (like Saturday's Warrior and Johnny Lingo).

We have all kinds of arcane knowledge that nobody outside our culture knows or cares about, especially around serving two-year missions. Latter-day Saints know what the MTC is (even if they mishear it as “empty sea” when they’re little, like I did) and can recount their parents’ and relatives’ favorite mission stories. They also employ some theological terms in ways that non-LDS (even non-LDS Christians) would find strange.

And the thing is: if nobody tells you, then you never learn which things are things everyone knows and which things are part of your strange little religious community alone. Once, when I was in elementary school, I called my friend on the phone and his mom picked up. I addressed her as “Sister Apple” because Apple was their last name and because at that point in my life the only adults I talked to were family, teachers, or in my church. Since she wasn’t family or a teacher, I defaulted to addressing her as I was taught to address the adults in my church.

As I remember it today, her reaction was quite frosty. Maybe she thought I was in a cult. Maybe I’d accidentally raised the specter of the extremely dark history of Christians imposing their faith on Jews (my friend’s family was Jewish). Maybe I am misremembering. All I know for sure is I felt deeply awkward, apologized profusely, tried to explain, and then never made that mistake ever again. Not with her, not with anyone else.

I had these kinds of experiences–experiences that taught me to see clearly the boundaries between Mormon culture and other cultures–not only because I grew up in Virginia but also because (for various reasons) I didn't get along very well with my LDS peer group for most of my teen years. I had very few close LDS friends from the time that I was about 12 until I was in my 30s. Lots of LDS folks, even those who grew up outside of Utah, didn't have these kinds of experiences. Or had fewer of them.

So here's the dynamic you can run into: a Latter-day Saint without this kind of awareness trips over some of the (to them) invisible boundaries between Mormon culture and the surrounding culture. If they do this in front of another Latter-day Saint who does know, then the one who's in the know has a tendency to cringe.

This is where you get provincialism (the Latter-day Saint who doesn't know any better) and anti-provincial provincialism (the Latter-day Saint who is too invested in knowing better). After all, why should one Latter-day Saint feel so threatened by a social faux pas of another Latter-day Saint unless they are really invested in that group identity?

My dad was frustrated, at the time, with Latter-day Saint intellectuals who liked to discount their own culture and faith. They were too eager to write off Mormon art or culture or research that was amenable to faithful LDS views. They thought they were being anti-provincial. They thought they were acting like the people around them, outgrowing their culture. But the fact is that their fear of being seen as or identified with Mormonism made them just as obsessed with Mormonism as the most provincial Mormon around. And twice as annoying.

Part 3: Beyond Anti

Although I should have known better, given what my parents taught me growing up, I became one of those anti-provincial provincials for a while. I had a chip on my shoulder about so-called “Utah Mormons”. I felt that the Latter-day Saints in Utah looked down on us out in the “mission field,” so I turned the perceived slight into a badge of honor. Yeah maybe this was the mission field, and if so that meant we out here doing the work were better than Utah Mormons. We had more challenges to overcome, couldn’t be lazy about our faith, etc.

And so, like an anti-Americanist who becomes an Americanist, I became an anti-provincial provincialist. I carried that chip on my shoulder into my own mission where, finally meeting a lot of Utah Mormons on neutral territory, I got over myself. Some of them were great. Some of them were annoying. They were just folks. There are pros and cons to living in a religious majority or a minority. I still prefer living where I'm in the minority, but I'm no longer smug about it. It's just a personal preference. There are tradeoffs.

One of the three or four ideas that’s had the most lasting impact on my life is the idea that there are fundamentally only two human motivations. Love, attraction, or desire on the one hand. Fear, avoidance, or aversion on the other.

Why is it that fighting with monsters turns you into a monster? I suspect the lesson is that how and why you fight your battles is as important as what battles you choose to fight. I wrote a Twitter thread about this on Saturday, contrasting tribal reasons for adhering to a religion and genuine conversion. The thread starts here, but here's the relevant Tweet:

If you’re concerned about American jingoism: OK. That’s a valid concern. But there are two ways you can stand against it. In fear of the thing. Or out of love for something else. Choose carefully. Because if you’re motivated by fear, then you will–in the end–become the thing your fear motivates you to fight against. You will try to fight fire with fire, and then you will become the fire.

If you’re concerned about Mormon provincialism: OK. There are valid concerns. Being able to see outside your culture and build bridges with other cultures is a good thing. But, here again, you have to ask if you’re more afraid of provincialism or more in love with building bridges. Because if you’re afraid of provincialism, well… that’s how you get anti-provincial provincialism. And no bridges, by the way.

I might rewrite my pinned Tweet one day.

It’s not two truths. It’s just one. You want something? Fight for it. Fighting against things only gets you nothing in the end.
2 Timothy 1:7, KJV

Acceptance, Humility, and Goal Setting

In theory, setting goals should be simple. Start with where you are by measuring your current performance. Decide where you want to be. Then map out a series of goals that will get you from Point A to Point B with incremental improvements. Despite the simple theory, my lived reality was a nightmare of failure and self-reproach.

It took decades and ultimately I fell into the solution more by accident than by design, but I’m at a point now where setting goals feels easy and empowering. Since I know a lot of people struggle with this—and maybe some even struggle in the same ways that I did—I thought I’d write up my thoughts to make the path a little easier for the next person.

The first thing I had to learn was humility.

For most of my life, I set goals by imagining where I should be and then making that my goal. I utterly refused to begin with where I currently was. I refused to even contemplate it. Why? Because I was proud, and pride always hides insecurity. I could accept that I wasn’t where I should be in a fuzzy, abstract sense. That was the point of setting the goal in the first place. But to actually measure my current achievement and then use that as the starting point? That required me to look in the mirror head-on, and I couldn’t bear to do it. I was too afraid of what I’d see. My sense of worth depended on my achievement, and so if my achievements were not what they should be then I wasn’t what I should be. The first step of goal setting literally felt like an existential threat.

This was particularly true because goal-setting is an integral part of Latter-day Saint culture. Or theology. The line is fuzzy. Either way, the worst experiences I had with goal setting were on my mission. The stakes felt incredibly high. I was the kind of LDS kid who always wanted to go on a mission. I grew up on stories of my dad's mission and other mission stories. And I knew that a mission was this singular opportunity to prove myself. So on top of the general religious pressure there was this additional pressure of “now or never”. If I wasn't a good missionary, I'd regret it for the rest of my life. Maybe for all eternity. No pressure.

So when we had religious leaders come and say things like, “If you all contact at least 10 people a day, baptisms in the mission will double,” I was primed for an entirely dysfunctional and traumatizing experience. Which is exactly what I got.

(I’m going to set aside the whole metrics-approach-to-missionary-work conversation, because that’s a topic unto itself.)

I tried to be obedient. I set the goal. And I recorded my results. I think I maybe hit 10 people a day once. Certainly never for a whole week straight (we planned weekly). When I failed to hit the goal, I prayed and I fasted and I agonized and I beat myself up. But I never once—not once!—just started with my total from last week and tried to do an incremental improvement on that. That would mean accepting that the numbers from last week were not some transitory fluke. They were my actual current status. I couldn’t bear to face that.

This is one of life's weird little contradictions. To improve yourself—in other words, to change away from who you currently are—the first step is to accept who you currently are. Accepting reality—not accepting that it's good enough, but just that it is—will paradoxically make it much easier to change reality.

Another one of life’s little paradoxes is that pride breeds insecurity and humility breeds confidence. Or maybe it’s the other way around. When I was a missionary at age 19-21, I still didn’t really believe that I was a child of God. That I had divine worth. That God loved me because I was me. I thought I had to earn worth. I didn’t realize it was a gift. That insecurity fueled pride as a coping mechanism. I wanted to be loved and valuable. I thought those things depended on being good enough. So I had to think of myself as good enough.

But once I really accepted that my value is innate—as it is for all of us—that confidence meant I no longer had to keep up an appearance of competence and accomplishment for the sake of protecting my ego. Because I had no ego left to protect.

Well, not no ego. It’s a process. I’m not perfectly humble yet. But I’m getting there, and that confidence/humility has made it possible for me to just look in the mirror and accept where I am today.

The second thing is a lot simpler: keeping records.

The whole goal setting process as I’ve outlined it depends on being able to measure something. This is why I say I sort of stumbled into all of this by accident. The first time I really set good goals and stuck to them was when I was training for my first half marathon. Because it wasn’t a religious goal, the stakes were a lot lower, so I didn’t have a problem accepting my starting point.

More importantly, I was in the habit of logging my miles for every run. So I could look back for a few weeks (I’d just started) and easily see how many miles per week I’d been running. This made my starting point unambiguous.

Next, I looked up some training plans. These showed how many miles per week I should be running before the race. They also had weekly plans, but to make the goals personal to me, I modified a weekly plan so that it started with my current miles-per-week and ramped up gradually, in the time I had, to the goal miles-per-week.
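
Just to illustrate the arithmetic, here's a minimal sketch of that kind of ramp in Python. Everything in it is hypothetical: the function name and the numbers are made up for the example, not taken from any actual training plan.

```python
def ramp_plan(current_per_week, goal_per_week, weeks):
    """Spread the gap between current and goal evenly across the weeks."""
    if weeks < 2:
        return [float(goal_per_week)]
    step = (goal_per_week - current_per_week) / (weeks - 1)
    return [round(current_per_week + step * i, 1) for i in range(weeks)]

# Hypothetical numbers: starting at 10 miles/week with 8 weeks to reach 25.
print(ramp_plan(10, 25, 8))
# [10.0, 12.1, 14.3, 16.4, 18.6, 20.7, 22.9, 25.0]
```

The only important design choice here is that the plan starts from the measured baseline, not from where I wished I were.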

I didn’t recognize this breakthrough for what it was at the time, but—without even thinking about it—I started doing a similar thing with other goals. I tracked my words when I wrote, for example, and that was how I had some confidence that I could do the Great Challenge. I signed up to write a short story every week for 52 weeks in a year because I knew I’d been averaging about 20,000 words / month, which was within range of 4-5 short stories / month.

And then, just a month ago, I plotted out how long I wanted it to take for me to get through the Book of Mormon attribution project I’m working on. I’ve started that project at least 3 or 4 times, but never gotten through 2 Nephi, mostly because I didn’t have a road map. So I built a road map, using exactly the same method that I used for the running plan and the short stories plan.

And that was when I finally realized I was doing what I’d wanted to do my whole life: setting goals. Setting achievable goals. And then achieving them. Here’s my chart for the Book of Mormon attribution project, by the way, showing that I’m basically on track about 2 months in.

The y-axis is characters, if you’re curious. There are about 1.4m characters in the BoM.
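
If you'd rather compute the on-track check than eyeball a chart, it's just linear interpolation against the road map. Here's a rough Python sketch; the dates are placeholders I made up for the example, and the only number taken from the project is the rough 1.4 million character total:

```python
from datetime import date

def expected_by(start, end, today, total):
    """Linear road map: how much of the total should be done by today?"""
    fraction = (today - start).days / (end - start).days
    return total * min(max(fraction, 0.0), 1.0)

# Placeholder dates for a hypothetical one-year plan over ~1.4M characters.
target = expected_by(date(2022, 2, 1), date(2023, 2, 1), date(2022, 4, 1), 1_400_000)
print(f"Expected by now: {target:,.0f} characters")  # ~226,301
```

Compare that expected figure to the actual running total and you know immediately, without any agonizing, whether you're ahead or behind.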

I hope this helps someone out there. It's pretty basic, but it's what I needed to learn to be able to start setting goals. And setting goals has enabled me to do two things that I really care about. First, it lets me take on bigger projects and improve myself more, and faster, than I did in the past. Second, and probably more importantly, it quiets the demon of self-doubt. Like I said: I'm not totally recovered from that “I'm only worth what I accomplish” mental trap. When I look at my goals and see that I'm making consistent progress towards things I care about, it reminds me what really matters. Which is the journey.

I guess that’s the last of life’s little paradoxes for this blog post: you have to care about pursuing your goals to live a meaningful, fulfilling life. But you don’t actually have to accomplish anything. Accomplishment is usually outside your control. It’s always subject to fortune. The output is the destination. It doesn’t really matter.

But you still have to really try to get that output. Really make a journey towards that destination. Because that’s the input. That’s what you put in. That does depend on you.

I believe that doing my best to put everything into my life is what will make my life fulfilling. What I get out of it? Well, after I’ve done all I can, then who cares how it actually works out. That’s in God’s hands.

Free Speech – A Culture of Tolerance

Tara Henley’s recent podcast episode with Danish free speech advocate Jacob Mchangama was fascinating and encouraging. A quote from Orwell came up that I hadn’t heard before, and it’s worth emphasizing:

The relative freedom which we enjoy depends on public opinion. The law is no protection. Governments make laws, but whether they are carried out, and how the police behave, depends on the general temper in the country. If large numbers of people are interested in freedom of speech, there will be freedom of speech, even if the law forbids it; if public opinion is sluggish, inconvenient minorities will be persecuted, even if laws exist to protect them.

The fact that free speech is not just a legal matter is a vitally important one, because those who restrict free speech to the minimum legal interpretation are actively undermining—wittingly or not—the culture that actual free speech depends on.

Mchangama brought up the example of Athens, which enjoyed a cultural tradition of free speech called parrhesia which, Mchangama said, “means something like fearless or uninhibited speech.” Although there was no legal basis for parrhesia, it “permeated the Athenian democracy” and led to a “culture of tolerance”.

Clearly a culture of tolerance is not sufficient. Just ask Socrates. But at the same time legal free speech rights aren’t sufficient, either. The historical examples are too numerous to cite, especially in repressive 20th century regimes that often paid lip service to human rights (including the late-stage USSR). The laws were there on paper, but a lot of good it did anyone.

The Death of Socrates (via Wikipedia): when the culture of tolerance wanes and there's no legal recourse…

Mchangama went on to say that “if people lose faith in free speech and become more intolerant, then laws will reflect that change and become more intolerant.” So fostering this culture is vital both to preserve the rights on paper and to ensure those legal rights are actually honored in the real world. So, “how do we foster a culture of free speech?” Mchangama asked. His response, in part:

It is ultimately down to each one of us. So those of us who believe in free speech have a responsibility of making the case for free speech to others, and do it in an uncondescending way, and also one which doesn’t just rely on calling people who want to restrict free speech fascists or totalitarians… [We must] take seriously the concerns of those who are worried about the ugly sides and harmful sides of free speech.

This is a tough balance to strike, but I want to do my part. So let me make two points.

First, the popular line of argument that dismisses anything that’s not a technical violation of the First Amendment is unhelpful. Just as an example, here’s an XKCD cartoon (and I’m usually a huge fan) to show what I mean.

The problem with this kind of free speech minimalism is that it's intrinsically unstable. If you support free speech but only legally, then you don't really support free speech at all. Wittingly or not, you are adopting an anti-free speech bias. Because, as Orwell and Mchangama observe, a legal free speech right without accompanying cultural support is a paper tiger with a short life span.

Second, the question isn’t binary. It’s not about whether we should have free speech. It’s about the boundaries of tolerance—legal and cultural—for unpopular speech. To this end, Mchangama decries use of pejoratives like “social justice warrior” for those who want to draw a tighter boundary around what speech is legally and culturally permissible.

I’ve used the SJW term a lot. You can find plenty of instances of it here on this blog. I’ve always been a little uncomfortable with it because I don’t want to use a pejorative, but I wasn’t sure how else to refer to adherents of the post-liberal “successor ideology.”

Maybe that decision to use SJW was understandable, but I’m rethinking it. Either way, the reality is that I’ve imbibed at least some of the tribal animus that comes with the use of the term. I have—again, you can probably find old examples here on this blog—characterized my political opponents by their most extreme examples rather than by the moderate and reasonable folks who have genuine concerns about (in this context) how free speech can negatively impact minorities.

I am not changing my position on free speech. Like Mchangama, I strongly believe that the benefits of a broadly tolerant free speech culture greatly outweigh the costs for the disempowered. But that doesn’t mean there are no costs.

Admitting that it’s a tradeoff, that critics have legitimate concerns, and that the question isn’t binary will—I hope—make me more persuasive as a free speech advocate. Because I really do believe that a thriving culture of free speech is vitally important for the health of liberal democracies and everyone who lives within them. I do not want people to lose that faith.

Political Doom Spirals


Yesterday I retweeted Ross Douthat:

This morning I read a piece from Damon Linker that I felt could use a little boost from Douthat's perspective. In “The bloody power of symbolic gestures,” Damon lays blame for breaching the Capitol on the extreme right-wing media and the politicians who have tried to cash in on it.

The Republican Party and its media allies have created this monster for the sake of personal advantage and without the slightest shred of regard for its consequences on the country’s capacity for self-government. The monster is a faction of Republican voters who are increasingly incapable of participating in democratic politics and who long for a form of tyrannical rule.

Damon is not wrong. Just yesterday, Rush Limbaugh said:

There’s a lot of people calling for the end of violence. There’s a lot of conservatives, social media, who say that any violence or aggression at all is unacceptable, regardless of the circumstances. I’m glad Sam Adams, Thomas Paine, the actual Tea Party guys, the men at Lexington and Concord didn’t feel that way.

Or did he? I used to listen to Rush Limbaugh pretty consistently from 2006ish to 2008ish. It was a deliberate choice to expose myself to different viewpoints. One of the things I quickly learned is that a lot of the mainstream media attacks on Limbaugh (e.g. NPR) were dishonest. A common approach was to take something he said in an obviously joking way and report it as serious. At the time, I was listening to most of his 3-hour show (at work), so when the latest brouhaha erupted, it would be about a segment I’d heard first hand. I don’t trust establishment media to be honest about conservatives.

On the other hand, plenty of what Rush said disgusted me on its own merits. I remember during the fractious 2016 Republican primary season (I listened again during that period to see what conservative radio was up to), he adroitly refused to back any candidate while giving the impression of being on everyone's side. It struck me as profoundly cowardly. Rush claimed to care so much about conservative principles and to be so influential, but clearly he was afraid to back a losing horse. Someone really committed to the country and their ideals uses their influence as wisely as they can; they don't pose as influential while just waiting to see who the winner is so that they can ride their coattails.

But I digress.

Damon Linker is right, but his viewpoint is incomplete, as I said on Twitter:

The important thing to keep in mind about the left-right conflict in the public square is that it's dramatically asymmetrical. The left owns all the high ground: the universities, the publishing houses, Hollywood, all the major journalistic outlets. They have the entrenched power, which gives them more to lose and also more options to choose from, and the result is a careful, low-grade, but brutally relentless prosecution of the conflict.

The right is at the margin. They have very little to lose and much less traditional power to call on, and so their tactics are far more radical and risky.

To an outside observer, this makes the right much easier to depict as the bad guy, but really the dynamic has nothing to do with ideology and everything to do with the asymmetric nature of the conflict. Just look at any asymmetric military conflict, like Vietnam or Afghanistan or Iraq. The American military is overwhelmingly superior and so engages in (relatively) careful operations that emphasize numbers and technical superiority. Because they have lots to lose and lots to work with.

The insurgents adopt much more extreme tactics, from suicide bombing to blue-on-green assassinations, because they have little to lose and little to work with.

Another stark example is the Israeli-Palestinian conflict, where–again–Israel is the cautious incumbent and the Palestinians are the reckless insurgents.

My point isn’t to excuse either side in any way. It’s merely to highlight that the nature of asymmetrical conflict brings about characteristically divergent tactics and approaches. The traditionally powerful side tends to behave conservatively, whether it’s the New York Times or Israel. The traditionally weaker side tends to behave more radically, whether it’s Rush Limbaugh or Hamas.

Some readers might be outraged that I’m excusing Hamas’s tactics or downplaying Israel’s atrocities, but that’s exactly what I want to highlight. I’m not taking a position on these issues. That’s not the topic for today. But I want people to realize that when they like the dominant power (e.g. the New York Times) they interpret their conservatism as mature, respectable, etc. But when they don’t like the dominant power (e.g. someone who opposes Israel) they will emphasize that even relatively moderate tactics (bulldozing houses vs. suicide bombers) are horrific in their aggregate output and will thereby interpret the same conservative tendency as oppression and exploitation.

We have the same double-standard for the weaker side, either lauding the audacity, bravery, and sacrifice of radicalism or condemning the brutality, savagery, and dishonorable nature of the same tendency.

Once you realize that the strong side is conservative and the weak side is radical, and also realize that there are ways to interpret each of those positively and negatively–and only once you have these two realizations–can you be ready to start thinking about the split between mainstream media (left-wing) and alternative media (right-wing) in America.

As long as the American left dominates our cultural institutions and as long as the American right is willing to burn those institutions down rather than lose them, there is no point assigning blame because the doom spiral will destroy us all.

The only way out is to re-establish norms of tolerance and diversity in the public sphere. I'm not calling for some kind of affirmative action for conservatives. There's no such thing as objectivity, and so that is impossible to really implement and easy to game and manipulate. For the foreseeable future, the left will be the incumbent, dominant power in all the aforementioned institutions. But they need to be more willing to tolerate differences and refrain from exiling conservatives from the public sphere. They need to exercise restraint.

The right has the same obligation: exercising restraint in order to resist the temptation to act like irresponsible radicals and burn our institutions down.

Three final thoughts.

First, there is no policy that will solve our problems. We can't fix this with laws or procedures. What's needed is a change in how Americans relate to each other across perceived political differences.

Second, it might actually be more effective to depoliticize instead of diversify. In other words, making space for left and right viewpoints is important, but also very hard. It can be easier to just not see everything as political.

Third, I am addressing our cultural institutions. In terms of political institutions, the situation is somewhat reversed. Generally speaking, the conservatives have been the dominant players in recent years, especially when you consider state-level governments as well as the Presidency, and it’s Democrats who have been frustrated by their lack of access to formal power. I’m acknowledging this, but not addressing it for the simple reason that I feel more competent to weigh in on public culture than on formal politics.

Why Trump Isn’t Getting Impeached This Time

I’m no political expert, but here is my thinking as to why Nancy Pelosi and the Democrats may have decided not to even try to impeach and remove President Trump (this time): they want him right where he is.

Before the mob breached the Capitol yesterday, Trump was powerful enough within the GOP that he could pose a real threat to Democrats in the future. That’s why they tried to remove him previously. (Yes, he was also guilty in my mind, but the political considerations may have dominated.) In 2020 he really scared them by performing much better than polls expected. So a Trump strong enough to dominate the GOP and maybe run again in 2024 is a real threat.

But after the mob breached the capitol yesterday, Trump’s stock went down dramatically with the middle and the right (not just the left). Even many of his own supporters were horrified, which is why we already have ridiculous conspiracy theories about a false flag operation. That tells you how badly Trump screwed up: his devoted followers refuse to believe it was him. As a result, he no longer poses as much of a threat to the Democrats, but he’s still strong enough to hamstring the GOP from within.

If he is impeached and removed he cannot run again in 2024. That would be a killing blow (politically). His faction of the GOP was always a personality cult and never about ideology, so it would collapse without him. (There's some chance it restarts under Ivanka or something, but that's probably a more distant threat.)

The GOP could respond to the vacuum left by Trump’s faction by ushering back in the #NeverTrump conservatives: the only Republicans left with any honor intact after yesterday. They, in turn, would probably be wise enough to incorporate the legitimate populist grievances that Trump initially leveraged, infusing their high-brow philosophy with a compelling pitch to ordinary Americans. This isn’t certain, but there’s a real path forward to a rehabilitated, resurgent GOP that is even stronger than it ever was before or under Trump.

The simplest route for Democrats to stave off this possibility is to leave Trump where he is: a polarizing figure with enough power to tear the GOP apart but not enough to dominate it as he has in the past. If he can run in 2024, he’s not going anywhere and his reduced faction will stay in place, forestalling any sweeping reform of the GOP. The GOP, in turn, will still be strong enough to pre-empt the rise of a real replacement party.

Napoleon said: “Never interrupt your enemy when he is making a mistake.” That applies here.
Trump’s buffoonish attempts at a coup were a colossal political mistake. Now that Trump has enraged the broad middle of America (not just the left), they won’t remove him ’cause he’s too useful where he is: an anvil on the prospects of their political rivals.

Pro-Life: A Fiercely Held Moderate Position

The Legal Status of Abortion, Revisited

I’ve talked to Terryl Givens a few times since his article on abortion for Public Square came out. Both of us are disappointed, but not at all surprised, by some of the reactions from fellow Latter-day Saints. I’ll dive into one such response–a post from Sam Brunson at By Common Consent–but only after taking a minute to underscore the difference between an extreme position and a fiercely-held moderate position. 

There’s a reason why the first section in Terryl’s piece is an explanation of the current legal status of abortion in the United States. Unlike many other developed nations, where abortion laws were gradually liberalized through democratic means, the democratic process in the United States was short-circuited by the Roe vs. Wade decision (along with Doe vs. Bolton). As Terryl explained, American abortion law since Roe is an extreme outlier: “America is one of very few countries in the world that permit abortion through the 9th month of pregnancy.”

If the spectrum of possible abortion laws runs from “never and under no circumstances” to “always and under any circumstances,” our present situation is very close to the “any circumstances” extreme. 

In his rejoinder, Sam rightly points out that Roe is not the last word on the legality of abortion in the United States. Decades of laws and court cases–including return trips to the Supreme Court–have created an extremely complex legal landscape full of technicalities, ambiguities, and contradictions.

But the bottom line to the question of the legal status of abortion in the United States doesn’t require sophisticated legal analysis if we can answer a much simpler set of questions instead. Is it true that there are a large number of late-term abortions in the United States that are elective (i.e. not medically necessary or the result of rape) and legal? That’s the fundamental question, and it’s one Terryl unambiguously asked and answered in his piece (with sources):

Current numbers are between 10,000 and 15,000 late-term abortions performed per year… “[M]ost late-term abortions are elective, done on healthy women with healthy fetuses, and for the same reasons given by women experiencing first-trimester abortions.” 

Thus, the key question of the current legal status of abortion in the United States is irrefutably answered. We live in a country where late-term, elective abortions are legal, and we're one of the only countries in the world with such a radical and extreme position.

This radical position is not at all popular with Americans, as countless polls demonstrate. Here's Gallup:

It is possible to quibble about whether the current regime is logically identical to “legal under any circumstances.” No matter, the present state of abortion legality is so incredibly extreme that we can afford to be very generous in our analysis. Here are more detailed poll numbers:

American abortion law is more extreme than “legal under most circumstances,” but even if we grant that, we still find that combined support for the status quo is just 38%, while the vast majority would prefer abortion laws stricter than what we have now. And stricter than what we can have, as long as the root of the tree (Roe) is left intact.

How Does A Radical, Unpopular Position Endure?

These opinions have been roughly stable, and so you might reasonably wonder: how is it possible for such an unpopular state of affairs to last for so long? One reason is structural. Because Roe short-circuited the democratic, legislative process, the issue is largely out of the hands of (more responsive) state and federal legislatures. Individual states can, and have, passed laws to restrict abortion around the margins, but the kind of simple, “No abortion in the third trimester except for these narrow exceptions” law that would accurately reflect popular sentiment would inevitably run up against the Supreme Court, where either the restriction or Roe itself would have to be overturned.

As long as Roe is in place, it's impossible to meaningfully limit abortion. And yet support for overturning Roe, perversely, is very low. Here's a recent NBC poll:

Here we have a contradiction. Most Americans want abortion to be limited to only a few, exceptional cases. But this is impossible without overturning Roe. And most Americans don’t want to overturn Roe. 

This is the fundamental contradiction that has perpetuated an extremist, unpopular abortion status quo for so long.

The most straightforward explanation is that Americans support Roe while at the same time supporting the kinds of laws that Roe precludes because they don’t understand Roe. And this is where we get to the heart of the responses to Terryl’s piece (and many, many more like his). The only way to convince Americans to support Roe (even though it goes counter to their actual preferences) is to convince them that Roe is the moderate position and that it is the pro-life side that is radical and extreme.

This involves two crucial myths:

  1. Elective, late-term abortions do not take place, or only take place for exceptional reasons 
  2. The only alternative to Roe is a blanket ban on all abortions

If Americans understood that elective, late-term abortions are legal and do, in fact, take place in great numbers, and if they understood that the only way to change this state of affairs would be to repeal Roe, then support for Roe would plummet. Thus, for those who wish to support the present situation, the agenda is clear: the status quo can only survive as long as the pro-life position is misrepresented as the extreme one.

The Violence of Abortion Protects Abortion

With tragic irony, the indefensibility of elective abortion makes this task much easier. Abortion is an act of horrific violence against a tiny human being. It is impossible to contemplate this reality and not be traumatized. This is why pro-life activists suffer from burn-out.

It is not just pro-life activists who are traumatized by the violence of abortion, however. The most deeply traumatized are the ones most frequently and closely exposed to the violence of abortion: the abortionists themselves. This is what led Lisa Harris to write “Second Trimester Abortion Provision: Breaking the Silence and Changing the Discourse.” She reports how the advent of second-trimester D&E abortions, which entail the manual dismemberment of the unborn human, led to profound psychological consequences for practitioners when they became common in the late 1970s:

As D&E became increasingly accepted as a superior means of accomplishing second trimester abortion… a small amount of research on provider perspectives on D&E resulted. Kaltreider et al found that some doctors who provided D&E had ‘‘disquieting’’ dreams and strong emotional reactions.

Hern found that D&E was ‘‘qualitatively a different procedure – both medically and emotionally – than early abortion’’. Many of his staff members reported: ‘‘. . .serious emotional reactions that produced physiological symptoms, sleep disturbances (including disturbing dreams), effects on interpersonal relationships and moral anguish.’’

This is the perspective from an abortionist, not some pro-life activist. In fact, Lisa describes performing an abortion herself while she was pregnant: 

When I was a little over 18 weeks pregnant with my now pre-school child, I did a second trimester abortion for a patient who was also a little over 18 weeks pregnant. As I reviewed her chart I realised that I was more interested than usual in seeing the fetal parts when I was done, since they would so closely resemble those of my own fetus. I went about doing the procedure as usual, removed the laminaria I had placed earlier and confirmed I had adequate dilation. I used electrical suction to remove the amniotic fluid, picked up my forceps and began to remove the fetus in parts, as I always did. I felt lucky that this one was already in the breech position – it would make grasping small parts (legs and arms) a little easier. With my first pass of the forceps, I grasped an extremity and began to pull it down. I could see a small foot hanging from the teeth of my forceps. With a quick tug, I separated the leg. Precisely at that moment, I felt a kick – a fluttery ‘‘thump, thump’’ in my own uterus. It was one of the first times I felt fetal movement. There was a leg and foot in my forceps, and a ‘‘thump, thump’’ in my abdomen. Instantly, tears were streaming from my eyes – without me – meaning my conscious brain – even being aware of what was going on. I felt as if my response had come entirely from my body, bypassing my usual cognitive processing completely. A message seemed to travel from my hand and my uterus to my tear ducts. It was an overwhelming feeling – a brutally visceral response – heartfelt and unmediated by my training or my feminist pro-choice politics. It was one of the more raw moments in my life. Doing second trimester abortions did not get easier after my pregnancy; in fact, dealing with little infant parts of my born baby only made dealing with dismembered fetal parts sadder.

This is hard to read. Harder to witness. Harder still to perform. That's Lisa's point. “There is violence in abortion,” she plainly states, and the point of her paper is that abortionists, like her, need support to psychologically withstand the trauma of perpetrating that violence on tiny human beings again and again and again.

Thus, pro-lifers find themselves in a Catch-22 position. If we do nothing, or soft-pedal the true state of affairs, then the pro-choice myths remain uncontested, and misguided public opinion keeps Roe–and elective abortion at any point–safe.

But if we do describe the true nature of abortion, then our audience recoils in shock. Ordinary Americans living their ordinary lives are unprepared for the horror going on all around them, and they are overwhelmed. They desperately want this not to be true. They don't even want to think about it.

If you tell someone that your neighbor screams at his kids, they will be sad. If you say your neighbor hits his kids, they will urge you to call the police or contact social services. If you say your neighbor is murdering his children one by one and burying them in the backyard, they will probably stop talking to you and maybe even call the cops on you. Sometimes, the worse a problem is the harder it is to get anyone to look at it. That is the case with abortion.

Terryl’s piece was a moderate position strongly argued. He is taking literally the most common position in America: that abortion should be illegal in all but a few circumstances (but that it should be legal in those circumstances). He never stated or even implied that other, ancillary efforts should not be tried (such as birth control). 

Abortion is a large, complex issue and there are a lot of ambiguous aspects to it. But not every single aspect is ambiguous. Some really are clear cut. Such as the fact that it should not be legal to get a late-term abortion for elective reasons. Virtually all Americans assent to this. Why, then, was Terryl's essay so utterly rejected by some fellow Latter-day Saints?

On Sam Brunson and Abortion

As I mentioned earlier, Sam’s response to Terryl is the first extended one I’ve seen and is pretty representative of the kinds of arguments that are typical in response to pro-life positions. It is a great way to see the factors I’ve described above–the myth-protecting and the violence-denial–put into practice. 

Bad Faith

Terryl’s piece includes this line in the first paragraph:

I taught in a private liberal arts college for three decades, where, as is typical in higher education, political views are as diverse as in the North Korean parliament.

This is what is colloquially referred to by most people as “a joke,” but Sam's response sees it as an opening to impugn Terryl's motives:

Moreover, bringing up North Korea—an authoritarian dictatorship where dissent can lead to execution—strongly hints that he’s creating a straw opponent, not engaging in good-faith discourse.

Is a humorous line at the expense of North Korea really a violation of “good-faith discourse”? Did he really not recognize that this line was written humorously? I suppose it’s possible, since he seemed to think the BCC audience wouldn’t know North Korea was an authoritarian dictatorship without being told. Also, he is a tax attorney.

Still, it seems awfully convenient to fail to recognize a joke in such a way that lets you invent some kind of nefarious, implied straw-man argument. 

It almost seems like bad faith.

Facts and the Law

This is the section where the first of the two core objectives (defend the myths) is undertaken. Sam characterizes Terryl’s piece as “deeply misleading” and specifically refers to “big [legal] problems,” yet his rejoinder is curiously devoid of substance to validate these claims. For example:

And, in fact, just last week the Sixth Circuit upheld a Kentucky law requiring that abortion clinics have a hospital transfer agreement. So the idea that abortion regulation always fails in the courts is absolutely absurd.

So Terryl says elective abortions are generally legal at any time, and the reply is: well, those clinics that can provide the abortion at any time for any reason may be required to “have a hospital transfer agreement”. In what way is this in any sense a refutation of the point at hand?

It’s like I said, “buying a light bulb is generally legal at any time for any reason” and you said, “well, yeah, but hardware stores can be required to follow safety regulations.”

… OK?

None of the legal analysis in this section gets to the core fact: are late-term abortions frequently conducted in the United States for elective reasons and are they legal? If that is the case, then no amount of legal analysis can obfuscate the bottom line: yes, abortions are legal for basically any reason at basically any time. There’s room to quibble or qualify, but–so long as that central fact stands–not to fundamentally rebut the assertion. 

Sam never even tries. 

That's because, as we covered above, the facts are unimpeachable. Terryl's source is the Guttmacher Institute, which is the research arm of the nation's most prolific abortion provider. It's an objective fact that elective, late-term abortions are legal in the United States as a result of the legal and policy ecosystem descending from the Roe and Doe decisions. Instead of a strong rebuttal of this claim, as we were promised, all we get are glancing irrelevancies.

Moral Repugnance

Having attempted to defend the myth that late-term abortions are illegal, in this section Sam turns to the next objective: leveraging the violence of abortion to deflect attention from the violence of abortion.

He begins by citing Terryl:

I am not personally opposed to abortion because of religious commitment or precept, because of some abstract principle of “the sanctity of life.” I am personally opposed because my heart and mind, my basic core humanity revolts at the thought of a living sensate human being undergoing vivisection in the womb, being vacuum evacuated, subjected to a salt bath, or, in the “late-term” procedure, having its skull pierced and brain vacuumed out.

Then he presents his own paraphrase: “[Terryl] finds abortion physically disgusting and, at least partly in consequence of that disgust, finds it morally repugnant.”

This is an egregious mischaracterization. I understand that the clause “at least partly” offers a kind of fig leaf so that Sam can say–when pressed–that he’s not actually substituting Terryl’s moral revulsion for a mere gag reflex, but since the rest of the piece exclusively focuses on the straw-man version of Terryl’s argument, that is in fact what he is doing, tenuous preemptive plausible deniability excuses notwithstanding. 

To say that the horror of tiny arms and legs being ripped away from a little, living body is the same species of disgust as watching a gall bladder operation is an act of stunning moral deadness.

I understand that urge to look away from abortion. I wish I had never heard of it. And, as we covered above, even a staunch pro-choice feminist and abortionist like Lisa not only admits that abortion is intrinsically violent, but insists that this violence causes a psychological trauma that merits sympathy and support for abortionists who subject themselves to it. She is not the only one, by the way. Another paper, Dangertalk: Voices of abortion providers, includes many additional examples of the way that abortionists frankly discuss abortion in terms of violence, killing, and war when they are away from public view. “It’s like a slaughterhouse–it’s like–line ’em up and kill ’em and then go on to the next one — I feel like that sometimes,” said one. “[Abortion work] feels like being in a war,” said another, “I think about what soldiers feel like when they kill.”

Given this, Sam's substitution of physical revulsion at blood and guts for moral revulsion at an act of violence is unmasked for what it truly is: a deflection. Abortion is very, very hard to look at. And that's convenient for pro-choicers, because they don't want you to see it.

The rest of this section is largely a continuation of the deflection.

  • “Why no support for, for example, free access to high-quality contraception?” 
  • “[Y]ou know what I consider disgusting? Unnecessary maternal death.”
  • “You know what else I consider morally repugnant? Racial inequality.” 

I do not wish to sound remotely dismissive of these entirely valid statements. Each of them is legitimate and worthy of consideration. Nothing in Terryl’s piece or in my position contradicts any of them. Let us have high-quality contraception, high-quality maternal care, and a commitment to ending racial inequality. 

But we do not have to stint on our dedication to any of those policies or causes to note that–separate and independent from these considerations–elective, late-term abortions are horrifically violent. 

These are serious considerations, and they deserve to be treated as more than mere padding to create psychological distance from the trauma of abortion. Just as women, too, deserve a better solution for the hardship of an unplanned pregnancy than abortion.

To the Latter-day Saints?

Sam writes that "Although he claims to be making a Latter-day Saint defense of the unborn, his argumentation is almost entirely devoid of Latter-day Saint content." Here, at least, he is basically correct. Not that he's made some insightful observation, of course. He's just repeating Terryl's words from the prior quote: "I am not personally opposed to abortion because of religious commitment or precept."

In this section, which I find the least objectionable, Sam tries to carve out space for Latter-day Saints to be personally pro-life and publicly pro-choice, including a few half-hearted references to General Authorities and Latter-day Saint theology. I say "half-hearted" because I suspect Sam knows as well as I do that President Nelson and President Oaks, to name just two, have spoken forcefully and clearly against attempts to find wiggle room around "free agency" or to view the lives of unborn human beings as anything less than equal to the lives of born human beings.

No one should countenance the legality of elective abortion, but Latter-day Saints least of all.

The Moral Imagination

In the final section, Sam circles back to the myth-defending. The first myth to defend is that elective, late-term abortions do not take place in the United States–which is just a roundabout way of saying that the pro-choice side seeks to conceal the radicalism of our current laws.

The second myth to defend, which occupies this section, is the myth that any alternative to our current laws must be the truly extreme option. 

So in this section we encounter new straw-men such as: “it’s not worth pursuing other routes to reduce the prevalence of abortion.” Terryl never says that, nothing in his article logically entails that, and I know he does not believe that. One can say, for example, “theft ought to be illegal” and also support anti-poverty measures, after-school programs, and free alarm systems to reduce theft. Much the same is true here: one can support banning elective abortion and support a whole range of additional policies to lower the demand for abortion.

Similarly, he writes: "I find abortion troubling as, I believe, most people do. But I also find a world without legal abortion troubling." That sounds modest and reasonable enough, but it's another straw-man. The Church, as he noted in the previous section, "has no issue with abortion" in exceptional cases. Neither does Terryl. So the issue is not "a world without legal abortion," because no one has argued for that position. The issue is a world without legal elective abortion. Omitting that one word is another great example of a straw-man argument.

By making it appear as though Terryl wishes to ban all abortions and refuses to consider any alternative policies to reduce abortions, Sam creates the impression that Terryl is the one with the radical proposal. Except, as I’ve noted, neither of those assertions is grounded in anything other than an attempt to preserve vital myths.

Wrap-up

I echo what Terryl originally said: "I do not see reproductive rights and female autonomy as simple black and white issues." Abortion as a whole is a very, very complicated issue legally, scientifically, and morally. There is ample room for nuanced discussion, policy compromise, and common ground.

But for us to have that kind of a conversation, we have to start with honesty. That means dispensing with the myths that American abortion law is moderate or that abortion is anything other than deeply and intrinsically violent. And it means allowing pro-lifers to speak for themselves, rather than substituting extremist views for their actual positions. 

Because abortion is so horrific, pro-lifers face a tough, uphill slog. But because it is so horrific, we can't abandon the calling. We will continue to seek out the best ways to boldly stand for the innocent who have no voice, to balance the competing and essential welfare needs of women with the right to life of unborn children, and to advance our modest positions as effectively as possible.

The Real Social Dilemma

The Social Dilemma is a newly available Netflix documentary about the perils of social networks.

The documentary does a decent job of introducing some of the ways social networks (Facebook, Twitter, Pinterest, etc.) are negatively impacting society. If this is your entry point to the topic, you could do worse.

But if you're looking for a really thorough analysis of what is going wrong or for possible solutions, then this documentary will leave you wanting more. Here are four specific topics–three small and one large–where The Social Dilemma fell short.

AI Isn’t That Impressive

I published a piece in June on Why I’m an AI Skeptic and a lot of what I wrote then applies here. Terms like “big data” and “machine learning” are overhyped, and non-experts don’t realize that these tools are only at their most impressive in a narrow range of circumstances. In most real-world cases, the results are dramatically less impressive. 

The reason this matters is that a lot of the oomph of The Social Dilemma comes from scaring people, and AI just isn’t actually that scary. 

Randall Munroe’s What If article is technically about a robot apocalypse, but the gist of it applies to AI as well.

I don't fault the documentary for not going too deep into the details of machine learning. Without a background in statistics and computer science, those details are hard to follow. That's fair.

I do fault them for sensationalism, however. At one point Tristan Harris (one of the interviewees) makes a really interesting point that we shouldn’t be worried about when AI surpasses human strengths, but when it surpasses human weaknesses. We haven’t reached the point where AI is better than a human at the things humans are good at–creative thinking, language, etc. But we’ve already long since passed the point where AI is better than humans at things humans are bad at, such as memorizing and crunching huge data sets. If AI is deployed in ways that leverage human weaknesses, like our cognitive biases, then we should already be concerned. So far this is reasonable, or at least interesting.

But then his next slide (they’re showing a clip of a presentation he was giving) says something like: “Checkmate humanity.”

I don't know if the sensationalism is in Tristan's presentation or The Social Dilemma's editing, but either way I had to roll my eyes.

All Inventions Manipulate Us

At another point, Tristan tries to illustrate how social media is fundamentally unlike other human inventions by contrasting it with a bicycle. “No one got upset when bicycles showed up,” he says. “No one said…. we’ve just ruined society. Bicycles are affecting society, they’re pulling people away from their kids. They’re ruining the fabric of democracy.”

Of course, this isn’t really true. Journalists have always sought sensationalism and fear as a way to sell their papers, and–as this humorous video shows–there was all kinds of panic around the introduction of bicycles.

Tristan's real point, however, is that bicycles were a passive invention. They don't actively badger you into going on bike rides. They just sit there, benignly waiting for you to decide to use them or not. In this view, you can divide human inventions into everything before social media (inanimate objects that obediently do our bidding) and after social media (animate objects that manipulate us into doing their bidding).

That dichotomy doesn’t hold up. 

First of all, every successful human invention changes behavior individually and collectively. If you own a bicycle, then the route you take to work may very well change. In a way, the bike does tell you where to go. 

To make this point more strongly, try to imagine what 21st century America would look like if the car had never been invented. No interstate highway system, no suburbs or strip malls, no car culture. For better and for worse, the mere existence of a tool like the car transformed who we are, both individually and collectively. All inventions have cultural consequences like that, to a greater or lesser degree.

Second, social media is far from the first invention that explicitly sets out to manipulate people. If you believe the argumentative theory, then language and even rationality itself evolved primarily as ways for our primate ancestors to manipulate each other. It’s literally what we evolved to do, and we’ve never stopped.

Propaganda, disinformation campaigns, and psy-ops are one obvious category of examples with roots stretching back into prehistory. But, to bring things closer to social networks, all ad-supported broadcast media have basically the same business model: manipulate people to captivate their attention so that you can sell them ads. That's how radio and TV got their commercial start: with the exact same mission statement as Gmail, Google search, or Facebook.

So much for the idea that you can divide human inventions into before and after social media. It turns out that all inventions influence the choices we make and plenty of them do so by design.

That's not to say that nothing has changed, of course. The biggest difference between social networks and broadcast media is that your social networking feed is individualized.

With mass media, companies had to either pick and choose their audience in broad strokes (Saturday morning for kids, prime time for families, late night for adults only) or try to address two audiences at once (inside jokes for the adults in animated family movies marketed to children). With social media, it’s kind of like you have a radio station or a TV studio that is geared just towards you.

Thus, social media does present some new challenges, but we’re talking about advancements and refinements to humanity’s oldest game–manipulating other humans–rather than some new and unprecedented development with no precursor or context. 

Consumerism is the Real Dilemma

The most interesting subject in the documentary, to me at least, was Jaron Lanier. When everyone else was repeating the cliché that "you're the product, not the customer," he took it a step or two further. It's not that you are the product. It's not even that your attention is the product. What's really being sold by social media companies, Lanier pointed out, is the ability to incrementally manipulate human behavior.

This is an important point, but it raises a much bigger issue that the documentary never touched. 

This is the amount of money spent in the US on advertising as a percent of GDP over the last century:

[Chart: US advertising spending as a percent of GDP over the last century. Source: Wikipedia]

It’s interesting to note that we spent a lot more (relative to the size of our economy) on advertising in the 1920s and 1930s than we do today. What do you think companies were buying for their advertising dollars in 1930 if not “the ability to incrementally manipulate human behavior”?

Because if advertising doesn't manipulate human behavior, then why spend the money? If you couldn't manipulate human behavior with a billboard or a movie trailer or a radio spot, then nobody would ever spend money on any of those things.

This is the crux of my disagreement with The Social Dilemma. The poison isn’t social media. The poison is advertising. The danger of social media is just that (within the current business model) it’s a dramatically more effective method of delivering the poison.

Let me stipulate that advertising is not an unalloyed evil. There’s nothing intrinsically wrong with showing people a new product or service and trying to persuade them to pay you for it. The fundamental premise of a market economy is that voluntary exchange is mutually beneficial. It leaves both people better off. 

And you can't have voluntary exchange without people knowing what's available. Thus, advertising is necessary to human commerce and is part of an ecosystem of flourishing, mutually beneficial exchanges and healthy competition. You could not have modern society without advertising of some degree and type.

That doesn't mean the amount of advertising–or the kind of advertising–that we accept in our society is healthy. As with basically everything, the difference between poison and medicine is found in the details of dosage and usage.

There was a time, not too long ago, when the Second Industrial Revolution led to such dramatically increased levels of production that economists seriously theorized about ever shorter work weeks with more and more time spent pursuing art and leisure with our friends and families. Soon, we’d spend only ten hours a week working, and the rest developing our human potential.

And yet in the time since then, we've seen productivity skyrocket (we can make more and more stuff with the same amount of time) while hours worked have remained roughly steady. The simplest explanation? We're addicted to consumption. Instead of holding production basically constant (and working fewer and fewer hours), we've tried to maximize consumption by keeping as busy as possible. This addiction–not necessarily to having stuff but to acquiring it–manifests in some really weird cultural anomalies that, if we witnessed them from an alien perspective, would probably strike us as dysfunctional or even pathological.

I’ll start with a personal example: when I’m feeling a little down I can reliably get a jolt of euphoria from buying something. Doesn’t have to be much. Could be a gadget or a book I’ve wanted on Amazon. Could be just going through the drive-thru. Either way, clicking that button or handing over my credit card to the Chick-Fil-A worker is a tiny infusion of order and control in a life that can seem confusingly chaotic and complex. 

It’s so small that it’s almost subliminal, but every transaction is a flex. The benefit isn’t just the food or book you purchase. It’s the fact that you demonstrated the power of being able to purchase it. 

From a broader cultural perspective, let’s talk about unboxing videos. These are videos–you can find thousands upon thousands of them on YouTube–where someone gets a brand new gizmo and films a kind of ritualized process of unpacking it. 

This is distinct from a product review (a separate and more obviously useful genre). Some unboxing videos have little tidbits of assessment, but that’s beside the point. The emphasis is on the voyeuristic appeal of watching someone undress an expensive, virgin item. 

And yeah, I went with deliberately sexual language in that last sentence because it’s impossible not to see the parallels between brand newness and virginity, or between ornate and sophisticated product packaging and fashionable clothing, or between unboxing an item and unclothing a person. I’m not saying it’s literally sexual, but the parallels are too strong to ignore.

These do not strike me as the hallmarks of a healthy culture, and I haven't even touched on the vast amounts of waste. Of course there's the literal waste, both from all that aforementioned packaging and from replacing consumer goods (electronics, clothes, etc.) at an ever-faster pace. But there's also the opportunity cost. If you spend three or four or ten times more on a pair of shoes to get the right brand and style than you would on an equally serviceable pair without the right branding, well… isn't that waste? You could have spent the money on something else or, better still, saved it or even worked less.

This rampant consumerism isn’t making us objectively better off or happier. It’s impossible to separate consumerism from status, and status is a zero-sum game. For every winner, there must be a loser. And that means that, as a whole, status-seeking can never make us better off. We’re working ourselves to death to try and win a game that doesn’t improve our world. Why?

Advertising is the proximate cause. Somewhere along the way, advertisers realized that instead of trying to persuade people directly that a product would serve some particular need, you could bypass the rational argument and appeal to subconscious desires and fears. Doing this allows for things like "brand loyalty." It also detaches consumption from need. You can have enough physical objects, but can you ever have enough contentment, or security, or joy, or peace?

So car commercials (to take one example) might mention features, but most of the work is done by stoking your desires: for excitement if it’s a sports car, for prestige if it’s a luxury car, or for competence if it’s a pickup truck. Then those desires are associated with the make and model of the car and presto! The car purchase isn’t about the car anymore. It’s about your aspirations as a human being. 

The really sinister side-effect is that when you hand over the cash to buy whatever you've been persuaded to buy, what you're actually hoping for is not a car or ice cream or a video game system. What you're actually seeking is the fulfillment of a much deeper desire for belonging or safety or peace or contentment. Since no product can actually meet those deeper desires, advertising simultaneously stokes longing and redirects us away from avenues that could potentially fulfill it. We're all like Dumbledore in the cave, drinking poison that only makes us thirstier and thirstier.

One commercial will not have any discernible effect, of course, but life in 21st century America is a life saturated by these messages. 

And if you think it’s bad enough when the products sell you something external, what about all the products that promise to make you better? Skinnier, stronger, tanner, whatever. The whole outrage of fashion models photoshopped past biological possibility is just one corner of the overall edifice of an advertising ecosystem that is calculated to make us hungry and then sell us meals of thin air. 

I developed this theory that advertising fuels consumerism, which sabotages our happiness at an individual and social level, when I was a teenager in the 1990s. There was no social media back then.

So, getting back to The Social Dilemma, the problem isn't that life was fine and dandy until social networking came along and destroyed everything. The problem is that we already lived in a sick, consumerist society where advertising inflamed desires and directed them away from any hope of fulfillment–and then social media made it even worse.

After all, everything that social media does has been done before. 

News feeds are tweaked to keep you scrolling endlessly? Radio stations have endlessly fiddled with their formulas for placing advertisements to keep you from changing the dial. TV shows were written around advertising breaks to make sure you waited for the action to continue. (Watch any old episode of Law and Order to see what I mean.) Social media does the same thing; it's just better at it. (Partially through individualized feeds and AI algorithms, but also through effectively crowd-sourcing the job: every meme you post contributes to keeping your friends and family ensnared.)

Advertisements that bypass objective appeals to quality or function and appeal straight to your personal identity, your hopes, your fears? Again, this is old news. Consider the fact that you immediately picture different stereotypes for the kind of person who drives a Ford F-150, a Subaru Outback, or a Honda Civic. Old-fashioned advertisements were already well on the way to fracturing society into "image tribes" that defined themselves and each other at least in part by their consumption patterns. Social media just doubled down on that trend by allowing ever smaller and more homogeneous tribes to find and socialize with each other (and be targeted by advertisers).

So the biggest thing missing from The Social Dilemma was the realization that social media isn't some strange new problem. It's an old problem made worse.

Solutions

The final shortcoming of The Social Dilemma is that it offered no solutions. This is an odd gap, because at least one potential solution is pretty obvious: stop relying on ad-supported products and services. If you paid $5/month for your Facebook account and that was their sole revenue stream (no ads allowed), then a lot of the perverse incentives around manipulating your feed would go away.
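
For a rough sense of scale, here's a back-of-the-envelope sketch in Python. The two-billion-user figure is a purely hypothetical, Facebook-scale assumption of mine; only the $5/month comes from the paragraph above.

```python
# Hypothetical numbers: a Facebook-scale platform funded by a flat fee.
users = 2_000_000_000   # assumed user base, for illustration only
monthly_fee = 5.00      # the $5/month figure from the text

annual_revenue = users * monthly_fee * 12
print(f"${annual_revenue / 1e9:,.0f}B per year")  # -> $120B per year
```

Whatever the real numbers turn out to be, the sketch makes the structural point: subscription revenue scales with users rather than with engagement, and that is exactly what changes the incentives.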

Another solution would be stricter privacy controls. As I mentioned above, the biggest differentiator between social media and older, broadcast media is individualization. I've read (I can't remember where) about the idea of privacy collectives: groups of consumers could band together, withhold their data from social media companies, and then dole it out in exchange for revenue (why shouldn't you get paid for the advertisements you watch?) or just refuse to participate at all.

These solutions have drawbacks. It sounds nice to get paid for watching ads (nicer than the alternative, anyway) and to have control over your data, but there are some fundamental economic realities to consider. “Free” services like Facebook and Gmail and YouTube can never actually be free. Someone has to pay for the servers, the electricity, the bandwidth, the developers, and all of that. If advertisers don’t, then consumers will need to. Individuals can opt out and basically free-ride on the rest of us, but if everyone actually did it then the system would collapse. (That’s why I don’t use ad blockers, by the way. It violates the categorical imperative.)

And yeah, paying $5/month to Twitter (or whatever) would significantly change the incentives to manipulate your feed, but it wouldn't make them go away entirely. They'd still have every incentive to keep you as highly engaged as possible–to make sure you never canceled your subscription and, better yet, enlisted all your friends to sign up, too.

Still, it would have been nice if The Social Dilemma had spent some time talking about specific possible solutions.

On the other hand, here’s an uncomfortable truth: there might not be any plausible solutions. Not the kind a Netflix documentary is willing to entertain, anyway.

In the prior section, I said “advertising is the proximate cause” of consumerism (emphasis added this time). I think there is a deeper cause, and advertising–the way it is done today–is only a symptom of that deeper cause.

When you stop trying to persuade people to buy your product directly–by appealing to their reason–and start trying to bypass their reason to appeal to subconscious desires you are effectively dehumanizing them. You are treating them as a thing to be manipulated. As a means to an end. Not as a person. Not as an end in itself. 

That’s the supply side: consumerism is a reflection of our willingness to tolerate treating each other as things. We don’t love others.

On the demand side, the emptier your life is, the more susceptible you become to this kind of advertising. Someone who actually feels belonging in their life on a consistent basis isn’t going to be easily manipulated into buying beer (or whatever) by appealing to that need. Why would they? The need is already being met.

That’s the demand side: consumerism is a reflection of how much meaning is missing from so many of our lives. We don’t love God (or, to be less overtly religious, feel a sense of duty and awe towards transcendent values).

As long as these underlying dysfunctions are in place, we will never successfully detoxify advertising through clever policies and incentives. There’s no conceivable way to reasonably enforce a law that says “advertising that objectifies consumers is illegal,” and any such law would violate the First Amendment in any case. 

The difficult reality is that social media is not intrinsically toxic any more than advertising is intrinsically toxic. What we’re witnessing is our cultural maladies amplified and reflected back through our technologies. They are not the problem. We are.

Therefore, the one and only way to detoxify our advertising and social media is to overthrow consumerism at the root. Not with creative policies or stringent laws and regulations, but with a fundamental change in our cultural values. 

We have the template for just such a revolution. The most innovative inheritance of the Christian tradition is the belief that, as children of God, every human life is individually and intrinsically valuable. An earnest embrace of this principle would make manipulative advertising unthinkable and intolerable. Christianity–like all great religions, but perhaps with particular emphasis–also teaches that a valuable life is found only in the service of others, service that would fill the emptiness in our lives and make us dramatically less susceptible to manipulation in the first place.

This is not an idealistic vision of Utopia. I am not talking about making society perfect. Only making it incrementally better. Consumerism is not binary. The sickness is a spectrum. Every step we could take away from our present state and towards a society more mindful of transcendent ideals (truth, beauty, and the sacred) and more dedicated to the love and service of our neighbors would bring a commensurate reduction in the sickness of manipulative advertising that results in tribalism, animosity, and social breakdown. 

There's a word for what I'm talking about, and the word is: repentance. Consumerism–the underlying cause of the toxic advertising that is the kernel of the destruction wrought by social media–is the cultural incarnation of our pride and selfishness. We can't jury-rig an economic or legal solution to a fundamentally spiritual problem.

We need to renounce what we’re doing wrong, and learn–individually and collectively–to do better.

Cancel Culture Is Real


Last month, Harper's published an anti-cancel-culture statement: A Letter on Justice and Open Debate. The letter was signed by a wide variety of writers and intellectuals, ranging from Noam Chomsky to J. K. Rowling. It was a kind of radical centrist manifesto, including major names like Jonathan Haidt and John McWhorter (two of my favorite writers) and also crossing lines to pick up folks like Matthew Yglesias (not one of my favorite writers, but I respect him for putting his name to this letter).

The letter kicked up a storm of controversy from the radical left, which basically boiled down to two major contentions. 

  1. There is no such thing as cancel culture.
  2. Everyone has limits on what speech they will tolerate, so there’s no difference between the social justice left and the liberal left other than where to draw the lines.

The “Profound Consequences” of Cancel Culture

The first contention was represented in pieces like this one from the Huffington Post: Don’t Fall For The ‘Cancel Culture’ Scam. In the piece, Michael Hobbes writes:

While the letter itself, published by the magazine Harper’s, doesn’t use the term, the statement represents a bleak apogee in the yearslong, increasingly contentious debate over “cancel culture.” The American left, we are told, is imposing an Orwellian set of restrictions on which views can be expressed in public. Institutions at every level are supposedly gripped by fears of social media mobs and dire professional consequences if their members express so much as a single statement of wrongthink.

This is false. Every statement of fact in the Harper’s letter is either wildly exaggerated or plainly untrue. More broadly, the controversy over “cancel culture” is a straightforward moral panic. While there are indeed real cases of ordinary Americans plucked from obscurity and harassed into unemployment, this rare, isolated phenomenon is being blown up far beyond its importance.

There is a kernel of truth to what Hobbes is saying, but it is only a kernel. Not that many ordinary Americans are getting "canceled," and some of those who are canceled are not entirely expunged from public life. They don't all lose their jobs.

But then, they don’t all have to lose their jobs for the rest of us to get the message, do they?

The basic analytical framework here is wrong. Hobbes assumes that the "profound consequences" of cancel culture have yet to manifest. "Again and again," he writes, "the decriers of 'cancel culture' intimate that if left unchecked, the left's increasing intolerance for dissent will result in profound consequences."

The reason he can talk about hypothetical future consequences is that he's thinking about the wrong consequences. Hobbes appears to think that the purpose of cancel culture is to cancel lots and lots of people. If we don't see hordes–thousands, maybe tens of thousands–of people canceled, then there aren't any "profound consequences."

This is absurd. The mob doesn't break kneecaps for the sake of breaking kneecaps. It breaks kneecaps to send a message to everyone else: pay up without resisting. Intimidation campaigns do not exist to make examples out of everyone. They make examples out of a few people in order to intimidate many more.

Cancel culture is just such an intimidation campaign, and so the “profound consequences” aren’t the people who are canceled. The “profound consequences” are the people–not thousands or tens of thousands but millions–who hide their beliefs and stop speaking their minds because they’re afraid. 

And yes, I mean millions. Cato runs polls on this topic, and it found that 58% of Americans had "political views they're afraid to share" in 2017 and that, as of just a month ago, the number had climbed to 62%.

Gee, nearly two-thirds of Americans are afraid to speak their minds. How’s that for “profound consequences”?

Obviously Cato has a viewpoint here, but other studies are finding similar results. Politico did their own poll, and while it didn’t ask about self-censoring, it did ask what Americans think about cancel culture. According to the poll, 46% think it has gone “too far” while only 10% think it has gone “not far enough”. 

Moreover, these polls reinforce something obvious: cancel culture is not just some general climate of acrimony. According to both the Cato and Politico polls, Republicans are much more likely to self-censor as a result of cancel culture (77% vs. 52%), and Democrats are much more likely to participate in the silencing (~50% of Democrats "have voiced their displeasure with a public figure on social media" vs. ~30% of Republicans).

Contrast these poll results with what Hobbes calls the “pitiful stakes” of cancel culture. He mocks low-grade intimidation like “New York Magazine published a panicked story about a guy being removed from a group email list.” Meanwhile, more than three quarters of Republicans are afraid to be honest about their own political beliefs. We don’t need to worry about hypothetical future profound consequences. They’re already here.

What Makes Cancel Culture Different

The second contention–which is that everyone has at least some speech they’d enthusiastically support canceling–is a more serious objection. After all: it’s true. All but the very most radical of free speech defenders will draw the line somewhere. If this is correct, then isn’t cancel culture just a redrawing of boundaries that have always been present?

To which I answer: no. There really is something new and different about cancel culture, and it’s not just the speed or ferocity of its adherents.

The difference goes back to a post I wrote a few months ago about the idea of an ideological demilitarized zone. I don’t think I clearly articulated my point in that post, so I’m going to reframe it (very briefly) in this one.

A normal, healthy person will draw a distinction between opinions they disagree with and actively oppose and opinions they disagree with that merit toleration or even consideration. That's what I call the "demilitarized zone": the collection of opinions that you think are wrong but also reasonable and defensible.

Cancel culture has no DMZ.

Think I'm exaggerating? This is a post from a Facebook friend (someone I know IRL) just yesterday:

[Screenshot of a Facebook post calling J. K. Rowling a "transphobic piece of sh-t."]

You can read the opinion of J. K. Rowling for yourself here. Agree or disagree, it is very, very hard for any reasonable person to come away thinking that Rowling has anything approaching personal animus towards anyone who is transgender for being transgender. (The kind of animus that might justify calling someone a "transphobic piece of sh-t" and trying to retcon her out of reality.) In the piece, she writes with empathy and compassion of the transgender community and states emphatically that "I know transition will be a solution for some gender dysphoric people," adding that:

Again and again I’ve been told to ‘just meet some trans people.’ I have: in addition to a few younger people, who were all adorable, I happen to know a self-described transsexual woman who’s older than I am and wonderful.

So here's the difference between cancel culture and basically every other viewpoint on the political spectrum: the others can acknowledge shades of grey and areas where reasonable people see things differently; cancel culture can't and won't. Cancel culture is binary (ironically). You're either 100% in conformity with the ideology or you're "a —–phobic piece of sh-t."

This is not incidental, by the way. Liberal traditions trace their roots back to the Enlightenment and include an assumption that truth exists as an objective category. As long as that’s the case–as long as there’s an objective reality out there–then there is a basis for discussion about it. There’s also room for mistaken beliefs about it. 

Cancel culture traces its roots back to critical theory, which rejects notions of reason and objective truth and sees instead only power. It’s not the case that people are disagreeing about a mutually accessible, external reality. Instead, all we have are subjective truth claims which can be maintained–not by appeal to evidence or logic–but only through the exercise of raw power.

Liberal traditions–be they on the left or on the right–view conflict through a lens that is philosophically compatible with humility, correction, cooperation, and compromise. That’s not to say that liberal traditions actually inhabit some kind of pluralist Utopia where no one plays dirty to win. It’s not like American politics (or politics anywhere) existed in some kind of genteel Garden of Eden until critical theory showed up. But no matter how acrimonious or dirty politics got before cancel culture, there was also the potential for cross-ideological discussion. Cancel culture doesn’t even have that.

This means that, while it's possible for other viewpoints to coexist in a pluralist society, it is not possible for cancel culture to do the same. It isn't a different variety of the same kind of thing. It's a new kind of thing: a totalitarian ideology with no self-limiting principle, one that views any and all dissent as an existential threat because its own truth claims are rooted solely in an appeal to power. For cancel culture, being right and winning are the same thing, and every single debate is a facet of the same existential struggle.

So yes, all ideologies want to cancel something else. But only cancel culture wants to cancel everything else.

Last Thoughts

Lots of responders to the Harper’s letter pointed out that the signers were generally well-off elites. It seemed silly, if not outright hypocritical, for folks like that to whine about cancel culture, right?

My perspective is rather different. As someone who's just an average Joe–no book deals, no massive social media following, no tenure, nothing like that–I deeply appreciate someone with J. K. Rowling's stature trading some of her vast hoard of social capital to keep the horizons of public discourse from narrowing ever further.

And that's exactly why the social justice left hates her so much. They understand power, and they know how crippling it is to their cause to have someone like her demur from their rigid orthodoxy. Their concern isn't alleviated by the fact that her dissent is gentle and reasonable. It's worsened, because that makes it even harder to cancel her and underscores just how toxic their totalitarian ideology really is.

I believe in objective reality. I believe in truth. But I’m pragmatic enough to understand that power is real, too. And when someone like J. K. Rowling uses some of her power in defense of liberalism and intellectual diversity, I feel nothing but gratitude for the help.

We who want to defend the ideals of classical liberalism know just how much we could use it.

Note on Critics of Civilization

Shrapnel from a Unabomber attack. Found on Flickr.

I came across this article in my Facebook feed: Children of Ted. The lead-in states:

Two decades after his last deadly act of ecoterrorism, the Unabomber has become an unlikely prophet to a new generation of acolytes.

I don’t have a ton of patience for this whole line of reasoning, but it’s trendy enough that I figure I ought to explain why it’s so silly.

Critics of industrialization are far from new, and obviously they have a point. As long as we don’t live in a literal utopia, there will be things wrong with our society. They are unlikely to get fixed without acknowledging them. What’s more, in any sufficiently complex system (and human society is pretty complex), any change is going to have both positive and negative effects, many of which will not be immediately apparent.

So if you want to point out that there are bad things in our society: yes, there are. If you want to point out that this or that particular advance has had deleterious side effects: yes, all changes do. But if you take the position that we would have been better off in a pre-modern, pre-industrial, or even pre-agrarian society: you're a hypocritical nut job.

I addressed this trendy argument when I reviewed Yuval Noah Harari's Sapiens: A Brief History of Humankind. Quoting myself:

Harari is all-in for the hypothesis that the Agricultural Revolution was a colossal mistake. This is not a new idea. I’ve come across it several times, and when I did a quick Google search just now I found a 1987 article by Jared Diamond with the subtle title: The Worst Mistake in the History of the Human Race. Diamond’s argument then is as silly as Harari’s argument is now, and it boils down to this: life as a hunter-gatherer is easy. Farming is hard. Ergo, the Agricultural Revolution was a bad deal. If we’d all stuck around being hunter-gatherers we’d be happier.

There are multiple problems with this argument, and the one I chose to focus on at the time is that it's hedonistic. Another observation one can make is that if being a hunter-gatherer is so great, nothing's really stopping Diamond or Harari from living that way. I'm not saying it would be trivial, but of all the folks who sagely nod their heads and agree with the books and articles claiming our illiterate ancestors had it so much better… how many are even seriously making the attempt?

The argument I want to make is slightly different from the ones I've made before and is based on economics.

Three fundamental macroeconomic concepts are production, consumption, and investment. Every year a society produces a certain amount of stuff (mining minerals, refining them, turning them into goods, growing crops, etc.). All of that stuff is eventually used in one of two ways: either it's consumed (you eat the crops) or invested (you plant the seeds instead of eating them).
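
To make the bookkeeping concrete, here's a minimal sketch of that accounting identity in Python. Every number in it is made up for illustration.

```python
# Everything produced in a year is either consumed or invested.
production = 100.0                      # total output (arbitrary units)
consumption = 85.0                      # output used up this year
investment = production - consumption   # output set aside for the future

assert consumption + investment == production  # the identity always holds

# Investment is the seed corn: assume a 20% return, purely for illustration.
next_year_production = production + 0.20 * investment
print(next_year_production)  # 103.0
```

The identity itself is trivial; the leverage is all in how much gets invested and what return that investment earns.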

From a material standpoint, the biggest change in human history has been the dramatic rise in per-capita production over the last few centuries, especially during the Industrial Revolution. This is often seen as a triumph of science, but that is mostly wrong. Virtually none of the important inventions of the Industrial Revolution were produced by scientists, or even by lay persons attempting to apply scientific principles. They were almost uniformly invented by self-taught tinkerers experimenting with practical rather than theoretical innovations.

Another way to see this is to observe that many of the “inventions” of the Industrial Revolution had been discovered many times in the past. A good example of this is the steam engine. In “Destiny Disrupted,” Tamim Ansary observes:

Often, we speak of great inventions as if they make their own case merely by existing. But in fact, people don’t start building and using a device simply because it’s clever. The technological breakthrough represented by an invention is only one ingredient in its success. The social context is what really determines whether it will take. The steam engine provides a case in point. What could be more useful? What could be more obviously world-changing? Yet the steam engine was invented in the Muslim world over three centuries before it popped up in the West, and in the Muslim world it didn’t change much of anything. The steam engine invented there was used to power a spit so that a whole sheep might be roasted efficiently at a rich man’s banquet. (A description of this device appears in a 1551 book by the Turkish engineer Taqi al-Din.) After the spit, however, no other application for the device occurred to anyone, so it was forgotten.

Ansary understands that the key ingredient in whether an invention takes off (like the steam engine in 18th-century Western Europe) or dies stillborn (like the steam engine in the 16th-century Islamic world) is the social context around it.

Unfortunately, Ansary mostly buys into the same absurd notion that I’m debunking, which is that all this progress is a huge mistake. According to him, the Chinese could have invented mechanized industry in the 10th century, but the benevolent Chinese state had the foresight to see that this would take away jobs from its peasant class and, being benevolent, opted instead to keep the Chinese work force employed.

This is absurd. First, because there's no chance that the Chinese state (or anyone) could have foreseen the success and consequences of mechanized industry in the 10th century and made policy based on it, even if they'd wanted to. Second, because the idea that it's better to keep society inefficient rather than risk unemployment is, in the long run, disastrous.

According to Ansary, the reason that steam engines, mechanized industry, etc. all took place in the West was misanthropic callousness:

Of course, this process [modernization] left countless artisans and craftspeople out of work, but this is where 19th century Europe differed from 10th century China. In Europe, those who had the means to install industrial machinery had no particular responsibility for those whose livelihood would be destroyed by a sudden abundance of cheap machine-made goods. Nor were the folks they affected down-stream their kinfolk or fellow tribesmen–just strangers who they had never met and would never know by name. What's more, it was somebody else's job to deal with the social disruptions caused by widespread unemployment, not theirs. Going ahead with industrialization didn't signify some moral flaw in them, it merely reflected the way this particular society was compartmentalized. The Industrial Revolution could take place only where certain social preconditions existed and in Europe at that time they happened to exist.

Not a particular moral flaw in the individual actors, Ansary concedes, but still a society wantonly reckless and unconcerned with the fate of its poor–in contrast, supposedly, to the enlightened empires that foresaw the Industrial Revolution from end to end and declined it for the sake of their humble worker class.

The point is that when a society has the right incentives (I'd argue that we need individual liberty via private property and a restrained state, alongside compartmentalization), individual innovations are harnessed, incorporated, and built upon in a snowball effect that leads to ever greater productivity. A lot of the productivity comes from the cool new machines, but not all of it.

You see, once you have a few machines that give that initial boost to productivity, you free up people in your society to do other things. When per-capita production is very, very low, everyone has to be a farmer. You can have a tiny minority doing rudimentary crafts, but the vast majority of your people need to work day-in and day-out just to provide enough food for the whole population not to starve to death.

When per-capita production is higher, fewer and fewer people need to work creating the basic rudiments (food and clothes), and this frees people up to specialize. And specialization is the second half of the secret (along with new machines) that leads to the virtuous cycle of modernization. New tools boost productivity, higher productivity frees up workers to try new things, and some of those new things include making even more new tools.
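
Here's a deliberately crude toy model of that snowball in Python. Every parameter is invented for illustration, not calibrated to anything real.

```python
# Toy model: tools raise productivity, productivity frees workers from
# subsistence farming, and some freed workers build better tools.
population = 1000
subsistence_need = 1000.0  # total output required to feed everyone
productivity = 1.05        # output per farm worker, barely above subsistence

for generation in range(5):
    farmers_needed = subsistence_need / productivity
    freed_workers = max(population - farmers_needed, 0)
    # Assume one in ten freed workers tinkers with tools, and each tinkerer
    # nudges everyone's productivity up by a tenth of a percent.
    productivity *= 1 + 0.001 * (freed_workers / 10)
    print(f"gen {generation}: farmers {farmers_needed:.0f}, "
          f"freed {freed_workers:.0f}, productivity {productivity:.3f}")
```

Even with these tiny made-up numbers, the count of freed workers grows every generation, because each round of tinkering shrinks the farm workforce needed for the next round. That's the positive feedback loop in miniature.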

I'm giving you the happy side of the story: some people go from being farmers to being inventors. I don't mean to deny the unhappy side, only to balance it: some people go from being skilled workers to being menial laborers when a machine renders their skills obsolete. That also happens, although it's worth noting that the threat from modernization is generally not to the very poorest. Americans like to finger-wag at "sweatshops," but if your alternative is subsistence farming, then even a sweatshop may very well look appealing. Which is why so many of the very poorest keep migrating from farms to cities (in China, for example), and why opposition to modernization never comes from the poorest classes (who have little to lose) but from the precarious members of the middle class (who do).

So my high-level story of modernization has a couple of key points.

  1. If you want a high standard of living for a society, you need a high level of per capita production.
  2. You get a high level of per capita production through a positive feedback loop between technological innovation and specialization. (This might be asymptotic.)
  3. The benefits of this positive feedback loop include high-end stuff (like modern medicine) and also things we take for granted. And I don’t mean electricity (although, that, too) but also literacy.
  4. The costs of this positive feedback loop include the constant threat of obsolescence for at least some workers, along with greater capacity to destroy on an industrial scale (either the environment or each other).

So the fundamental question you have to ask is whether you want to try and figure out how to manage the costs so that you can enjoy the benefits, or whether the whole project isn’t worth it and we should just give up and start mailing bombs to each other until it all comes crashing down.

The part that really frustrates me the most–the part that spurred me to write this today–is that folks like Ted Kaczynski (the original Unabomber) or John Jacobi (the first of his acolytes profiled in the New York Mag story) are only even possible in a modern, industrialized society.

They are literate, educated denizens of a society that produces so much stuff that lots of its members can survive while producing hardly anything at all. We live in an age of super-abundance, and it turns out that abundance creates its own variety of problems. Obesity is one. Another, apparently, is a certain class of thought that advocates social suicide.

Because that's what we're talking about. Diamond and Harari may just be toying with the notion because it sells books and makes them look edgy, but folks like John Jacobi or Ted Kaczynski would–if they had their way–bring about a world without any of the things that make their elitist theorizing possible in the first place.

It is a great tragedy of human nature that the hard-fought victories of yesterday's heroic pioneers and risk-takers are casually dismissed by a following generation that doesn't even realize its apparent radicalism is just another symptom of super-abundance.

They will never succeed in reducing humanity to a pre-industrial state, but they–and others who lack the capacity to appreciate what they've been given–can make plenty of trouble in the meantime. We can only hope the rising generation finds a more constructive, aspirational, and less suicidal frame of mind.