Anti-Provincial Provincialism and Fighting Monsters

Part 1: Anti-American Americanism

I ran across a humorous meme in a Facebook group that got me thinking about anti-provincial provincialism. Well, not the meme, but a response to it. Here’s the original meme:

Now check out this (anonymized) response to it (and my response to them):

What has to happen, I wondered, for someone to assert that English is the most common language “only in the United States”? Well, they have to be operating from a kind of anti-Americanism so potent that it has managed to swing all the way around and become an extreme form of American centrism. After all, this person was so eager to take the US down a peg (I am assuming) that they managed to inadvertently erase the entire Anglosphere. The only people who exclude entire categories of countries from consideration are rabid America Firsters and rabid America Lasters. The commonality? They’re both only thinking about America.

It is a strange feature of our times that so many folks seem to become the thing they claim to oppose. The horseshoe theory is having its day.

The conversation got even stranger when someone else showed up to tell me that I’d misread the Wikipedia article that I’d linked. Full disclosure: I did double check this before I linked to it, but I still had an “uh oh” moment when I read their comment. Wouldn’t be the first time I totally misread a table, even when specifically checking my work. Here’s the comment:

Thankfully, dear reader, I did not have to type out the mea culpa I was already composing in my mind. Here’s the data (and a link):

My critic had decided to focus only on the first language (L1) category. The original point about “most commonly spoken” made no such distinction. So why rely on it? Same reason, I surmise, as the “only in the US” line of argument: to reflexively oppose anything with the appearance of American jingoism.

Because we can all see that’s the subtext here, right? To claim that English is the “most common language” when it is also the language (most) Americans speak is to appear to be making some sort of rah-rah ‘Murica statement. Except that… what happens if it’s just objectively true?

And it is objectively true. English has the greatest number of total speakers in the world by a wide margin. Even more tellingly, English L2 speakers outnumber Chinese L2 speakers by more than 5 to 1. This means that when someone chooses a second language to study, they pick English roughly five times more often than Chinese. No matter how you slice it, the fact that English is the most common language is just a fact about the reality we currently inhabit.

Not only that, but connecting this fact to American chauvinism is historically ignorant. This is a discussion about the English language, not the American one, and the linguistic prevalence of English predates the rise of America as a great power. If you think millions of Indians conduct business in English because of America, then you need to open a history book. The Brits started this state of affairs back when the sun really did never set on their empire. We just inherited it.

I wonder if there’s something about opposing something thoughtlessly that causes you to eventually become that thing. Maybe Nietzsche’s aphorism doesn’t just sound cool. Maybe there’s really something to it.

This image doesn’t include enough of the quote, which is: “Beware that, when fighting monsters, you yourself do not become a monster… for when you gaze long into the abyss, the abyss gazes also into you.” But it’s cute. Original from Instagram.

Part 2: Anti-Provincial Provincialism

My dad taught me the phrase “anti-provincial provincialism” when I was a kid. We were talking about the tendency of some Latter-day Saint academics to over-correct for the provincialism of their less-educated Latter-day Saint community and in the process recreate a variety of the provincialism they were running away from. Let me fill this in a bit.

First, a lot of Latter-day Saints can be provincial.

This shouldn’t shock anyone. Latter-day Saint culture is tight-knit and uniform. For teenagers when I was growing up, you had:

  • Three hours of Church on Sunday
  • About an hour of early-morning seminary in the church building before school Monday – Friday
  • Some kind of 1-2 hour youth activity in the church building on Wednesday evenings

This is plenty of time to tightly assimilate and indoctrinate the rising generation, and for the most part this is a good thing. I am a strong believer in liberalism, which sort of secularizes the public square to accommodate different religious traditions. This secularization isn’t anti-religious, it is what enables those religions to thrive by carving out their own spaces to flourish. State religions have a lot of power, but this makes them corrupt and anemic in terms of real devotion. Pluralism is good for all traditions.

But a consequence of the tight-knit culture is that Latter-day Saints can grow up unable to clearly differentiate between general cultural touchstones (Latter-day Saints love Disney and The Princess Bride, but so do lots of people) and unique cultural touchstones (like Saturday’s Warrior and Johnny Lingo).

We have all kinds of arcane knowledge that nobody outside our culture knows or cares about, especially around serving two-year missions. Latter-day Saints know what the MTC is (even if they mishear it as “empty sea” when they’re little, like I did) and can recount their parents’ and relatives’ favorite mission stories. They also employ some theological terms in ways that non-LDS (even non-LDS Christians) would find strange.

And the thing is: if nobody tells you, then you never learn which things are things everyone knows and which things are part of your strange little religious community alone. Once, when I was in elementary school, I called my friend on the phone and his mom picked up. I addressed her as “Sister Apple” because Apple was their last name and because at that point in my life the only adults I talked to were family, teachers, or in my church. Since she wasn’t family or a teacher, I defaulted to addressing her as I was taught to address the adults in my church.

As I remember it today, her reaction was quite frosty. Maybe she thought I was in a cult. Maybe I’d accidentally raised the specter of the extremely dark history of Christians imposing their faith on Jews (my friend’s family was Jewish). Maybe I am misremembering. All I know for sure is I felt deeply awkward, apologized profusely, tried to explain, and then never made that mistake ever again. Not with her, not with anyone else.

I had these kinds of experiences–experiences that taught me to see clearly the boundaries between Mormon culture and other cultures–not only because I grew up in Virginia but also because (for various reasons) I didn’t get along very well with my LDS peer group for most of my teen years. I had very few close LDS friends from the time that I was about 12 until I was in my 30s. Lots of LDS folks, even those who grew up outside of Utah, didn’t have those kinds of experiences. Or had fewer of them.

So here’s the dynamic you can run into: a Latter-day Saint without this kind of awareness trips over some of the (to them) invisible boundaries between Mormon culture and the surrounding culture. If they do this in front of another Latter-day Saint who does know, then the one who’s in the know has a tendency to cringe.

This is where you get provincialism (the Latter-day Saint who doesn’t know any better) and anti-provincial provincialism (the Latter-day Saint who is too invested in knowing better). After all, why should one Latter-day Saint feel so threatened by a social faux pas of another Latter-day Saint unless they are really invested in that group identity?

My dad was frustrated, at the time, with Latter-day Saint intellectuals who liked to discount their own culture and faith. They were too eager to write off Mormon art or culture or research that was amenable to faithful LDS views. They thought they were being anti-provincial. They thought they were acting like the people around them, outgrowing their culture. But the fact is that their fear of being seen as or identified with Mormonism made them just as obsessed with Mormonism as the most provincial Mormon around. And twice as annoying.

Part 3: Beyond Anti

Although I should have known better, given what my parents taught me growing up, I became one of those anti-provincial provincials for a while. I had a chip on my shoulder about so-called “Utah Mormons”. I felt that the Latter-day Saints in Utah looked down on us out in the “mission field,” so I turned the perceived slight into a badge of honor. Yeah, maybe this was the mission field, and if so, that meant those of us out here doing the work were better than Utah Mormons. We had more challenges to overcome, couldn’t be lazy about our faith, etc.

And so, like an anti-Americanist who becomes an Americanist, I became an anti-provincial provincialist. I carried that chip on my shoulder into my own mission where, finally meeting a lot of Utah Mormons on neutral territory, I got over myself. Some of them were great. Some of them were annoying. They were just folks. There are pros and cons to living in a religious majority or a minority. I still prefer living where I’m in the minority, but I’m no longer smug about it. It’s just a personal preference. There are tradeoffs.

One of the three or four ideas that’s had the most lasting impact on my life is the idea that there are fundamentally only two human motivations. Love, attraction, or desire on the one hand. Fear, avoidance, or aversion on the other.

Why is it that fighting with monsters turns you into a monster? I suspect the lesson is that how and why you fight your battles is as important as what battles you choose to fight. I wrote a Twitter thread about this on Saturday, contrasting tribal reasons for adhering to a religion with genuine conversion. The thread starts here, but here’s the relevant Tweet:

If you’re concerned about American jingoism: OK. That’s a valid concern. But there are two ways you can stand against it. In fear of the thing. Or out of love for something else. Choose carefully. Because if you’re motivated by fear, then you will–in the end–become the thing your fear motivates you to fight against. You will try to fight fire with fire, and then you will become the fire.

If you’re concerned about Mormon provincialism: OK. There are valid concerns. Being able to see outside your culture and build bridges with other cultures is a good thing. But, here again, you have to ask if you’re more afraid of provincialism or more in love with building bridges. Because if you’re afraid of provincialism, well… that’s how you get anti-provincial provincialism. And no bridges, by the way.

I might rewrite my pinned Tweet one day.

It’s not two truths. It’s just one. You want something? Fight for it. Fighting against things gets you nothing in the end.
2 Timothy 1:7, KJV

Free Speech – A Culture of Tolerance

Tara Henley’s recent podcast episode with Danish free speech advocate Jacob Mchangama was fascinating and encouraging. A quote from Orwell came up that I hadn’t heard before, and it’s worth emphasizing:

The relative freedom which we enjoy depends on public opinion. The law is no protection. Governments make laws, but whether they are carried out, and how the police behave, depends on the general temper in the country. If large numbers of people are interested in freedom of speech, there will be freedom of speech, even if the law forbids it; if public opinion is sluggish, inconvenient minorities will be persecuted, even if laws exist to protect them.

The fact that free speech is not just a legal matter is a vitally important one, because those who restrict free speech to the minimum legal interpretation are actively undermining—wittingly or not—the culture that actual free speech depends on.

Mchangama brought up the example of Athens, which enjoyed a cultural tradition of free speech called parrhesia, which, Mchangama said, “means something like fearless or uninhibited speech.” Although there was no legal basis for parrhesia, it “permeated the Athenian democracy” and led to a “culture of tolerance”.

Clearly a culture of tolerance is not sufficient. Just ask Socrates. But at the same time legal free speech rights aren’t sufficient, either. The historical examples are too numerous to cite, especially in repressive 20th century regimes that often paid lip service to human rights (including the late-stage USSR). The laws were there on paper, but a lot of good they did anyone.

The Death of Socrates (via Wikipedia): when the culture of tolerance wanes and there’s no legal recourse…

Mchangama went on to say that “if people lose faith in free speech and become more intolerant then laws will reflect that change and become more intolerant.” So fostering this culture is vital both to preserve the rights on paper and to ensure those legal rights are actually honored in the real world. So, “how do we foster a culture of free speech?” Mchangama asked. His response, in part:

It is ultimately down to each one of us. So those of us who believe in free speech have a responsibility of making the case for free speech to others, and do it in an uncondescending way, and also one which doesn’t just rely on calling people who want to restrict free speech fascists or totalitarians… [We must] take seriously the concerns of those who are worried about the ugly sides and harmful sides of free speech.

This is a tough balance to strike, but I want to do my part. So let me make two points.

First, the popular line of argument that dismisses anything that’s not a technical violation of the First Amendment is unhelpful. Just as an example, here’s an XKCD cartoon (and I’m usually a huge fan) to show what I mean.

The problem with this kind of free speech minimalism is that it’s intrinsically unstable. If you support free speech but only legally, then you don’t really support free speech at all. Wittingly or not, you are adopting an anti-free speech bias. Because, as Orwell and Mchangama observe, a legal free speech right without accompanying cultural support is a paper tiger with a short life span.

Second, the question isn’t binary. It’s not about whether we should have free speech. It’s about the boundaries of tolerance—legal and cultural—for unpopular speech. To this end, Mchangama decries use of pejoratives like “social justice warrior” for those who want to draw a tighter boundary around what speech is legally and culturally permissible.

I’ve used the SJW term a lot. You can find plenty of instances of it here on this blog. I’ve always been a little uncomfortable with it because I don’t want to use a pejorative, but I wasn’t sure how else to refer to adherents of the post-liberal “successor ideology.”

Maybe that decision to use SJW was understandable, but I’m rethinking it. Either way, the reality is that I’ve imbibed at least some of the tribal animus that comes with the use of the term. I have—again, you can probably find old examples here on this blog—characterized my political opponents by their most extreme examples rather than by the moderate and reasonable folks who have genuine concerns about (in this context) how free speech can negatively impact minorities.

I am not changing my position on free speech. Like Mchangama, I strongly believe that the benefits of a broadly tolerant free speech culture greatly outweigh the costs for the disempowered. But that doesn’t mean there are no costs.

Admitting that it’s a tradeoff, that critics have legitimate concerns, and that the question isn’t binary will—I hope—make me more persuasive as a free speech advocate. Because I really do believe that a thriving culture of free speech is vitally important for the health of liberal democracies and everyone who lives within them. I do not want people to lose that faith.

The Real Social Dilemma

The Social Dilemma is a newly available documentary on Netflix about the peril of social networks. 

The documentary does a decent job of introducing some of the ways social networks (Facebook, Twitter, Pinterest, etc.) are negatively impacting society. If this is your entry point to the topic, you could do worse.

But if you’re looking for a really thorough analysis of what is going wrong or for possible solutions, then this documentary will leave you wanting more. Here are four specific topics–three small and one large–where The Social Dilemma fell short.

AI Isn’t That Impressive

I published a piece in June on Why I’m an AI Skeptic and a lot of what I wrote then applies here. Terms like “big data” and “machine learning” are overhyped, and non-experts don’t realize that these tools are only at their most impressive in a narrow range of circumstances. In most real-world cases, the results are dramatically less impressive. 

The reason this matters is that a lot of the oomph of The Social Dilemma comes from scaring people, and AI just isn’t actually that scary. 

Randall Munroe’s What If article is technically about a robot apocalypse, but the gist of it applies to AI as well.

I don’t fault the documentary for not going too deep into the details of machine learning. Without a background in statistics and computer science, it’s hard to get into the details. That’s fair. 

I do fault them for sensationalism, however. At one point Tristan Harris (one of the interviewees) makes a really interesting point that we shouldn’t be worried about when AI surpasses human strengths, but when it surpasses human weaknesses. We haven’t reached the point where AI is better than a human at the things humans are good at–creative thinking, language, etc. But we’ve already long since passed the point where AI is better than humans at things humans are bad at, such as memorizing and crunching huge data sets. If AI is deployed in ways that leverage human weaknesses, like our cognitive biases, then we should already be concerned. So far this is reasonable, or at least interesting.

But then his next slide (they’re showing a clip of a presentation he was giving) says something like: “Checkmate humanity.”

I don’t know if the sensationalism is in Tristan’s presentation or The Social Dilemma’s editing, but either way I had to roll my eyes.

All Inventions Manipulate Us

At another point, Tristan tries to illustrate how social media is fundamentally unlike other human inventions by contrasting it with a bicycle. “No one got upset when bicycles showed up,” he says. “No one said…. we’ve just ruined society. Bicycles are affecting society, they’re pulling people away from their kids. They’re ruining the fabric of democracy.”

Of course, this isn’t really true. Journalists have always sought sensationalism and fear as a way to sell their papers, and–as this humorous video shows–there was all kinds of panic around the introduction of bicycles.

Tristan’s real point, however, is that bicycles were a passive invention. They don’t actively badger you to get you to go on bike rides. They just sit there, benignly waiting for you to decide to use them or not. In this view, you can divide human inventions into everything before social media (inanimate objects that obediently do our bidding) and after social media (animate objects that manipulate us into doing their bidding).

That dichotomy doesn’t hold up. 

First of all, every successful human invention changes behavior individually and collectively. If you own a bicycle, then the route you take to work may very well change. In a way, the bike does tell you where to go. 

To make this point more strongly, try to imagine what 21st century America would look like if the car had never been invented. No interstate highway system, no suburbs or strip malls, no car culture. For better and for worse, the mere existence of a tool like the car transformed who we are both individually and collectively. All inventions have cultural consequences like that, to a greater or lesser degree.

Second, social media is far from the first invention that explicitly sets out to manipulate people. If you believe the argumentative theory, then language and even rationality itself evolved primarily as ways for our primate ancestors to manipulate each other. It’s literally what we evolved to do, and we’ve never stopped.

Propaganda, disinformation campaigns, and psy-ops are one obvious category of examples with roots stretching back into prehistory. But, to bring things closer to social networks, all ad-supported broadcast media have basically the same business model: manipulate people to captivate their attention so that you can sell them ads. That’s how radio and TV got their commercial start: with the exact same mission statement as GMail, Google search, or Facebook. 

So much for the idea that you can divide human inventions into before and after social media. It turns out that all inventions influence the choices we make and plenty of them do so by design.

That’s not to say that nothing has changed, of course. The biggest difference between social networks and broadcast media is that your social networking feed is individualized.

With mass media, companies had to either pick and choose their audience in broad strokes (Saturday morning for kids, prime time for families, late night for adults only) or try to address two audiences at once (inside jokes for the adults in animated family movies marketed to children). With social media, it’s kind of like you have a radio station or a TV studio that is geared just towards you.

Thus, social media does present some new challenges, but we’re talking about advancements and refinements to humanity’s oldest game–manipulating other humans–rather than some new and unprecedented development with no precursor or context. 

Consumerism is the Real Dilemma

The most interesting subject in the documentary, to me at least, was Jaron Lanier. When everyone else was repeating that cliché about “you’re the product, not the customer” he took it a step or two farther. It’s not that you are the product. It’s not even that your attention is the product. What’s really being sold by social media companies, Lanier pointed out, is the ability to incrementally manipulate human behavior.

This is an important point, but it raises a much bigger issue that the documentary never touched. 

This is the amount of money spent in the US on advertising as a percent of GDP over the last century:

Source: Wikipedia

It’s interesting to note that we spent a lot more (relative to the size of our economy) on advertising in the 1920s and 1930s than we do today. What do you think companies were buying for their advertising dollars in 1930 if not “the ability to incrementally manipulate human behavior”?

Because if advertising doesn’t manipulate human behavior, then why spend the money? If you couldn’t manipulate human behavior with a billboard or a movie trailer or a radio spot, then nobody would spend money on any of those things.

This is the crux of my disagreement with The Social Dilemma. The poison isn’t social media. The poison is advertising. The danger of social media is just that (within the current business model) it’s a dramatically more effective method of delivering the poison.

Let me stipulate that advertising is not an unalloyed evil. There’s nothing intrinsically wrong with showing people a new product or service and trying to persuade them to pay you for it. The fundamental premise of a market economy is that voluntary exchange is mutually beneficial. It leaves both people better off. 

And you can’t have voluntary exchange without people knowing what’s available. Thus, advertising is necessary to human commerce and is part of an ecosystem of flourishing, mutually beneficial exchanges and healthy competition. You could not have modern society without advertising of some degree and type.

That doesn’t mean the amount of advertising–or the kind of advertising–that we accept in our society is healthy. As with  basically everything, the difference between poison and medicine is found in the details of dosage and usage. 

There was a time, not too long ago, when the Second Industrial Revolution led to such dramatically increased levels of production that economists seriously theorized about ever shorter work weeks with more and more time spent pursuing art and leisure with our friends and families. Soon, we’d spend only ten hours a week working, and the rest developing our human potential.

And yet in the time since then, we’ve seen productivity skyrocket (we can make more and more stuff with the same amount of time) while hours worked have remained roughly steady. The simplest reason for this? We’re addicted to consumption. Instead of holding production basically constant (and working fewer and fewer hours), we’ve tried to maximize consumption by keeping as busy as possible. This addiction to consumption–not necessarily to having stuff, but to acquiring it–manifests in some really weird cultural anomalies that, if we witnessed them from an alien perspective, would probably strike us as dysfunctional or even pathological.

I’ll start with a personal example: when I’m feeling a little down I can reliably get a jolt of euphoria from buying something. Doesn’t have to be much. Could be a gadget or a book I’ve wanted on Amazon. Could be just going through the drive-thru. Either way, clicking that button or handing over my credit card to the Chick-Fil-A worker is a tiny infusion of order and control in a life that can seem confusingly chaotic and complex. 

It’s so small that it’s almost subliminal, but every transaction is a flex. The benefit isn’t just the food or book you purchase. It’s the fact that you demonstrated the power of being able to purchase it. 

From a broader cultural perspective, let’s talk about unboxing videos. These are videos–you can find thousands upon thousands of them on YouTube–where someone gets a brand new gizmo and films a kind of ritualized process of unpacking it. 

This is distinct from a product review (a separate and more obviously useful genre). Some unboxing videos have little tidbits of assessment, but that’s beside the point. The emphasis is on the voyeuristic appeal of watching someone undress an expensive, virgin item. 

And yeah, I went with deliberately sexual language in that last sentence because it’s impossible not to see the parallels between brand newness and virginity, or between ornate and sophisticated product packaging and fashionable clothing, or between unboxing an item and unclothing a person. I’m not saying it’s literally sexual, but the parallels are too strong to ignore.

These do not strike me as the hallmarks of a healthy culture, and I haven’t even touched on the vast amounts of waste. Of course there’s the literal waste, both from all that aforementioned packaging and from replacing consumer goods (electronics, clothes, etc.) at an ever-faster pace. There’s also the opportunity cost, however. If you spend three or four or ten times more on a pair of shoes to get the right brand and style than you would on a pair of equally serviceable shoes without the right branding, well… isn’t that waste? You could have spent the money on something else or, better still, saved it or even worked less.

This rampant consumerism isn’t making us objectively better off or happier. It’s impossible to separate consumerism from status, and status is a zero-sum game. For every winner, there must be a loser. And that means that, as a whole, status-seeking can never make us better off. We’re working ourselves to death to try and win a game that doesn’t improve our world. Why?

Advertising is the proximate cause. Somewhere along the way advertisers realized that instead of trying to persuade people directly that this product would serve some particular need, you could bypass the rational argument and appeal to subconscious desires and fears. Doing this allows for things like “brand loyalty.” It also detaches consumption from need. You can have enough physical objects, but can you ever have enough contentment, or security, or joy, or peace?

So car commercials (to take one example) might mention features, but most of the work is done by stoking your desires: for excitement if it’s a sports car, for prestige if it’s a luxury car, or for competence if it’s a pickup truck. Then those desires are associated with the make and model of the car and presto! The car purchase isn’t about the car anymore. It’s about your aspirations as a human being. 

The really sinister side-effect is that when you hand over the cash to buy whatever you’ve been persuaded to buy, what you’re actually hoping for is not a car or ice cream or a video game system. What you’re actually seeking is the fulfillment of a much deeper desire for belonging or safety or peace or contentment. Since no product can actually meet those deeper desires, advertising simultaneously stokes longing and redirects us away from avenues that could potentially fulfill it. We’re all like Dumbledore in the cave, drinking poison that only makes us thirstier and thirstier.

One commercial will not have any discernible effect, of course, but life in 21st century America is a life saturated by these messages. 

And if you think it’s bad enough when the products sell you something external, what about all the products that promise to make you better? Skinnier, stronger, tanner, whatever. The whole outrage of fashion models photoshopped past biological possibility is just one corner of the overall edifice of an advertising ecosystem that is calculated to make us hungry and then sell us meals of thin air. 

I developed this theory that advertising fuels consumerism, which sabotages our happiness at an individual and social level, when I was a teenager in the 1990s. There was no social media back then.

So, getting back to The Social Dilemma, the problem isn’t that life was fine and dandy and then social networking came and destroyed everything. The problem is that we already lived in a sick, consumerist society where advertising inflamed desires and directed them away from any hope of fulfillment, and then social media made it even worse.

After all, everything that social media does has been done before. 

News feeds are tweaked to keep you scrolling endlessly? Radio stations have endlessly fiddled with their formulas for placing advertisements to keep you from changing that dial. TV shows were written around advertising breaks to make sure you waited for the action to continue. (Watch any old episode of Law and Order to see what I mean.) Social media does the same thing; it’s just better at it. (Partially through individualized feeds and AI algorithms, but also through effectively crowd-sourcing the job: every meme you post contributes to keeping your friends and family ensnared.)

Advertisements bypassing objective appeals to quality or function and appealing straight to your personal identity, your hopes, your fears? Again, this is old news. Consider the fact that you immediately picture in your mind different stereotypes for the kind of person who drives a Ford F-150, a Subaru Outback, or a Honda Civic. Old-fashioned advertisements were already well on the way to fracturing society into “image tribes” that defined themselves and each other at least in part in terms of their consumption patterns. Social media just doubled down on that trend by allowing increasingly smaller and more homogeneous tribes to find and socialize with each other (and be targeted by advertisers).

So the biggest thing that was missing from The Social Dilemma was the realization that social media isn’t some strange new problem. It’s an old problem made worse.

Solutions

The final shortcoming of The Social Dilemma is that it offered no solutions. This is an odd gap because at least one potential solution is pretty obvious: stop relying on ad-supported products and services. If you paid $5/month for your Facebook account and that was their sole revenue stream (no ads allowed), then a lot of the perverse incentives around manipulating your feed would go away.

Another solution would be stricter privacy controls. As I mentioned above, the biggest differentiator between social media and older, broadcast media is individualization. I’ve read (can’t remember where) about the idea of privacy collectives: groups of consumers could band together, withhold their data from social media companies, and then dole it out in exchange for revenue (why shouldn’t you get paid for the advertisements you watch?) or just refuse to participate at all.

These solutions have drawbacks. It sounds nice to get paid for watching ads (nicer than the alternative, anyway) and to have control over your data, but there are some fundamental economic realities to consider. “Free” services like Facebook and Gmail and YouTube can never actually be free. Someone has to pay for the servers, the electricity, the bandwidth, the developers, and all of that. If advertisers don’t, then consumers will need to. Individuals can opt out and basically free-ride on the rest of us, but if everyone actually did it then the system would collapse. (That’s why I don’t use ad blockers, by the way. It violates the categorical imperative.)

And yeah, paying $5/month to Twitter (or whatever) would significantly change the incentives to manipulate your feed, but it wouldn’t actually make them go away. They’d still have every incentive to keep you as highly engaged as possible, to make sure you never canceled your subscription and that you enlisted all your friends to sign up, too.

Still, it would have been nice if The Social Dilemma had spent some time talking about specific possible solutions.

On the other hand, here’s an uncomfortable truth: there might not be any plausible solutions. Not the kind a Netflix documentary is willing to entertain, anyway.

In the prior section, I said “advertising is the proximate cause” of consumerism (emphasis added this time). I think there is a deeper cause, and advertising–the way it is done today–is only a symptom of that deeper cause.

When you stop trying to persuade people to buy your product directly–by appealing to their reason–and start trying to bypass their reason to appeal to subconscious desires you are effectively dehumanizing them. You are treating them as a thing to be manipulated. As a means to an end. Not as a person. Not as an end in itself. 

That’s the supply side: consumerism is a reflection of our willingness to tolerate treating each other as things. We don’t love others.

On the demand side, the emptier your life is, the more susceptible you become to this kind of advertising. Someone who actually feels belonging in their life on a consistent basis isn’t going to be easily manipulated into buying beer (or whatever) by appealing to that need. Why would they? The need is already being met.

That’s the demand side: consumerism is a reflection of how much meaning is missing from so many of our lives. We don’t love God (or, to be less overtly religious, feel a sense of duty and awe towards transcendent values).

As long as these underlying dysfunctions are in place, we will never successfully detoxify advertising through clever policies and incentives. There’s no conceivable way to reasonably enforce a law that says “advertising that objectifies consumers is illegal,” and any such law would violate the First Amendment in any case. 

The difficult reality is that social media is not intrinsically toxic any more than advertising is intrinsically toxic. What we’re witnessing is our cultural maladies amplified and reflected back through our technologies. They are not the problem. We are.

Therefore, the one and only way to detoxify our advertising and social media is to overthrow consumerism at the root. Not with creative policies or stringent laws and regulations, but with a fundamental change in our cultural values. 

We have the template for just such a revolution. The most innovative inheritance of the Christian tradition is the belief that, as children of God, every human life is individually and intrinsically valuable. An earnest embrace of this principle would make manipulative advertising unthinkable and intolerable. Christianity–like all great religions, but perhaps with particular emphasis–also teaches that a valuable life is found only in the service of others, service that would fill the emptiness in our lives and make us dramatically less susceptible to manipulation in the first place.

This is not an idealistic vision of Utopia. I am not talking about making society perfect. Only making it incrementally better. Consumerism is not binary. The sickness is a spectrum. Every step we could take away from our present state and towards a society more mindful of transcendent ideals (truth, beauty, and the sacred) and more dedicated to the love and service of our neighbors would bring a commensurate reduction in the sickness of manipulative advertising that results in tribalism, animosity, and social breakdown. 

There’s a word for what I’m talking about, and the word is: repentance. Consumerism, the underlying cause of toxic advertising that is the kernel of the destruction wrought by social media, is the cultural incarnation of our pride and selfishness. We can’t jury-rig an economic or legal solution to a fundamentally spiritual problem.

We need to renounce what we’re doing wrong, and learn–individually and collectively–to do better.

Cancel Culture Is Real


Last month, Harper’s published an anti-cancel culture statement: A Letter on Justice and Open Debate. The letter was signed by a wide variety of writers and intellectuals, ranging from Noam Chomsky to J. K. Rowling. It was a kind of radical centrist manifesto, including major names like Jonathan Haidt and John McWhorter (two of my favorite writers) and also crossing lines to pick up folks like Matthew Yglesias (not one of my favorite writers, but I give him respect for putting his name to this letter).

The letter kicked up a storm of controversy from the radical left, which basically boiled down to two major contentions. 

  1. There is no such thing as cancel culture.
  2. Everyone has limits on what speech they will tolerate, so there’s no difference between the social justice left and the liberal left other than where to draw the lines.

The “Profound Consequences” of Cancel Culture

The first contention was represented in pieces like this one from the Huffington Post: Don’t Fall For The ‘Cancel Culture’ Scam. In the piece, Michael Hobbes writes:

While the letter itself, published by the magazine Harper’s, doesn’t use the term, the statement represents a bleak apogee in the yearslong, increasingly contentious debate over “cancel culture.” The American left, we are told, is imposing an Orwellian set of restrictions on which views can be expressed in public. Institutions at every level are supposedly gripped by fears of social media mobs and dire professional consequences if their members express so much as a single statement of wrongthink.

This is false. Every statement of fact in the Harper’s letter is either wildly exaggerated or plainly untrue. More broadly, the controversy over “cancel culture” is a straightforward moral panic. While there are indeed real cases of ordinary Americans plucked from obscurity and harassed into unemployment, this rare, isolated phenomenon is being blown up far beyond its importance.

There is a kernel of truth to what Hobbes is saying, but it is only a kernel. Not that many ordinary Americans are getting “canceled”, and some of those who are canceled are not entirely expunged from public life. They don’t all lose their jobs.

But then, they don’t all have to lose their jobs for the rest of us to get the message, do they?

The basic analytical framework here is wrong. Hobbes assumes that the “profound consequences” of cancel culture have yet to be manifest. “Again and again,” he writes, “the decriers of “cancel culture” intimate that if left unchecked, the left’s increasing intolerance for dissent will result in profound consequences.”

The reason he can talk about hypothetical future consequences is that he’s thinking about the wrong consequences. Hobbes appears to think that the purpose of cancel culture is to cancel lots and lots of people. If we don’t see hordes–thousands, maybe tens of thousands–of people canceled, then there aren’t any “profound consequences”.

This is absurd. The mob doesn’t break kneecaps for the sake of breaking kneecaps. They break kneecaps to send a message to everyone else to pay up without resisting. Intimidation campaigns do not exist to make examples out of everyone. They make examples out of (a few) people in order to intimidate many more.

Cancel culture is just such an intimidation campaign, and so the “profound consequences” aren’t the people who are canceled. The “profound consequences” are the people–not thousands or tens of thousands but millions–who hide their beliefs and stop speaking their minds because they’re afraid. 

And yes, I mean millions. Cato does polls on that topic, and they found that 58% of Americans had “political views they’re afraid to share” in 2017 and, as of just a month ago, that number has climbed to 62%. 

Gee, nearly two-thirds of Americans are afraid to speak their minds. How’s that for “profound consequences”?

Obviously Cato has a viewpoint here, but other studies are finding similar results. Politico did their own poll, and while it didn’t ask about self-censoring, it did ask what Americans think about cancel culture. According to the poll, 46% think it has gone “too far” while only 10% think it has gone “not far enough”. 

Moreover, these polls also reinforce something obvious: cancel culture is not just some general climate of acrimony. According to both the Cato and Politico polls, Republicans are much more likely to self-censor as a result of cancel culture (77% vs 52%)  and Democrats are much more likely to participate in the silencing (~50% of Democrats “have voiced their displeasure with a public figure on social media” vs. ~30% of Republicans).

Contrast these poll results with what Hobbes calls the “pitiful stakes” of cancel culture. He mocks low-grade intimidation like “New York Magazine published a panicked story about a guy being removed from a group email list.” Meanwhile, more than three quarters of Republicans are afraid to be honest about their own political beliefs. We don’t need to worry about hypothetical future profound consequences. They’re already here.

What Makes Cancel Culture Different

The second contention–which is that everyone has at least some speech they’d enthusiastically support canceling–is a more serious objection. After all: it’s true. All but the very most radical of free speech defenders will draw the line somewhere. If this is correct, then isn’t cancel culture just a redrawing of boundaries that have always been present?

To which I answer: no. There really is something new and different about cancel culture, and it’s not just the speed or ferocity of its adherents.

The difference goes back to a post I wrote a few months ago about the idea of an ideological demilitarized zone. I don’t think I clearly articulated my point in that post, so I’m going to reframe it (very briefly) in this one.

A normal, healthy person will draw a distinction between opinions they disagree with and actively oppose and opinions they disagree with that merit toleration or even consideration. That’s what I call the “demilitarized zone”: the collection of opinions that you think are wrong but also reasonable and defensible.

Cancel culture has no DMZ.

Think I’m exaggerating? This is a post from a Facebook friend (someone I know IRL) just yesterday:

You can read the opinion of J. K. Rowling for yourself here. Agree or disagree, it is very, very hard for any reasonable person to come away thinking that Rowling has anything approaching personal animus towards anyone who is transgender for being transgender. (The kind of animus that might justify calling someone a “transphobic piece of sh-t” and trying to retcon her out of reality.) In the piece, she writes with empathy and compassion of the transgender community and states emphatically that, “I know transition will be a solution for some gender dysphoric people,” adding that:

Again and again I’ve been told to ‘just meet some trans people.’ I have: in addition to a few younger people, who were all adorable, I happen to know a self-described transsexual woman who’s older than I am and wonderful.

So here’s the difference between cancel culture and basically every other viewpoint on the political spectrum: other viewpoints can acknowledge shades of grey and areas where reasonable people can see things differently; cancel culture can’t and won’t. Cancel culture is binary (ironically). You’re either 100% in conformity with the ideology or you’re “a —–phobic piece of sh-t”.

This is not incidental, by the way. Liberal traditions trace their roots back to the Enlightenment and include an assumption that truth exists as an objective category. As long as that’s the case–as long as there’s an objective reality out there–then there is a basis for discussion about it. There’s also room for mistaken beliefs about it. 

Cancel culture traces its roots back to critical theory, which rejects notions of reason and objective truth and sees instead only power. It’s not the case that people are disagreeing about a mutually accessible, external reality. Instead, all we have are subjective truth claims which can be maintained–not by appeal to evidence or logic–but only through the exercise of raw power.

Liberal traditions–be they on the left or on the right–view conflict through a lens that is philosophically compatible with humility, correction, cooperation, and compromise. That’s not to say that liberal traditions actually inhabit some kind of pluralist Utopia where no one plays dirty to win. It’s not like American politics (or politics anywhere) existed in some kind of genteel Garden of Eden until critical theory showed up. But no matter how acrimonious or dirty politics got before cancel culture, there was also the potential for cross-ideological discussion. Cancel culture doesn’t even have that.

This means that, while it’s possible for other viewpoints to coexist in a pluralist society, it is not possible for cancel culture to do the same. It isn’t a different variety of the same kind of thing. It’s a new kind of thing, a totalitarian ideology that has no self-limiting principle and views any and all dissent as an existential threat because its own truth claims are rooted solely in an appeal to power. For cancel culture, being right and winning are the same thing, and every single debate is a facet of the same existential struggle.

So yes, all ideologies want to cancel something else. But only cancel culture wants to cancel everything else.

Last Thoughts

Lots of responders to the Harper’s letter pointed out that the signers were generally well-off elites. It seemed silly, if not outright hypocritical, for folks like that to whine about cancel culture, right?

My perspective is rather different. As someone who’s just an average Joe with no book deals, no massive social media following, no tenure, or anything like that: I deeply appreciate someone with J. K. Rowling’s stature trading some of her vast hoard of social capital to keep the horizons of public discourse from narrowing ever further.

And that’s exactly why the social justice left hates her so much. They understand power, and they know how crippling it is to their cause to have someone like her demur from their rigid orthodoxy. Their concern isn’t alleviated because her dissent is gentle and reasonable. It’s worsened, because that gentleness makes it even harder to cancel her and underscores just how toxic their totalitarian ideology really is.

I believe in objective reality. I believe in truth. But I’m pragmatic enough to understand that power is real, too. And when someone like J. K. Rowling uses some of her power in defense of liberalism and intellectual diversity, I feel nothing but gratitude for the help.

We who want to defend the ideals of classical liberalism know just how much we could use it.

Note on Critics of Civilization

Shrapnel from a Unabomber attack. Found on Flickr.

Came across this article in my Facebook feed: Children of Ted. The lead-in states:

Two decades after his last deadly act of ecoterrorism, the Unabomber has become an unlikely prophet to a new generation of acolytes.

I don’t have a ton of patience for this whole line of reasoning, but it’s trendy enough that I figure I ought to explain why it’s so silly.

Critics of industrialization are far from new, and obviously they have a point. As long as we don’t live in a literal utopia, there will be things wrong with our society. They are unlikely to get fixed without acknowledging them. What’s more, in any sufficiently complex system (and human society is pretty complex), any change is going to have both positive and negative effects, many of which will not be immediately apparent.

So if you want to point out that there are bad things in our society: yes, there are. If you want to point out that this or that particular advance has had deleterious side effects: yes, all changes do. But if you take the position that we would have been better off in a pre-modern, pre-industrial, or even pre-agrarian society: you’re a hypocritical nut job.

I addressed this trendy argument when I reviewed Yuval Noah Harari’s Sapiens: A Brief History of Humankind. Quoting myself:

Harari is all-in for the hypothesis that the Agricultural Revolution was a colossal mistake. This is not a new idea. I’ve come across it several times, and when I did a quick Google search just now I found a 1987 article by Jared Diamond with the subtle title: The Worst Mistake in the History of the Human Race. Diamond’s argument then is as silly as Harari’s argument is now, and it boils down to this: life as a hunter-gatherer is easy. Farming is hard. Ergo, the Agricultural Revolution was a bad deal. If we’d all stuck around being hunter-gatherers we’d be happier.

There are multiple problems with this argument, and the one that I chose to focus on at the time is that it’s hedonistic. Another observation one can make is that if being a hunter-gatherer is so great, nothing’s really stopping Diamond or Harari from living that way. I’m not saying it would be trivial, but for all the folks who sagely nod their heads and agree with the books and articles that claim our illiterate ancestors had it so much better… how many are even seriously making the attempt?

The argument I want to make here is slightly different from the ones I’ve made before and is based on economics.

Three fundamental macroeconomic concepts are production, consumption, and investment. Every year a society produces a certain amount of stuff (mining minerals, refining them, turning them into goods, growing crops, etc.). All of that stuff is eventually used in one of two ways: either it’s consumed (you eat the crops) or invested (you plant the seeds instead of eating them).

From a material standpoint, the biggest change in human history has been the dramatic rise in per-capita production over the last few centuries, especially during the Industrial Revolution. This is often seen as a triumph of science, but that is mostly wrong. Virtually none of the important inventions of the Industrial Revolution were produced by scientists or even by lay persons attempting to apply scientific principles. They were almost uniformly invented by self-taught tinkerers who were experimenting with practical rather than theoretical innovations.

Another way to see this is to observe that many of the “inventions” of the Industrial Revolution had been discovered many times in the past. A good example of this is the steam engine. In “Destiny Disrupted,” Tamim Ansary observes:

Often, we speak of great inventions as if they make their own case merely by existing. But in fact, people don’t start building and using a device simply because it’s clever. The technological breakthrough represented by an invention is only one ingredient in its success. The social context is what really determines whether it will take. The steam engine provides a case in point. What could be more useful? What could be more obviously world-changing? Yet the steam engine was invented in the Muslim world over three centuries before it popped up in the West, and in the Muslim world it didn’t change much of anything. The steam engine invented there was used to power a spit so that a whole sheep might be roasted efficiently at a rich man’s banquet. (A description of this device appears in a 1551 book by the Turkish engineer Taqi al-Din.) After the spit, however, no other application for the device occurred to anyone, so it was forgotten.

Ansary understands that the key ingredient in whether or not an invention takes off (like the steam engine in Western Europe in the 18th century) or dies stillborn (like the steam engine in the 15th century Islamic world) is the social context around it.

Unfortunately, Ansary mostly buys into the same absurd notion that I’m debunking, which is that all this progress is a huge mistake. According to him, the Chinese could have invented mechanized industry in the 10th century, but the benevolent Chinese state had the foresight to see that this would take away jobs from its peasant class and, being benevolent, opted instead to keep the Chinese work force employed.

This is absurd. First, because there’s no chance that the Chinese state (or anyone) could have foreseen the success and consequences of mechanized industry in the 10th century and made policy based on it even if they’d wanted to. Second, because the idea that it’s better to keep society inefficient rather than risk unemployment is, in the long run, disastrous.

According to Ansary, the reason that steam engines, mechanized industry, etc. all took place in the West was misanthropic callousness:

Of course, this process [modernization] left countless artisans and craftspeople out of work, but this is where 19th century Europe differed from 10th century China. In Europe, those who had the means to install industrial machinery had no particular responsibility for those whose livelihood would be destroyed by a sudden abundance of cheap machine-made goods. Nor were the folks they affected downstream their kinfolk or fellow tribesmen–just strangers who they had never met and would never know by name. What’s more, it was somebody else’s job to deal with the social disruptions caused by widespread unemployment, not theirs. Going ahead with industrialization didn’t signify some moral flaw in them, it merely reflected the way this particular society was compartmentalized. The Industrial Revolution could take place only where certain social preconditions existed and in Europe at that time they happened to exist.

Not a particular moral flaw in the individual actors, Ansary concedes, but still a society that was wantonly reckless and unconcerned with the fate of its poor relative to the enlightened empires that foresaw the Industrial Revolution from end-to-end and declined for the sake of their humble worker class.

The point is that when a society has the right incentives (I’d argue that we need individual liberty via private property and a restrained state alongside compartmentalization) individual innovations are harnessed, incorporated, and built upon in a snowball effect that leads to ever and ever greater productivity. A lot of the productivity comes from the cool new machines, but not all of it.

You see, once you have a few machines that give that initial boost to productivity, you free up people in your society to do other things. When per-capita production is very, very low, everyone has to be a farmer. You can have a tiny minority doing rudimentary crafts, but the vast majority of your people need to work day-in and day-out just to provide enough food for the whole population not to starve to death.

When per-capita production is higher, fewer and fewer people need to do the work of creating the basic rudiments (food and clothes), and this frees people up to specialize. And specialization is the second half of the secret (along with new machines) that leads to the virtuous cycle of modernization. New tools boost productivity, this frees up new workers to try doing new things, and some of those new things include making even more new tools.
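Here’s a minimal sketch of that feedback loop (my own illustration, not anything from the documentary or the sources quoted in this post). Every number in it is a made-up placeholder; the only point is to show how freeing even a few workers from subsistence production compounds over time.

```python
# Toy model of the virtuous cycle described above: higher productivity frees
# workers from subsistence farming, and some of those freed workers improve
# the tools, which raises productivity further. All parameters are invented
# placeholders purely for illustration.

POPULATION = 1000               # hypothetical fixed population
SUBSISTENCE_PER_PERSON = 1.0    # units of food each person needs per year
TOOLMAKER_EFFECT = 0.0005       # assumed productivity gain per specialist per year

productivity = 1.05             # units of food one farmer grows per year (assumed)

for year in range(1, 11):
    # How many farmers are needed just to feed everyone at current productivity?
    farmers = min(POPULATION, int(POPULATION * SUBSISTENCE_PER_PERSON / productivity) + 1)
    # Everyone else is freed up to specialize; treat their work as "investment".
    specialists = POPULATION - farmers
    productivity += specialists * TOOLMAKER_EFFECT
    print(f"year {year:2d}: farmers={farmers:4d}, specialists={specialists:3d}, "
          f"productivity={productivity:.3f}")
```

Run it for a few decades instead of ten years and the share of the population stuck growing food keeps shrinking, which is the whole story of modernization in miniature.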

I’m giving you the happy side of the story. Some people go from being farmers to being inventors. I do not mean to deny, but simply to balance, the unhappy side of the story, which is that some people go from being skilled workers to menial laborers when a machine renders their skills obsolete. That also happens, although it’s worth noting that the threat from modernization is generally not to the very poorest. Americans like to finger-wag at “sweatshops”, but if your alternative is subsistence farming, then even a sweatshop may very well look appealing. Which is why so many of the very poorest keep migrating from farms to cities (in China, for example) and why the opposition to modernization never comes from the poorest classes (who have little to lose) but from the precarious members of the middle class (who do).

So my high-level story of modernization has a few key points.

  1. If you want a high standard of living for a society, you need a high level of per capita production.
  2. You get a high level of per capita production through a positive feedback loop between technological innovation and specialization. (This might be asymptotic.)
  3. The benefits of this positive feedback loop include high-end stuff (like modern medicine) and also things we take for granted. And I don’t just mean electricity (though that, too) but also literacy.
  4. The costs of this positive feedback loop include the constant threat of obsolescence for at least some workers, along with greater capacity to destroy on an industrial scale (either the environment or each other).

So the fundamental question you have to ask is whether you want to try and figure out how to manage the costs so that you can enjoy the benefits, or whether the whole project isn’t worth it and we should just give up and start mailing bombs to each other until it all comes crashing down.

The part that really frustrates me the most, the part that spurred me to write this today, is that folks like Ted Kaczynski (the original Unabomber) or John Jacobi (the first of his acolytes profiled in the New York Mag story) are only even possible in a modern, industrialized society.

They are literate, educated denizens of a society that produces so much stuff that lots of its members can survive basically without producing much at all. We live in an age of super-abundance, and it turns out that abundance creates its own variety of problems. Obesity is one. Another, apparently, is a certain class of thought that advocates social suicide.

Because that’s what we’re talking about. As much as Diamond and Harari are just toying with the notion because it sells books and makes them look edgy, folks like John Jacobi or Ted Kaczynski would–if they had their way–bring about a world without any of the things that make their elitist theorizing possible in the first place.

It is a great tragedy of human nature that the hard-fought victories of yesterday’s heroic pioneers and risk-takers are casually dismissed by the following generation who don’t even realize that their apparent radicalism is just another symptom of super-abundance.

They will never succeed in reducing humanity to a pre-industrial state, but they–and others who lack the capacity to appreciate what they’ve been given–can make plenty of trouble along the way. The hope is that the rising generation will have a more constructive, aspirational, and less suicidal frame of mind.

In Favor of Real Meritocracy

The meritocracy has come in for a lot of criticism recently, basically in the form of two arguments. 

There’s a book by Daniel Markovits called The Meritocracy Trap that basically argues that meritocracy makes everyone miserable and unequal by creating a horrific grind to get into the most elite colleges and then, after you get your elite degree, to keep grinding away at 60-to-100-hour weeks to maintain your position at the top of the corporate hierarchy.

There was also a very interesting column by Ross Douthat that makes a separate but related point. According to Douthat, the WASP-y elite that dominated American society up until the early 20th century decided to “dissolve their own aristocracy” in favor of a meritocracy, but the meritocracy didn’t work out as planned because it sucks talent away from small locales (killing off the diverse regional cultures that we used to have) and because:

the meritocratic elite inevitably tends back toward aristocracy, because any definition of “merit” you choose will be easier for the children of these self-segregated meritocrats to achieve.

What Markovits and Douthat both admit without really admitting it is one simple fact: the meritocracy isn’t meritocratic.

Just to be clear, I’ll adopt Wikipedia’s definition of a meritocracy for this post:

Meritocracy is a political system in which economic goods and/or political power are vested in individual people on the basis of talent, effort, and achievement, rather than wealth or social class. Advancement in such a system is based on performance, as measured through examination or demonstrated achievement.

When people talk about meritocracy today, they’re almost always referring to the Ivy League and then–working forward and backward–to the kinds of feeder schools and programs that prepare kids to make it into the Ivy League and the types of high-powered jobs (and the culture surrounding them) that Ivy League students go on to after they graduate.

My basic point is a pretty simple one: there’s nothing meritocratic about the Ivy League. The old WASP-y elite did not, as Douthat put it, “dissolve”. It just went into hiding. Americans like to pretend that we’re a classless society, but it’s a fiction. We do have class. And the nexus for class in the United States is the Ivy League. 

If Ivy League admission were really meritocratic, it would be based as much as possible on objective admission criteria. This is hard to do, because even when you pick something that is in some sense objective–like SAT scores–you can’t overcome the fact that wealthy parents can and will hire tutors to train their kids to artificially inflate their scores relative to the scores an equally bright, hard-working lower-class student can attain without all the expensive tutoring and practice tests.

Still, that’s nothing compared to the way that everything else that goes into college admissions–especially the litany of awards, clubs, and activities–tilts the game in favor of kids with parents who (1) know the unspoken rules of the game and (2) have cash to burn playing it. An expression I’ve heard before is that the Ivy League is basically a privilege-laundering racket. It has a facade of being meritocratic, but the game is rigged so that all it really does is perpetuate social class. “Legacy” admissions are just the tip of the iceberg in that regard.

What’s even more outrageous than the fiction of meritocratic admission to the Ivy League (or other elite, private schools) is the equally absurd fiction that students with Ivy League degrees have learned some objectively quantifiable skillset that students from, say, state schools have not. There’s no evidence for this. 

So students from outside the social elite face double discrimination: first, because they don’t have an equal chance to get into the Ivy League, and second, because then they can’t compete with Ivy League graduates on the job market. It doesn’t matter how hard you work or how much you learn: your State U degree is never going to stand out on a resume the way Harvard or Yale does.

There’s nothing meritocratic about that. And that’s the point. The Ivy League-based meritocracy is a lie.

So I sympathize with criticisms of American meritocracy, but it’s not actually a meritocracy they’re criticizing. It’s a sham meritocracy that is, in fact, just a covert class system.

The problem is that if we blame the meritocracy and seek to circumvent it, we’re actually going to make things worse. I saw a WaPo headline that said “No one likes the SAT. It’s still the fairest thing about admissions.” And that’s basically what I’m saying: “objective” scores can be gamed, but not nearly as much as the qualitative stuff. If you got rid of the SAT in college admissions, you would make the process less meritocratic and also less fair. At least with the SAT, someone from outside the elite social classes has a chance to compete. Without that? Forget it.

Ideally, we should work to make our system a little more meritocratic by downplaying prestige signals like Ivy League degrees and emphasizing objective measurements more. But we’re never going to eradicate class entirely, and we shouldn’t go to radical measures to attempt it. Pretty soon, the medicine ends up worse than the disease if we go that route. That’s why you end up with absurd, totalitarian arguments that parents shouldn’t read to their children and that having an intact, loving, biological family is cheating. That way lies madness.

We should also stop pretending that our society is fully meritocratic. It’s not. And the denial is perverse. This is where Douthat was right on target:

[E]ven as it restratifies society, the meritocratic order also insists that everything its high-achievers have is justly earned… This spirit discourages inherited responsibility and cultural stewardship; it brushes away the disciplines of duty; it makes the past seem irrelevant, because everyone is supposed to come from the same nowhere and rule based on technique alone. As a consequence, meritocrats are often educated to be bad leaders, and bad people…

Like Douthat, I’m not calling for a return to WASP-y domination. (Also like Douthat, I’d be excluded from that club.) A diverse elite is better than a monocultural elite. But there’s one vital thing that the WASPy elite had going for it that any elite (and there’s always an elite) should reclaim:

the WASPs had at least one clear advantage over their presently-floundering successors: They knew who and what they were.

What Anti-Poverty Programs Actually Reduce Poverty?

According to the Tax Policy Center,

The earned income tax credit (EITC) provides substantial support to low- and moderate-income working parents, but very little support to workers without qualifying children (often called childless workers). Workers receive a credit equal to a percentage of their earnings up to a maximum credit. Both the credit rate and the maximum credit vary by family size, with larger credits available to families with more children. After the credit reaches its maximum, it remains flat until earnings reach the phaseout point. Thereafter, it declines with each additional dollar of income until no credit is available (figure 1).

By design, the EITC only benefits working families. Families with children receive a much larger credit than workers without qualifying children. (A qualifying child must meet requirements based on relationship, age, residency, and tax filing status.) In 2018, the maximum credit for families with one child is $3,461, while the maximum credit for families with three or more children is $6,431.

…Research shows that the EITC encourages single people and primary earners in married couples to work (Dickert, Houser, and Scholz 1995; Eissa and Liebman 1996; Meyer and Rosenbaum 2000, 2001). The credit, however, appears to have little effect on the number of hours they work once employed. Although the EITC phaseout could cause people to reduce their hours (because credits are lost for each additional dollar of earnings, which is effectively a surtax on earnings in the phaseout range), there is little empirical evidence of this happening (Meyer 2002).

The one group of people that may reduce hours of work in response to the EITC incentives is lower-earning spouses in a married couple (Eissa and Hoynes 2006). On balance, though, the increase in work resulting from the EITC dwarfs the decline in participation among second earners in married couples.

If the EITC were treated like earnings, it would have been the single most effective antipoverty program for working-age people, lifting about 5.8 million people out of poverty, including 3 million children (CBPP 2018).

The EITC is concentrated among the lowest earners, with almost all of the credit going to households in the bottom three quintiles of the income distribution (figure 2). (Each quintile contains 20 percent of the population, ranked by household income.) Very few households in the fourth quintile receive an EITC (fewer than 0.5 percent).
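For readers who prefer code to prose, the schedule the Tax Policy Center describes is just a piecewise-linear function: a phase-in where the credit is a fixed percentage of earnings, a plateau at the maximum credit, and a phase-out where the credit shrinks with each additional dollar until it hits zero. Here is a minimal sketch. The $3,461 one-child maximum comes from the excerpt above; the phase-in rate, phase-out threshold, and phase-out rate are placeholder values chosen only to make the shape visible, not the official 2018 parameters.

```python
def eitc(earnings, phase_in_rate, max_credit, phaseout_start, phaseout_rate):
    """Stylized EITC schedule: phase-in, plateau, then phase-out to zero."""
    credit = min(phase_in_rate * earnings, max_credit)   # phase-in, capped at the maximum
    if earnings > phaseout_start:                         # phase-out region
        credit -= phaseout_rate * (earnings - phaseout_start)
    return max(credit, 0.0)                               # the credit never goes negative

# Illustrative one-child schedule: only the $3,461 maximum comes from the
# excerpt above; the other three parameters are placeholders, not official values.
params = dict(phase_in_rate=0.34, max_credit=3461,
              phaseout_start=18_000, phaseout_rate=0.16)

for earnings in range(0, 45_001, 5_000):
    print(f"earnings ${earnings:>6,}: credit ${eitc(earnings, **params):>8,.2f}")
```

The subtraction in the phase-out branch is also where the quote’s “effectively a surtax on earnings in the phaseout range” comes from: in that region, each extra dollar of earnings costs the household the phase-out rate in lost credit.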

Recent evidence supports this view of the EITC. From a brand new article in Contemporary Economic Policy:

First, the evidence suggests that longer-run effects of the EITC [“Our working definition of ‘longer run’ in this study is 10 years” (pg. 2)] are to increase employment and to reduce poverty and public assistance, as long as we rely on national as well as state variation in EITC policy. Second, tighter welfare time limits also appear to reduce poverty and public assistance in the longer run. We also find some evidence that higher minimum wages, in the longer run, may lead to declines in poverty and the share of families on public assistance, whereas higher welfare benefits appear to have adverse longer-run effects, although the evidence on minimum wages and welfare benefits—and especially the evidence on minimum wages—is not robust to using only more recent data, nor to other changes. In our view, the most robust relationships we find are consistent with the EITC having beneficial longer-run impacts in terms of reducing poverty and public assistance, whereas there is essentially no evidence that more generous welfare delivers such longer-run benefits, and some evidence that more generous welfare has adverse longer-run effects on poverty and reliance on public assistance—especially with regard to time limits (pg. 21).

Let’s stick with programs that work.

Demographics & Inequality: 2018 Edition

Every year, economist Mark Perry draws on Census Bureau reports to paint a picture of the demographics of inequality. Looking at 2018 data, he constructed the following table:

Once again, he concludes,

Household demographics, including the average number of earners per household and the marital status, age, and education of householders are all very highly correlated with Americans’ household income. Specifically, high-income households have a greater average number of income-earners than households in lower-income quintiles, and individuals in high-income households are far more likely than individuals in low-income households to be well-educated, married, working full-time, and in their prime earning years. In contrast, individuals in lower-income households are far more likely than their counterparts in higher-income households to be less-educated, working part-time, either very young (under 35 years) or very old (over 65 years), and living in single-parent or single households.

The good news about the Census Bureau is that the key demographic factors that explain differences in household income are not fixed over our lifetimes and are largely under our control (e.g., staying in school and graduating, getting and staying married, working full-time, etc.), which means that individuals and households are not destined to remain in a single income quintile forever. Fortunately, studies that track people over time find evidence of significant income mobility in America such that individuals and households move up and down the income quintiles over their lifetimes, as the key demographic variables highlighted above change; see related CD posts here, here, and here. Those links highlight the research of social scientists Thomas Hirschl (Cornell) and Mark Rank (Washington University) showing that as a result of dynamic income mobility nearly 70% of Americans will be in the top income quintile for at least one year while almost one-third will be in the top quintile for ten years or more (see chart below).

What’s more, Perry points out elsewhere that the new data demonstrate that the middle class is shrinking…along with the lower class. Meanwhile, the percentage of high-income households has more than tripled since 1967:

In short, the percentage of middle and lower-income households has declined because they’ve been moving up.

Is Religious Faith a Global Force for Good?


According to a new report from the Institute for Family Studies and the Wheatley Institution, religion appears to be a net gain “in 11 countries in the Americas, Europe, and Oceania.” From the executive summary:

When it comes to relationship quality in heterosexual relationships, highly religious couples enjoy higher-quality relationships and more sexual satisfaction, compared to less/mixed religious couples and secular couples. For instance, women in highly religious relationships are about 50% more likely to report that they are strongly satisfied with their sexual relationship than their secular and less religious counterparts. Joint decision-making, however, is more common among men in shared secular relationships and women in highly religious relationships, compared to their peers in less/mixed religious couples.

When it comes to fertility, data from low-fertility countries in the Americas, East Asia, and Europe show that religion’s positive influence on fertility has become stronger in recent decades. Today, people ages 18-49 who attend religious services regularly have 0.27 more children than those who never, or practically never, attend. The report also indicates that marriage plays an important role in explaining religion’s continued positive influence on childbearing because religious men and women are more likely to marry compared to their more secular peers, and the married have more children than the unmarried.

When it comes to domestic violence, religious couples in heterosexual relationships do not have an advantage over secular couples or less/mixed religious couples. Measures of intimate partner violence (IPV)—which includes physical abuse, as well as sexual abuse, emotional abuse, and controlling behaviors—do not differ in a statistically significant way by religiosity. Slightly more than 20% of the men in our sample report perpetrating IPV, and a bit more than 20% of the women in our sample indicate that they have been victims of IPV in their relationship. Our results suggest, then, that religion is not protective against domestic violence for this sample of couples from the Americas, Europe, and Oceania. However, religion is not an increased risk factor for domestic violence in these countries, either.

The relationships between faith, feminism, and family outcomes are complex. The impact of gender ideology on the outcomes covered in this report, for instance, often varies by the religiosity of our respondents. When it comes to relationship quality, we find a J-Curve in overall relationship quality for women, such that women in shared secular, progressive relationships enjoy comparatively high levels of relationship quality, whereas women in the ideological and religious middle report lower levels of relationship quality, as do traditionalist women in secular relationships; but women in highly religious relationships, especially traditionalists, report the highest levels of relationship quality. For domestic violence, we find that progressive women in secular relationships report comparatively low levels of IPV compared to conservative women in less/mixed religious relationships. In sum, the impact of gender ideology on contemporary family life may vary a great deal by whether or not a couple is highly religious, nominally religious, or secular.

There’s also some useful data on family prayer and worldwide family structure, socioeconomic conditions, family satisfaction, and attitudes and norms. Check it out.

What Would the World Look Like Without FDI?

What would happen if foreign direct investment (FDI) simply disappeared? Or, more specifically, what would “a hypothetical world without outward and inward FDI from and to low- and lower-middle-income countries” look like? A brand new study tries to quantify this hypothetical. The authors find,

On average, the gains from FDI in the poorer countries in the world amount to 7% of world’s trade in 2011, the year of our counterfactual analysis. Second, all countries lose from the counterfactual elimination of FDI in the poorer countries.  Third, the impact is heterogeneous. Poorer countries lose the most, but the impact varies widely even within this group – some lose over 50% and some very little. The impact on countries in the rest of the world is significant as well. Some countries lose a lot (e.g. Luxembourg, Singapore, and Ireland) while others (such as India, Ecuador, and Dominican Republic) lose less. Pakistan and Sri Lanka actually see an increase in their total exports due to the elimination of FDI.

Figure 1 Percentage change in total exports from eliminating outward and inward FDI to and from low- and lower-middle-income countries

There’s more:

On average, the gains from FDI amount to 6% of world’s welfare in 2011. Further, all countries in the world have benefited from FDI, but the effects are very heterogeneous. The directly affected low- and lower-middle-income countries see welfare changes up to over 50% (Morocco and Nigeria), while some of the remaining 68 countries, such as Ecuador, Turkmenistan, and Dominican Republic are hardly affected. A higher country-specific production share of FDI leads to larger welfare losses, all else equal.  Intuitively, a larger importance of FDI in production leads to larger welfare losses when restricting FDI. A larger net log FDI position leads to larger welfare losses. Intuitively, if a country has more inward than outward FDI, restricting FDI will lead to larger welfare losses, as FDI is complementary to other production factors and therefore overall income increases more than FDI payments.

Figure 2 Welfare effects of eliminating outward and inward FDI to and from low- and lower-middle-income countries (%)

The authors conclude, “Overall, the analysis reveals that FDI is indeed an important component of the modern world economic system. The results suggest positive payoffs to policies designed to facilitate FDI, particularly those concerning protection of intellectual property.”