Hold Up Your Light

You’ve probably seen something like this meme in your own social media feeds.

I’m gonna do two things to this meme. First: debunk it. Not because it’s all that notable, but because it’s a pretty typical example of something scary and nasty in our society. And that’s what we’re going to get to second: zooming out from this particular specimen to the whole species.

This meme has the appearance of offering some kind of insight into American politics in the context of an important current event (the pandemic), but all of that is just a front. There is no analysis and there is no insight. It’s just a pretext to deliver the punchline: conservatives are selfish and bad.

You can think of the pseudo-argument as being like the outer coating on a virus. The sole purpose is to penetrate the cell membrane to deliver a payload. It’s a means to an end, nothing more. 

Which means the meme, if you ignore the candy coating, is just a cleverly packaged insult. 

You see, conservatives don’t object to pandemic regulations because they would rather watch their neighbors die than shoulder a trivial inconvenience. They object to pandemic regulations (when they do; I think the prevalence of objections is exaggerated) because Americans in general and conservatives in particular have an anti-authoritarian streak a mile wide. Anti-authoritarianism is part of who we are. It’s not always reasonable or mature, but then again, it’s not a bad reflex to have, all things considered.

One of the really clever things about the packaging around this insult is that it’s kind of self-fulfilling. It accuses conservatives of being stubborn while it also insults them. What happens to people who are already being a little stubborn if you start insulting them? In most cases, they get more stubborn. Which means every time a conservative gets mad about this meme, a liberal spreading it can think, “Yeah, see? I knew I was right.”

Oh, and if it incidentally happens to actually discourage mask use? Oh well. That’s just collateral damage, because people who spread memes like this care more about winning political battles than epidemiological ones.

Liberals who share this meme are guaranteed to get what they really want: that little frisson of superiority. Because they care. They are willing to sacrifice. They are reasonable. So reasonable that they are happy to titillate their own feeling of superiority even if it has the accidental side effect of, you know, undermining compliance with those rules they care so much about. 

I’m being a little cynical here, but only a little. This meme is just one example of countless millions that all have the same basic function: stir controversy. And yes, there are conservative analogs to this liberal meme that do the exact same thing. I don’t see as many of them because I’m quicker to mute fellow conservatives who aggravate me than I am to mute liberals.

Why did we get here?

You can blame the Russians, if you like. The KGB meddled with American politics as much as it could for decades before the fall of the USSR, and Putin was around for that. Why would the FSB (its contemporary successor) have given up the old hobby? But the KGB wasn’t ever any good at it, and I’m skeptical that the FSB has cracked the code. I’m sure their efforts don’t help, but I also don’t think they’re largely to blame.

We’re doing this to ourselves.

The Internet runs on ads, and that means the currency of the Internet is attention. You are not the customer. You are the commodity. That’s not just true of Facebook and it’s not just a slogan. It’s the underlying reality of the Internet, and it sets the incentives that every content producer has to contend with if they want to survive.

The way to harvest attention is through engagement. Every content producer out there wants to hijack your attention by getting you engaged in what they’re telling you. There are a lot of ways to do this. Clickbait headlines hook your curiosity, scantily clad models snag your libido, and so on. But the king of engagement seems to be outrage, and there’s an insidious reason why.

Other attention grabbers work on only a select audience at a time. Setting aside bisexuals, attractive male models will grab one half of the audience and attractive female models the other half, but you have to pick one or the other.

But outrage lets you engage two audiences with one piece of content. That’s what a meme like this one does, and it’s why it’s so successful. It infuriates conservatives while at the same time titillating liberals. (Again: I could just as easily find a conservative meme that does the opposite.)

When you realize that this meme is actually targeting conservatives and liberals, you also realize that the logical deficiency of the argument isn’t a bug. It’s a feature. It’s just another provocation, the way that some memes intentionally misspell words just to squeeze out a few more interactions, a few more clicks, a few more shares. If you react to this meme with an angry rant, you’re still reacting to this meme. That means you’ve already lost, because you’ve given away your attention. 

A lot of the most dangerous things in our environment aren’t trying to hurt us. Disease and natural disasters don’t have any intentions. And even the evils we do to each other are often byproducts of misaligned incentives. There just aren’t that many people out there who really like hurting other people. Most of us don’t enjoy that at all. So the conventional image of evil–mustache-twirling super-villains who want to murder and torture–is kind of a distraction. The real damage isn’t going to come from the tiny population of people who want to cause harm. It’s going to come from the much, much, much larger population of people who don’t have any particular desire to do harm, but who aren’t really that concerned with avoiding it, either. These people will wreck the world faster than anyone else precisely because none of them is doing that much damage alone and none of them is motivated by malice. That makes it easy for each of them to rationalize their individual contribution to an environment that, in the aggregate, becomes extremely toxic.

At this point, I’d really, really like everyone reading this to take a break and read Scott Alexander’s short story, “Sort by Controversial”. Go ahead. I’ll wait.

Back? OK, good, let’s wrap this up. The meme above is a scissor (that’s from Alexander’s story, if you thought you could skip reading it). The meme works by presenting liberals with an obviously true statement and conservatives with an obviously false statement. For liberals: You should tolerate minor inconveniences to save your neighbors. For conservatives: You should do whatever the government tells you to do without question.

That’s the actual mechanism behind scissors. It’s why half the people think it’s obviously true and the other half think it’s obviously false. They’re not actually reacting to the same issue. But they are reacting to the same meme. And so they fight, and–since they both know their position is obvious–the disagreement rapidly devolves. 

The reality is that most people agree on most issues. You can’t really find a scissor where half the population thinks one thing and half the population thinks the other because there’s too much overlap. But you can present two halves of the population with subtly different messages at the same time such that one half viscerally hates what they hear and the other half passionately loves what they hear, and–as often as not–they won’t talk to each other long enough to realize that they’re not actually fighting over the same proposition.

This is how you destroy a society.

The truth is that it would be better, in a lot of ways, if there were someone out there doing this to us. If it were the FSB or China or terrorists or even a scary AI (like a nerdier version of Skynet), there would be some chance they could be opposed and–better still–a common foe to unite against.

But there isn’t. Not really. There’s no conspiracy. There’s no enemy. There’s just perverse incentives and human nature. There’s just us. We’re doing this to ourselves.

That doesn’t necessarily mean we’re doomed, but it does mean there’s no easy or quick solution. I don’t have any brilliant ideas at all, other than some basic ones. Start off with: do no harm. Don’t share memes like this. To be on the safe side, maybe just don’t share political memes at all. I’m not saying we should have a law. Just that, individually and of our own free will, we should collectively maybe not.

As a follow-up: talk to people you disagree with. You don’t have to do it all the time, but look for opportunities to disagree with people in ways that are reasonable and compassionate. When you do get into fights–and you will–try to reach out afterwards and patch up relationships. Try to build and maintain bridges.

Also: resist the urge to adopt a warfare mentality. War is a common metaphor–and there’s a reason it works–but if you buy into that way of thinking it’s really hard not to get sucked into a cycle of endless mutual radicalization. If you want a Christian way of thinking about it, go with Ephesians 6:12:

For our struggle is not against enemies of blood and flesh, but against the rulers, against the authorities, against the cosmic powers of this present darkness, against the spiritual forces of evil in the heavenly places.

There are enemies, but the people in your social network are not them. Not even when they’re wrong. Those people are your brothers and your sisters. You want to win them over, not win over them.

Lastly: cultivate all your in-person friendships. Especially the random ones. The coworkers you didn’t pick? The family members you didn’t get to vote on? The neighbors who happen to live next door to you? Pay attention to those little relationships. They are important because they’re random. When you only build relationships with people who share your interests and perspectives, you’re missing out on one of the most fundamental and essential aspects of human nature: you can relate to anyone. Building relationships with people who just happen to be in your life is probably the single most important way we can repair our society, because that’s what society is. It’s not the collection of people we chose that defines our social networks; it’s the extent to which we can form attachments to people we didn’t choose.

What are the politics of your coworkers and family and neighbors? Who cares. Don’t let politics define all your relationships, positive or negative. Find space outside politics, and cherish it. 

Times are dark. They may yet get darker, and none of us can change that individually.

But by looking for the good in the people who are randomly in your life, you can hold up a light.

So do it.

Why I’m An AI Skeptic

There are lots of people who are convinced that we’re a few short years away from economic apocalypse as robots take over all of our jobs. Here’s an example from Business Insider:

Top computer scientists in the US warned over the weekend that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies.

These fears are based on hype. The capabilities of real-world AI-like systems are far, far below what non-experts expect of them, and the gap between where we are and where people expect us to be is vast and–in the short term at least–mostly insurmountable.

Let’s take a look at where the hype comes from, why it’s wrong, and what to expect instead. For starters, we’ll take all those voice-controlled devices (Alexa, Siri, Google Assistant) and put them in their proper context.

Voice Controls Are a Misleading Gimmick

Prologue: Voice Controls Are Frustrating

A little while back I was changing my daughter’s diaper and thought, hey: my hands are occupied but I’d like to listen to my audiobook. I said, “Alexa, resume playing Ghost Rider on Audible.” Sure enough: Alexa not only started playing my audiobook, but the track picked up exactly where I’d left off on my iPhone a few hours previously. Neat!

There was one problem: I listen to my audiobooks at double speed, and Alexa was playing it at normal speed. So I said, “Alexa, double playback speed.” Uh-oh. Not only did Alexa not increase the playback speed, but it did that annoying thing where it starts prattling on endlessly about irrelevant search results that have nothing to do with your request. I tried five or six different varieties of the command and none of them worked, so I finally said, “Alexa, shut up.”

This is my most common command to Alexa. And also Siri. And also the Google Assistant. I hate them all.

They’re supposed to make life easier but, as a general rule, they do the exact opposite. When we got our new TV I connected it to Alexa because: why not? It was kind of neat to turn it on using a voice command, but it really wasn’t that useful, because voice commands didn’t work for things like switching video inputs (so you still had to find the remote anyway) and because the voice command to turn it off never worked, even when the volume was pretty low.

Then one day the TV stopped working with Alexa. Why? Who knows. I have half-heartedly tried to fix it six or seven times over the last year to no avail. I spent more time setting up and unsuccessfully debugging the connection than I ever saved. 

This isn’t a one-off exception; it’s the rule. Same thing happened with a security camera I use as a baby monitor. For a few weeks it worked with Alexa until it didn’t. I got that one working again, but then it broke again and I gave up. Watching on the Alexa screen wasn’t ever really more useful than watching on my phone anyway.  

So what’s up? Why is all this nifty voice-activated stuff so disappointing?

If you’re like me, you were probably really excited by all this voice-activation stuff when it first started to come out because it reminded you of Star Trek: The Next Generation. And if you’re like me, you also got really annoyed and jaded after actually trying to use some of this stuff when you realized it’s all basically an inconvenient, expensive, privacy-smashing gimmick.  

Before we get into that, let me give y’all one absolutely vital caveat. The one true and good application of voice control technology is accessibility. For folks who are blind or can’t use keyboards or mice or other standard input devices, this technology is not a gimmick at all. It’s potentially life-transforming. I don’t want any of my cynicism to take away from that really, really important exception.

But that’s not how this stuff is being packaged and marketed to the broad audience, and it’s that–the explicit and implicit promises, and all the predictions people build on top of them–that I want to address.

CLI vs. GUI

To put voice command gimmicks in their proper context, you have to go back to the beginning of popular user interfaces, and the first of those was the CLI: Command Line Interface. A CLI is a screen, a keyboard, and a system that allows you to type commands and see feedback. If you’re tech savvy then you’ve used the command line (AKA terminal) on Mac or Unix machines. If you’re not, then you’ve probably still seen the Windows command prompt at some point. All of these are different kinds of CLI. 

In the early days of the PC (note: I’m not going back to the ancient days of punch cards, etc.) the CLI was all you had. Eventually this changed with the advent of the GUI: graphical user interface.

The GUI required new technology (the mouse), better hardware (to handle the graphics), and a whole new way of thinking about how the user interacts with the computer. Instead of thinking about commands, the GUI emphasizes objects. In particular, the GUI has used a kind of visual metaphor from the very beginning. The most common of these are icons, but it goes deeper than that. Buttons to click, a “desktop” as a flat surface to organize things, etc.

Even though you can actually do a lot of the same things in either a CLI or a GUI (like moving or renaming files), the whole interaction paradigm is different. You have concepts like clicking, double-clicking, right-clicking, dragging-and-dropping in the GUI that just don’t have any analog in the CLI.

It’s easy to think of the GUI as superior to the CLI since it came later and is what most people use most of the time, but that’s not really the case. Some things are much better suited to a GUI, including some really obvious ones like photo and video editing. But there are still plenty of tasks that make more sense in a CLI, especially related to installing and maintaining computer systems. 

The biggest difference between a GUI and a CLI is feedback. When you interact with a GUI you get constant, immediate feedback on all of your actions. This in turn aids discoverability. What this means is that you really don’t need much training to use a GUI. By moving the mouse around on the screen, you can fairly easily see what commands are available, for example. This means you don’t need to memorize how to execute tasks in a GUI. You can memorize the shortcuts for copy and paste, but you can also click on “Edit” and find them there. (And if you forget they’re under the Edit menu, you can click File, View, etc. until you find them.)

The feedback and discoverability of the GUI is what has made it the dominant interaction paradigm. It’s much easier to get started and much more forgiving of memory lapses. 

Enter the VUI

When you see commercials of attractive, well-dressed people interacting with voice assistants, the most impressive thing is that they use normal-sounding commands. The interactions sound conversational. This is what sets the (false) expectation that interacting with Siri is going to be like interacting with the computer on board the Enterprise (NCC-1701-D). That way lies frustration and madness, however. A better way to think of voice control is as a third user interface paradigm, the VUI, or voice user interface.

There is one really cool aspect of a VUI, and that’s the ability of the computer to transcribe spoken words to written text. That’s the magic. 

However, once you account for that you realize that the rest of the VUI experience is basically a CLI… without a screen. Which means: without feedback and discoverability.

Those two traits that make the GUI so successful for everyday life are conspicuously absent from a VUI. Just like when interacting with a CLI, using a VUI successfully means that you have to memorize a bunch of commands and then invoke them just so. There is a little more leeway with a VUI than a CLI, but not much. And that leeway is at least partially offset by the fact that when you type in a command at the terminal, you can pause and re-read it to see if you got it all right before you hit enter and commit. You can’t do that with a VUI. Once you open your mouth and start talking, your commands are being executed (or, more often than not: failing to execute) on the fly.

This is all bad enough, but in addition to basically being 1970s tech (except for the transcription part), the VUI faces the additional hurdle of being held up against an unrealistic expectation because it sounds like natural speech. 

No one sits down in front of a terminal window and expects to be able to type in a sentence or two of plain English and get the computer to do their bidding. Try asking Bash what time it is. It doesn’t go well: type “what time is it?” at the prompt, and all you get back is “what: command not found”.

Even non-technical folks understand that you have to have a whole skillset to be able to interact with a computer using the CLI. That’s why the command line is so intimidating for so many folks.

But the thing is, if you ask Siri (or whatever), “What time is it?” you’ll get an answer. This gives the impression that–unlike a CLI–interacting with a VUI won’t require any special training. Which is to say: that a VUI is intelligent enough to understand you.

It’s not, and it doesn’t. 

A VUI is much closer to a CLI than a GUI, and our expectations for it should be set at the 1970s level instead of, like with a GUI, more around the 1990s. Aside from the transcription side of things, and with a few exceptions for special cases, a VUI is a big step backwards in usability.

AI vs. Machine Learning

Machine Learning Algorithms Are Glorified Excel Trendlines

When we zoom out to get a larger view of the tech landscape, we find basically the same thing: mismatched expectations and gimmicks that can fool people into thinking our technology is much more advanced than it really is.

As one example of this, consider the field of machine learning, which is yet another giant buzzword. Ostensibly, machine learning is a subset of artificial intelligence (the Grand High Tech Buzzword). Specifically, it’s the part related to learning. 

This is another misleading concept, though. The word “learning” carries an awful lot of hidden baggage. A better way to think of machine learning is just: statistics. 

If you’ve worked with Excel at all, you probably know that you can insert trendlines into charts. Without going into too much detail, an Excel trendline is an application of the simplest and most commonly used form of statistical analysis: ordinary least-squares (OLS) regression. There are tons of guides out there explaining the concept; my point is just that nobody thinks the ability to click “show trendline” on an Excel chart means the computer is “learning” anything. There’s no “artificial intelligence” at play here, just a fairly simple set of steps to solve a minimization problem.
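
To see just how little magic is involved, here’s a minimal sketch of that minimization in Python, using numpy’s polyfit as a stand-in for Excel’s “show trendline” button (the data points are made up for illustration):

```python
# What Excel's "show trendline" button actually computes: ordinary
# least-squares regression. Nothing here "learns" in any deep sense;
# it just finds the line that minimizes squared error.
import numpy as np

# Illustrative data, not from any real dataset.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit a degree-1 polynomial: minimize sum((y - (m*x + b))**2).
m, b = np.polyfit(x, y, deg=1)
print(f"trendline: y = {m:.2f}x + {b:.2f}")
```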

Although the bundle of algorithms available to data scientists doing machine learning is much broader and more interesting, they’re the same kind of thing. Random forests, support vector machines, naive Bayes classifiers: they’re all optimization problems, fundamentally the same as OLS regression (or other, slightly fancier statistical techniques like logistic regression).

As with voice-controlled devices, you’ll understand the underlying tech a lot better if you replace the cool, fancy expectations (like the Enterprise’s computer) with a much more realistic example (a command prompt). The same thing applies here. Don’t believe the machine learning hype. We’re talking about adding trendlines to Excel charts. Yes, it’s fancier than that, but that example will give you the right intuition about the kind of activity that’s going on.

Last thing: don’t get me wrong, I’m not just knocking machine learning. I love me some machine learning. No, really, I do. As statistical tools the algorithms are great, and certainly much more capable than an Excel trendline. This is just about getting your intuition a little more in line with what they are in a philosophical sense.

Are Robots Coming to Take Your Job?

So we’ve laid some groundwork by explaining how voice control services and machine learning aren’t as cool as the hype would lead you to believe. Now it’s time to get to the main event and address the question I started this post with: are we on the cusp of real AI that can replace you and take your job?

You could definitely be forgiven for thinking the answer is an obvious “yes”. After all, it was a really big deal when Deep Blue beat Garry Kasparov in 1997, and since then there’s been a litany of John Henry moments. So-called AI has won at Go and Jeopardy, for example. Impressive, right? Not really.

First, let me ask you this. If someone said that a computer beat the reigning world champion of competitive memorization… would you care? Like, at all? 

Because yes, competitive memorization (aka memory sport) is a thing. Players compete to see how fast they can memorize the sequence of a randomly shuffled deck of cards, for example. Thirteen seconds is a really good time. If someone bothered to build a computer to beat that (something any tinkerer could do in a long weekend with no more specialized equipment than a smartphone) we wouldn’t be impressed. We’d yawn. 

Memorizing the order of a deck of cards is a few bytes of data. Not really impressive for computers that store data by the terabyte and measure read and write speeds in gigabytes per second. Even the visual recognition part–while certainly tougher–is basically a solved problem. 
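
To put a number on “a few bytes”: the information content of a shuffled deck’s order is log2(52!) bits, which a couple of lines of Python will compute (the exact figure comes out to a few dozen bytes, which only strengthens the point):

```python
# Information content of the order of a shuffled 52-card deck.
import math

bits = math.log2(math.factorial(52))
print(f"{bits:.0f} bits = about {bits / 8:.0f} bytes")  # ~226 bits, ~28 bytes
```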

With a game like chess–where the rules are perfectly deterministic and the playspace is limited–it’s just not surprising or interesting for computers to beat humans. In one important sense of the word, chess is just a grandiose version of Tic-Tac-Toe. What I mean is that there are only a finite number of moves to make in either Tic-Tac-Toe or chess. The number of moves in Tic-Tac-Toe is very small, and so it is an easily solved game. That’s the basic plot of WarGames, and the reason nobody enjoys playing Tic-Tac-Toe after they learn the optimal strategy when they’re like seven years old. Chess is not solved yet, but that’s just because the number of moves is much larger. It’s only a matter of time until we brute-force the solution to chess. Given all this, it’s not surprising that computers do well at chess: it is the kind of thing computers are good at. Just like memorization is the kind of thing computers are good at.
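
To make “solved game” concrete, here is a minimal brute-force sketch in Python. It exhaustively searches the entire Tic-Tac-Toe game tree (the whole point is that the tree is small enough to search) and confirms that perfect play by both sides always ends in a draw. Chess is the same idea with a tree that is, so far, far too large to exhaust:

```python
# Brute-forcing Tic-Tac-Toe: the whole game tree is small enough to
# search exhaustively, which is what it means for a game to be solved.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for X (+1 win, 0 draw, -1 loss) with best play by both sides."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full: draw
    nxt = "O" if player == "X" else "X"
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, cell in enumerate(board) if cell == "."]
    return max(results) if player == "X" else min(results)

# Perfect play from the empty board is always a draw.
print(value("." * 9, "X"))  # prints 0
```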

Now, the success of computers at playing Go is much more impressive. This is a case where the one aspect of artificial intelligence with any genuine promise–machine learning–really comes to the fore. Machine learning is overhyped, but it’s not just hyped. 

On top of successfully learning to play Go better than a human, machine learning was also used to dramatically increase the power of automated language translation. So there’s some exciting stuff happening here, but Go is still a nice, clean system with orderly rules that is amenable to automation in ways that real life–or even other games, like Starcraft–are not.

So let’s talk about Starcraft for a moment. I recently read an article that does a great job of providing a real-life example. It’s a PC Magazine article about the controversy over an AI that managed to defeat top-ranked human players in Starcraft II. Basically, a team created an AI (AlphaStar) to beat world-class Starcraft II players. Since Starcraft is a much more complex game (dozens of unit types, real-time interaction, etc.), this sounds really impressive. The problem is: they cheated.

When a human plays Starcraft, part of what they’re doing is looking at the screen and interpreting what they see. This is hard. So AlphaStar skipped it. Instead of building a system that could point a camera at the screen and use visual recognition to identify the units and terrain, the team (1) built AlphaStar to only play on one map over and over again, so the terrain never changed, and (2) tapped into the Starcraft data to directly get the exact location of all the units. Not only does this bypass the tricky visual-recognition and interpretation problem, it also meant that AlphaStar always knew where every single unit was at every single point in time (while human players can only see what’s on the screen and have to scroll around the map).

You could argue that Deep Blue didn’t use visual recognition either. The moves were fed into the computer directly. The difference is that human chess players work from exactly the same information–the positions of the pieces–so the playing field was even. Not so with AlphaStar.

That’s why the “victory” of AlphaStar over world-class Starcraft players was so controversial. The deck was stacked. The AI could see the entire map at the same time (which is impossible for humans–a restriction of the way the game is played, not just of human capacity), and it only won by playing on one map over and over again. If you moved AlphaStar to a different map, world-class players could have easily beaten it. Practically anyone could have easily beaten it.

So here’s the common theme between voice commands and AlphaStar: as soon as you take one step off the beaten path, they break. Just like a CLI, a VUI (like Alexa or Siri) breaks as soon as you enter a command it doesn’t perfectly expect. And AlphaStar goes from world-class pro to bumbling child if you swap from a level it’s been trained on to one it hasn’t.

The thing to realize is that this limitation isn’t just about how these programs perform today. It’s about the fundamental expectations we should have for them, ever.

Easy Problems and Hard Problems

This leads me to the underlying reason for all the hype around AI. It’s very, very difficult for non-experts to tell the difference between problems that are trivial and problems that are basically impossible. 

For a good overview of the concept, check out Range by David Epstein. He breaks the world into “kind problems” and “wicked problems”. Kind problems are problems like chess or playing Starcraft again and again on the same level with direct access to unit location. Wicked problems are problems like winning a live debate or playing Starcraft on a level you’ve never seen before, maybe with some new units added in for good measure.

If your job involves kind problems–if it’s repeatable with simple rules for success and failure–then a robot might steal your job. But if your job involves wicked problems–if you have to figure out a new approach to a novel situation on a regular basis–then your job is safe now and for the foreseeable future.

This doesn’t mean nobody should be worried. The story of technological progress has largely been one of automation. We used to need 95% or more of the human population to grow food just so we’d have enough to eat. Thanks to automation and labor augmentation, that proportion is down to the single digits. Every other job that exists, other than subsistence farming, exists because of advances in farming technology (and other labor-saving advances). In the long run: that’s great!

In the short run, it can be traumatic both individually and collectively. If you’ve invested decades of your life getting good at one of the tasks that robots can do, then it’s devastating to suddenly be told your skills–all that effort and expertise–are obsolete. And when this happens to large numbers of people, the result is societal instability.

So it’s not that the problem doesn’t exist. It’s more that it’s not a new problem, and it’s one we should manage as opposed to “solve”. The reason is that the only way to solve the problem would be to halt forward progress. And, unless you think going back to subsistence farming or hunter-gathering sounds like a good idea (and nobody really believes that, no matter what they say), we should look forward with optimism to the future developments that will free up more and more of our time and energy for work that isn’t automatable.

But we do need to manage that progress to mitigate the personal and social costs of modernization. Because there are costs, and even if they are ultimately outweighed by the benefits, that doesn’t mean they just disappear.

I Want to be Right

Not long ago I was in a Facebook debate and my interlocutor accused me of just wanting to be right. 

Interesting accusation.

Of course I want to be right. Why else would we be having this argument? But, you see, he wasn’t accusing me of wanting to be right but of wanting to appear right. Those are two very different things. One of them is just about the best reason for debate and argument you can have. The other is just about the worst. 

Anyone who has spent a lot of time arguing on the Internet has asked themselves what the point of it all is. The most prominent theory is the spectator theory: you will never convince your opponent, but you might convince the folks watching. There’s merit to that, but it also rests on a questionable assumption, which is that the default purpose is to win the argument by persuading the other person and (when that fails) we need to find some alternative. OK, but I question if we’ve landed on the right alternative.

I don’t think the primary importance of a debate is persuading spectators. The most important person for you to persuade in a debate is yourself.

It’s a truism these days that nobody changes their mind, and we all like to one-up each other with increasingly cynical takes on human irrationality and intractability. The list of cognitive biases on Wikipedia is getting so long that you start to wonder how humans manage to reason at all. Moral relativism and radical non-judgmentalism are grist for yet more “you won’t believe this” headlines, and of course there’s the holy grail of misanthropic cynicism: the argumentative theory. As Haidt summarizes one scholarly article on it:

Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments. That’s why they call it The Argumentative Theory of Reasoning. So, as they put it, “The evidence reviewed here shows not only that reasoning falls quite short of reliably delivering rational beliefs and rational decisions. It may even be, in a variety of cases, detrimental to rationality. Reasoning can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions. This explains the confirmation bias, motivated reasoning, and reason-based choice, among other things.”

Jonathan Haidt in “Righteous Mind”

Reasoning was not designed to pursue truth.

Well, there you have it. Might as well just admit that Randall Munroe was right and all pack it in, then, right?

Not so fast.

This whole line of research has run away with itself. We’ve sped right past the point of dispassionate analysis and deep into sensationalization territory. Case in point: the backfire effect. 

According to RationalWiki, “the effect is claimed to be that when, in the face of contradictory evidence, established beliefs do not change but actually get stronger.” The article goes on:

The backfire effect is an effect that was originally proposed by Brendan Nyhan and Jason Reifler in 2010 based on their research of a single survey item among conservatives… The effect was subsequently confirmed by other studies.

Entry on RationalWiki

If you’ve heard of it, it might be from a popular post by The Oatmeal. Take a minute to check it out. (I even linked to the clean version without all the profanity.)


Wow. Humans are so irrational that not only can you not convince them with facts, but if you present facts they believe the wrong stuff even more.

Of course, it’s not really “humans” that are this bad at reasoning. It’s some humans. The original research was based on conservatives, and the implicit subtext behind articles like the one on RationalWiki is that they are helplessly mired in irrational biases but we know how to conquer our biases, or at the very least make some small headway that separates us from the inferior masses. (Failing that, at least we’re raising awareness!) But I digress.

The important thing isn’t that this cynicism is always covertly at least a little one-sided; it’s that the original study has been really hard to replicate. From an article on Mashable:

[W]hat you should keep in mind while reading the cartoon is that the backfire effect can be hard to replicate in rigorous research. So hard, in fact, that a large-scale, peer-reviewed study presented last August at the American Political Science Association’s annual conference couldn’t reproduce the findings of the high-profile 2010 study that documented backfire effect.

Uh oh. Looks like the replication crisis–which has been just one part of the larger we-can’t-really-know-anything fad–has turned to bite the hand that feeds it. 

This whole post (the one I’m writing right now) is a bit weird for me, because when I started blogging my central focus was epistemic humility. And it’s still my driving concern. If I have a philosophical core, that’s it. And epistemic humility is all about the limits of what we (individually and collectively) can know. So, I never pictured myself being the one standing up and saying, “Hey, guys, you’ve taken this epistemic humility thing too far.” 

But that’s exactly what I’m saying.

Epistemic humility was never supposed to be a kind of “we can never know the truth for absolute certain so may as well give up” fatalism. Not for me, anyway. It was supposed to be about being humble in our pursuit of truth. Not in saying that the pursuit was doomed to fail so why bother trying.

I think even a lot of the doomsayers would agree with that. I quoted Jonathan Haidt on the argumentative theory earlier, and he’s one of my favorite writers. I’m pretty sure he’s not an epistemological nihilist. RationalWiki may get a little carried away with stuff like the backfire effect (they gave no notice on their site that other studies have failed to replicate the effect), but evidently they think there’s some benefit to telling people about it. Else, why bother having a wiki at all?

Taken to its extreme, epistemic humility is just as self-defeating as subjectivism. Subjectivism–the idea that truth is ultimately relative–is incoherent because if you say “all truth is relative” you’ve just made an objective claim. That’s the short version. For the longer version, read Thomas Nagel’s The Last Word.

The same goes for all this breathless humans-are-incapable-of-changing-their-minds stuff. Nobody who does all the hard work of researching and writing and teaching can honestly believe that in their bones. At least, not if you think (as I do) that a person’s actions are the best measure of their actual beliefs, rather than their own (unreliable) self-assessments.

Here’s the thing: if you agree with the basic contours of epistemic humility–with most of the cognitive biases and even the argumentative hypothesis–you end up at a place where you think human belief is a reward-based activity like any other. We are not truth-seeking machines that automatically and objectively crunch sensory data to manufacture beliefs that are as true as possible given the input. Instead, we have instrumental beliefs. Beliefs that serve a purpose. A lot of the time that purpose is “make me feel good” as in “rationalize what I want to do already” or “help me fit in with this social clique”.

I know all this stuff, and my reaction is: so what?

So what if human belief is instrumental? Because you know what, you can choose to evaluate your beliefs by things like “does it match the evidence?” or “is it coherent with my other beliefs?” Even if all belief is ultimately instrumental, we still have the freedom to choose to make truth the metric of our beliefs. (Or, since we don’t have access to truth, surrogates like “conformance with evidence” and “logical consistency”.)

Now, this doesn’t make all those cognitive biases just go away. This doesn’t disprove the argumentative theory. Let’s say it’s true. Let’s say we evolved the capacity to reason to make convincing (rather than true) arguments. OK. Again I ask: so what? Who cares why we evolved the capacity, now that we have it we get to decide what to do with it. I’m pretty sure we did not evolve opposable thumbs for the purpose of texting on touch-screen phones. Yet here we are and they seem adequate to the task. 

What I’m saying is this: epistemic humility and the associated body of research tell us that humans don’t have to conform their beliefs to truth, that we are incapable of conforming our beliefs perfectly to truth, and that it’s hard to conform our beliefs even mostly to truth. OK. But nowhere is it written that we can make no progress at all. Nowhere is it written that we cannot try, or that–when we try earnestly–we are doomed to make absolutely no headway at all.

I want to be right. And I’m not apologizing for that. 

So how do Internet arguments come into this? One way that we become right–individually and collectively–is by fighting over things. It’s pretty similar to the theory behind our adversarial criminal justice system. Folks who grow up in common law countries (of which the US is one) might not realize that’s not the way all criminal justice systems work. The other major alternative is the inquisitorial system (which is used in countries like France and Italy).

In an inquisitorial system, the court is the one that conducts the investigation. In an adversarial system the court is supposed to be neutral territory where two opposing camps–the prosecution and the defense–lay out their case. That’s where the “adversarial” part comes in: the prosecutors and defenders are the adversaries. In theory, the truth arises from the conflict between the two sides. The court establishes rules of fair play (sharing evidence, not lying) and–within those bounds–the prosecutors’ and defenders’ job is not to present the truest argument but the best argument for their respective side. 

The analogy is not a perfect one, of course. For one thing, we also have a presumption of innocence in the criminal justice system, because we’re not evaluating ideas; we’re evaluating people. That presumption of innocence is crucial in a real criminal justice system, but it has no exact analogue in the court of ideas.

For another thing, we have a judge to oversee trials and enforce the rules. There’s no impartial judge when you have a debate with randos on the Internet. This is unfortunate, because it means that if we don’t police ourselves in our debates, the whole process breaks down. There is no recourse.

When I say I want to be right, what am I saying, in this context? I’m saying that I want to know more at the end of a debate than I did at the start. That’s the goal. 

People like to say you never change anyone’s mind in a debate. What they really mean is that you never reverse someone’s position in a debate. And, while that’s not literally true, it’s pretty close. It’s really, really rare for someone to go into a single debate as pro-life (or whatever) and come out as pro-choice (or whatever). I have never seen someone make a swing that dramatic in a single debate. I certainly never have myself.

But it would be absurd to say that I never “changed my mind” because of the debates I’ve had about abortion. I’ve changed my mind hundreds of times. I’ve abandoned bad arguments and adopted or invented new ones. I’ve learned all kinds of facts about law and history and biology that I didn’t know before. I’ve even changed my position many times. Just because the positions were different variations within the theme of pro-life doesn’t mean I’ve never “changed my mind”. If you expect people to walk in with one big, complex set of ideas that are roughly aligned with a position (pro-life, pro-gun) and then walk out of a single conversation with a whole new set of ideas that are aligned under the opposite position (pro-choice, anti-gun), then you’re setting the bar way too high.

But all of this only works if the folks having the argument follow the rules. And–without a judge to enforce them–that’s hard.

This is where the other kind of wanting to “be right” comes in. One of the most common things I see in a debate (whether I’m having it or not) is that folks want to avoid having to admit they were wrong.

First, let me state emphatically that if you want to avoid admitting you were wrong you don’t actually care about being right in the sense that I mean it. Learning where you are wrong is just about the only way to become right! People who really want to “be right” embrace being wrong every time it happens because those are the stepping stones to truth. Every time you learn a belief or a position you took was wrong, you’re taking a step closer to being right.

But–going back to those folks who want to avoid appearing wrong–they don’t actually want to be right. They just want to appear right. They’re not worried about truth. They’re worried about prestige. Or ego. Or something else.

If you don’t care about being right and you only care about appearing right, then you don’t care about truth either. And these folks are toxic to the whole project of adversarial truth-seeking. Because they break the rules. 

What are the rules? Basic stuff like don’t lie, debate the issue not the person, etc. Maybe I’ll come up with a list. There’s a whole set of behaviors that can make your argument appear stronger while in fact all you’re doing is peeing in the pool for everyone who cares about truth. 

If you care about being right, then you will give your side of the debate your utmost. You’ll present the best evidence, use the tightest arguments, and throw in some rhetorical flourishes for good measure. But if you care about being right, then you will not break the rules to advance your argument (No lying!) and you also won’t just abandon your argument in midstream to switch to a new one that seems more promising. Anyone who does that–who swaps their claims mid-stream whenever they see one that shows a more promising temporary advantage–isn’t actually trying to be right. They’re trying to appear right. 

They’re not having an argument or a debate. They’re fighting for prestige or protecting their ego or doing something else that looks like an argument but isn’t actually one. 

I wrote this partially to vent. Partially to organize my feelings. But also to encourage folks not to give up hope, because if you believe that nobody cares about truth and changing minds is impossible then it becomes a self-fulfilling prophecy.

And you want to know the real danger of relativism and post-modernism and any other truth-averse ideology? Once truth is off the table as the goal, the only thing remaining is power.

As long as people believe in truth, there is a fundamentally cooperative aspect to all arguments. Even if you passionately think someone is wrong, if you both believe in truth then there is a sense in which you’re playing the same game. There are rules. And, more than rules, there’s a common last resort you’re both appealing to. No matter how messy it gets and despite the fact that nobody ever has direct, flawless access to truth, even the bitterest ideological opponents have that shred of common ground: they both think they are right, which means they both think “being right” is a thing you can, and should, strive to be.

But if you set that aside, then you sever the last thread between opponents and become nothing but enemies. If truth is not a viable recourse, all that is left is power. You have to destroy your opponent. Metaphorically at first. Literally if that fails. Nowhere does it say on the packaging of relativism “May lead to animosity and violence”. It’s supposed to do the opposite. It’s advertised as leading to tolerance and non-judgmentalism, but by taking truth off the table it does the opposite.

Humans are going to disagree. That’s inevitable. We will come into conflict. With truth as an option, there is no guarantee that the conflict will be non-violent, but non-violence is always an option. It can even be a conflict that exists in an environment of friendship, respect, and love. It’s possible for people who like and admire each other to have deep disagreements and to discuss them sharply but in a context of that mutual friendship. It’s not easy, but it’s possible.

Take truth off the table, and that option disappears. This doesn’t mean we go straight from relativism to mutual annihilation, but it does mean the only thing left is radical partisanship where each side views the other as an alien “other”. Maybe that leads to violence, maybe not. But it can’t lead to friendship, love, and unity in the midst of disagreement.

So I’ll say it one more time: I want to be right.

I hope you do, too.

If that’s the case, then there’s a good chance we’ll get into some thundering arguments. We’ll say things we regret and offend each other. Nobody is a perfect, rational machine. Biases don’t go away and ego doesn’t disappear just because we are searching for truth. So we’ll make mistakes and, hopefully, we’ll also apologize and find common ground. We’ll change each other’s minds and teach each other things and grudgingly earn each other’s respect. Maybe we’ll learn to be friends long before we ever agree on anything.

Because if I care about being right and you care about being right, then we already have something deep inside of us that’s the same. And even if we disagree about every single other thing, we will always have that.

In Favor of Real Meritocracy

The meritocracy has come in for a lot of criticism recently, basically in the form of two arguments. 

There’s a book by Daniel Markovits called The Meritocracy Trap that basically argues that meritocracy makes everyone miserable and unequal by creating this horrific grind to get into the most elite colleges and then, after you get your elite degree, to keep grinding away 60 to 100 hours a week to maintain your position at the top of the corporate hierarchy.

There was also a very interesting column by Ross Douthat that makes a separate but related point. According to Douthat, the WASP-y elite that dominated American society up until the early 20th century decided to “dissolve their own aristocracy” in favor of a meritocracy, but the meritocracy didn’t work out as planned because it sucks talent away from small locales (killing off the diverse regional cultures that we used to have) and because:

the meritocratic elite inevitably tends back toward aristocracy, because any definition of “merit” you choose will be easier for the children of these self-segregated meritocrats to achieve.

What Markovits and Douthat both admit without really admitting it is one simple fact: the meritocracy isn’t meritocratic.

Just to be clear, I’ll adopt Wikipedia’s definition of a meritocracy for this post:

Meritocracy is a political system in which economic goods and/or political power are vested in individual people on the basis of talent, effort, and achievement, rather than wealth or social class. Advancement in such a system is based on performance, as measured through examination or demonstrated achievement.

When people talk about meritocracy today, they’re almost always referring to the Ivy League and then–working forward and backward–to the kinds of feeder schools and programs that prepare kids to make it into the Ivy League and the types of high-powered jobs (and the culture surrounding them) that Ivy League students go on to after they graduate.

My basic point is a pretty simple one: there’s nothing meritocratic about the Ivy League. The old WASP-y elite did not, as Douthat put it, “dissolve”. It just went into hiding. Americans like to pretend that we’re a classless society, but it’s a fiction. We do have class. And the nexus for class in the United States is the Ivy League. 

If Ivy League admission were really meritocratic, it would be based as much as possible on objective admission criteria. This is hard to do, because even when you pick something that is in a sense objective–like SAT scores–you can’t overcome the fact that wealthy parents can and will hire tutors to artificially inflate their kids’ scores relative to what an equally bright, hard-working lower-class student can attain without all the expensive tutoring and practice tests.

Still, that’s nothing compared to the way that everything else that goes into college admissions–especially the litany of awards, clubs, and activities–tilts the game in favor of kids with parents who (1) know the unspoken rules of the game and (2) have cash to burn playing it. An expression I’ve heard is that the Ivy League is basically a privilege-laundering racket. It has a facade of being meritocratic, but the game is rigged so that all it really does is perpetuate social class. “Legacy” admissions are just the tip of the iceberg in that regard.

What’s even more outrageous than the fiction of meritocratic admission to the Ivy League (or other elite, private schools) is the equally absurd fiction that students with Ivy League degrees have learned some objectively quantifiable skillset that students from, say, state schools have not. There’s no evidence for this. 

So students from outside the social elite face double discrimination: first, because they don’t have an equal chance to get into the Ivy League, and second, because they then can’t compete with Ivy League graduates on the job market. It doesn’t matter how hard you work or how much you learn; your State U degree is never going to stand out on a resume the way Harvard or Yale does.

There’s nothing meritocratic about that. And that’s the point. The Ivy League-based meritocracy is a lie.

So I empathize with criticisms of American meritocracy, but it’s not actually a meritocracy they’re criticizing. It’s a sham meritocracy that is, in fact, just a covert class system. 

The problem is that if we blame the meritocracy and seek to circumvent it, we’re actually going to make things worse. I saw a WaPo headline that said, “No one likes the SAT. It’s still the fairest thing about admissions.” And that’s basically what I’m saying: “objective” scores can be gamed, but not nearly as much as the qualitative stuff. If you got rid of the SAT in college admissions, you would make admissions less meritocratic and also less fair. At least with the SAT, someone from outside the elite social classes has a chance to compete. Without that? Forget it.

Ideally, we should work to make our system a little more meritocratic by downplaying prestige signals like Ivy League degrees and emphasizing objective measurements more. But we’re never going to eradicate class entirely, and we shouldn’t go to radical measures to attempt it. Pretty soon, the medicine ends up worse than the disease if we go that route. That’s why you end up with absurd, totalitarian arguments that parents shouldn’t read to their children and that having an intact, loving, biological family is cheating. That way lies madness.

We should also stop pretending that our society is fully meritocratic. It’s not. And the denial is perverse. This is where Douthat was right on target:

[E]ven as it restratifies society, the meritocratic order also insists that everything its high-achievers have is justly earned… This spirit discourages inherited responsibility and cultural stewardship; it brushes away the disciplines of duty; it makes the past seem irrelevant, because everyone is supposed to come from the same nowhere and rule based on technique alone. As a consequence, meritocrats are often educated to be bad leaders, and bad people…

Like Douthat, I’m not calling for a return to WASP-y domination. (Also like Douthat, I’d be excluded from that club.) A diverse elite is better than a monocultural elite. But there’s one vital thing that the WASPy elite had going for it that any elite (and there’s always an elite) should reclaim:

the WASPs had at least one clear advantage over their presently-floundering successors: They knew who and what they were.

What Anti-Poverty Programs Actually Reduce Poverty?

According to the Tax Policy Center,

The earned income tax credit (EITC) provides substantial support to low- and moderate-income working parents, but very little support to workers without qualifying children (often called childless workers). Workers receive a credit equal to a percentage of their earnings up to a maximum credit. Both the credit rate and the maximum credit vary by family size, with larger credits available to families with more children. After the credit reaches its maximum, it remains flat until earnings reach the phaseout point. Thereafter, it declines with each additional dollar of income until no credit is available (figure 1).

By design, the EITC only benefits working families. Families with children receive a much larger credit than workers without qualifying children. (A qualifying child must meet requirements based on relationship, age, residency, and tax filing status.) In 2018, the maximum credit for families with one child is $3,461, while the maximum credit for families with three or more children is $6,431.

…Research shows that the EITC encourages single people and primary earners in married couples to work (Dickert, Houser, and Scholz 1995; Eissa and Liebman 1996; Meyer and Rosenbaum 2000, 2001). The credit, however, appears to have little effect on the number of hours they work once employed. Although the EITC phaseout could cause people to reduce their hours (because credits are lost for each additional dollar of earnings, which is effectively a surtax on earnings in the phaseout range), there is little empirical evidence of this happening (Meyer 2002).

The one group of people that may reduce hours of work in response to the EITC incentives is lower-earning spouses in a married couple (Eissa and Hoynes 2006). On balance, though, the increase in work resulting from the EITC dwarfs the decline in participation among second earners in married couples.

If the EITC were treated like earnings, it would have been the single most effective antipoverty program for working-age people, lifting about 5.8 million people out of poverty, including 3 million children (CBPP 2018).

The EITC is concentrated among the lowest earners, with almost all of the credit going to households in the bottom three quintiles of the income distribution (figure 2). (Each quintile contains 20 percent of the population, ranked by household income.) Very few households in the fourth quintile receive an EITC (fewer than 0.5 percent).
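Since the schedule described in that first paragraph is just a piecewise-linear function of earnings (phase-in, plateau, phaseout), here’s a minimal sketch of it in code. Only the $3,461 maximum credit (one child, 2018) comes from the quote; the phase-in rate, phaseout threshold, and phaseout rate are illustrative placeholders, not the actual IRS parameters.

```python
# Stylized EITC schedule: phase-in, plateau, phase-out.
# Only max_credit ($3,461, one child, 2018) is from the quote above;
# the other parameters are illustrative stand-ins, not IRS figures.

def eitc(earnings: float,
         credit_rate: float = 0.34,       # hypothetical phase-in rate
         max_credit: float = 3461.0,      # 2018 maximum, one child (quoted)
         phaseout_start: float = 18_000,  # hypothetical end of the plateau
         phaseout_rate: float = 0.16):    # hypothetical phaseout rate
    """Credit rises with earnings, flattens at the maximum, then declines to zero."""
    credit = min(credit_rate * earnings, max_credit)
    if earnings > phaseout_start:
        credit -= phaseout_rate * (earnings - phaseout_start)
    return max(credit, 0.0)

for e in (5_000, 12_000, 20_000, 35_000, 45_000):
    print(f"earnings ${e:>6,}: credit ${eitc(e):,.2f}")
```

The phaseout branch is also why the quote calls the lost credits “effectively a surtax” on earnings in that range: each extra dollar earned there claws back sixteen (hypothetical) cents of credit.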

Recent evidence supports this view of the EITC. From a brand new article in Contemporary Economic Policy:

First, the evidence suggests that longer-run effects of the EITC (“Our working definition of ‘longer run’ in this study is 10 years,” pg. 2) are to increase employment and to reduce poverty and public assistance, as long as we rely on national as well as state variation in EITC policy. Second, tighter welfare time limits also appear to reduce poverty and public assistance in the longer run. We also find some evidence that higher minimum wages, in the longer run, may lead to declines in poverty and the share of families on public assistance, whereas higher welfare benefits appear to have adverse longer-run effects, although the evidence on minimum wages and welfare benefits—and especially the evidence on minimum wages—is not robust to using only more recent data, nor to other changes. In our view, the most robust relationships we find are consistent with the EITC having beneficial longer-run impacts in terms of reducing poverty and public assistance, whereas there is essentially no evidence that more generous welfare delivers such longer-run benefits, and some evidence that more generous welfare has adverse longer-run effects on poverty and reliance on public assistance—especially with regard to time limits (pg. 21).

Let’s stick with programs that work.

Do Tariffs Cancel Out the Benefits of Deregulation?

In June, the Council of Economic Advisers released a report on the economic effects of the Trump administration’s deregulation. They estimate “that after 5 to 10 years, this new approach to Federal regulation will have raised real incomes by $3,100 per household per year. Twenty notable Federal deregulatory actions alone will be saving American consumers and businesses about $220 billion per year after they go into full effect. They will increase real (after-inflation) incomes by about 1.3 percent” (pg. 1).

David Henderson (former senior economist in Reagan’s Council of Economic Advisers) writes, “Do the authors make a good case for their estimate? Yes…I wonder, though, what the numbers would look like if they included the negative effects on real income of increased restrictions on immigration and increased restrictions on trade with Iran. (I’m putting aside increased tariffs, which also hurt real U.S. income, because tariffs are generally categorized as taxes, not regulation.)”

But what if we did include the tariffs? A recent policy brief suggests that the current savings from deregulation will actually be cancelled out by the new tariffs. As the table below shows, the savings due to deregulation add up to $46.5 billion as of June. However, the tariffs imposed between January 2017 and June 2019 rack up a deadweight loss of $13.6 billion, and by the end of 2019 the deadweight loss will grow by another $32.1 billion. If the currently planned tariffs are put into effect on top of the already existing ones, then we’re looking at a deadweight loss of up to $121.1 billion.
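To make the “cancelled out” claim concrete, here’s the arithmetic implied by those figures, in billions of dollars per year (the rough $75 billion attributed to the planned tariffs is my inference from the totals, not a number stated in the brief):

$$
\underbrace{13.6}_{\text{Jan 2017–Jun 2019}} + \underbrace{32.1}_{\text{rest of 2019}} = 45.7 \approx 46.5 \;\;(\text{deregulation savings})
$$

with the planned tariffs accounting for roughly $121.1 - 45.7 = 75.4$ billion on top of that.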

Maybe if economists start putting clap emojis in their work, people will finally get that tariffs aren’t good for the economy.

Demographics & Inequality: 2018 Edition

Every year, economist Mark Perry draws on Census Bureau reports to paint a picture of the demographics of inequality. Looking at 2018 data, he constructed the following table:

Once again, he concludes,

Household demographics, including the average number of earners per household and the marital status, age, and education of householders are all very highly correlated with Americans’ household income. Specifically, high-income households have a greater average number of income-earners than households in lower-income quintiles, and individuals in high-income households are far more likely than individuals in low-income households to be well-educated, married, working full-time, and in their prime earning years. In contrast, individuals in lower-income households are far more likely than their counterparts in higher-income households to be less-educated, working part-time, either very young (under 35 years) or very old (over 65 years), and living in single-parent or single households.

The good news in the Census Bureau data is that the key demographic factors that explain differences in household income are not fixed over our lifetimes and are largely under our control (e.g., staying in school and graduating, getting and staying married, working full-time, etc.), which means that individuals and households are not destined to remain in a single income quintile forever. Fortunately, studies that track people over time find evidence of significant income mobility in America, such that individuals and households move up and down the income quintiles over their lifetimes as the key demographic variables highlighted above change (see related CD posts here, here, and here). Those links highlight the research of social scientists Thomas Hirschl (Cornell) and Mark Rank (Washington University) showing that, as a result of dynamic income mobility, nearly 70% of Americans will be in the top income quintile for at least one year while almost one-third will be in the top quintile for ten years or more (see chart below).

What’s more, Perry points out elsewhere that the new data demonstrate that the middle class is shrinking…along with the lower class. Meanwhile, the percentage of high-income households has more than tripled since 1967:

In short, the percentage of middle and lower-income households has declined because they’ve been moving up.

The Paradox of Trade Liberalization

From a brand new study in the Journal of International Economics:

Using household survey data for 54 low and middle income countries harmonized with trade and tariff data, this paper offers a quantitative assessment of the income gains and inequality costs of trade liberalization and the potential trade-off between them.

A stylized yet comprehensive model that allows for a rich range of first-order effects on household consumption and income is used to quantify welfare gains or losses for households in different parts of the expenditure distribution. These welfare impacts are subsequently explored by deploying the Atkinson social welfare function that allows us to decompose inequality adjusted gains into aggregate gains and equality (distributional) gains.

Liberalization is estimated to lead to income gains in 45 countries in our study, and to income losses in 9 countries. The developing world as a whole would enjoy gains of about 1.9% of real household expenditures, on average. These income gains are negatively correlated with equality gains, such that liberalization typically entails a trade-off between average incomes and income inequality. In fact, such trade-offs arise in 45 out of 54 countries, and are primarily the result of trade exacerbating income inequality. By contrast, consumption gains tend to be more evenly spread across households.

While trade-offs are prevalent, our findings also suggest that liberalization would be welfare enhancing in the vast majority of countries in our study: in a large part of the developing world, the current structure of tariff protection is inducing sizable welfare losses. Explaining what drives these patterns is beyond the scope of this paper but an interesting avenue for future research (pg. 16).
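For readers who want the machinery behind that decomposition: in the standard Atkinson setup (textbook form; the paper’s exact notation may differ), welfare is evaluated as the “equally distributed equivalent” income, which factors neatly into an aggregate term and an equality term:

$$
W_\epsilon = \left(\frac{1}{n}\sum_{i=1}^{n} y_i^{\,1-\epsilon}\right)^{\frac{1}{1-\epsilon}} = \underbrace{\bar{y}}_{\text{aggregate gains}} \times \underbrace{(1 - A_\epsilon)}_{\text{equality gains}} \qquad (\epsilon \neq 1)
$$

where $y_i$ is household income (or expenditure), $\bar{y}$ is the mean, $A_\epsilon$ is the Atkinson inequality index, and $\epsilon$ sets the degree of inequality aversion. Liberalization that raises $\bar{y}$ while shrinking $(1 - A_\epsilon)$ produces exactly the trade-off the authors report in 45 of 54 countries.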

I’m sure this offers a bit of a conundrum for those who have conflated concerns over inequality with caring for the poor.

Is Religious Faith a Global Force for Good?

According to a new report from the Institute for Family Studies and the Wheatley Institution, religion appears to be a net gain “in 11 countries in the Americas, Europe, and Oceania.” From the executive summary:

When it comes to relationship quality in heterosexual relationships, highly religious couples enjoy higher-quality relationships and more sexual satisfaction, compared to less/mixed religious couples and secular couples. For instance, women in highly religious relationships are about 50% more likely to report that they are strongly satisfied with their sexual relationship than their secular and less religious counterparts. Joint decision-making, however, is more common among men in shared secular relationships and women in highly religious relationships, compared to their peers in less/mixed religious couples.

When it comes to fertility, data from low-fertility countries in the Americas, East Asia, and Europe show that religion’s positive influence on fertility has become stronger in recent decades. Today, people ages 18-49 who attend religious services regularly have 0.27 more children than those who never, or practically never, attend. The report also indicates that marriage plays an important role in explaining religion’s continued positive influence on childbearing because religious men and women are more likely to marry compared to their more secular peers, and the married have more children than the unmarried.

When it comes to domestic violence, religious couples in heterosexual relationships do not have an advantage over secular couples or less/mixed religious couples. Measures of intimate partner violence (IPV)—which includes physical abuse, as well as sexual abuse, emotional abuse, and controlling behaviors—do not differ in a statistically significant way by religiosity. Slightly more than 20% of the men in our sample report perpetrating IPV, and a bit more than 20% of the women in our sample indicate that they have been victims of IPV in their relationship. Our results suggest, then, that religion is not protective against domestic violence for this sample of couples from the Americas, Europe, and Oceania. However, religion is not an increased risk factor for domestic violence in these countries, either.

The relationships between faith, feminism, and family outcomes are complex. The impact of gender ideology on the outcomes covered in this report, for instance, often varies by the religiosity of our respondents. When it comes to relationship quality, we find a J-Curve in overall relationship quality for women, such that women in shared secular, progressive relationships enjoy comparatively high levels of relationship quality, whereas women in the ideological and religious middle report lower levels of relationship quality, as do traditionalist women in secular relationships; but women in highly religious relationships, especially traditionalists, report the highest levels of relationship quality. For domestic violence, we find that progressive women in secular relationships report comparatively low levels of IPV compared to conservative women in less/mixed religious relationships. In sum, the impact of gender ideology on contemporary family life may vary a great deal by whether or not a couple is highly religious, nominally religious, or secular.

There’s also some useful data on family prayer and worldwide family structure, socioeconomic conditions, family satisfaction, and attitudes and norms. Check it out.