The Real Social Dilemma

The Social Dilemma is a newly available documentary on Netflix about the peril of social networks. 

The documentary does a decent job of introducing some of the ways social networks (Facebook, Twitter, Pinterest, etc.) are negatively impacting society. If this is your entry point to the topic, you could do worse.

But if you’re looking for a really thorough analysis of what is going wrong or for possible solutions, then this documentary will leave you wanting more. Here are four specific topics–three small and one large–where The Social Dilemma fell short.

AI Isn’t That Impressive

I published a piece in June on Why I’m an AI Skeptic and a lot of what I wrote then applies here. Terms like “big data” and “machine learning” are overhyped, and non-experts don’t realize that these tools are only at their most impressive in a narrow range of circumstances. In most real-world cases, the results are dramatically less impressive. 

The reason this matters is that a lot of the oomph of The Social Dilemma comes from scaring people, and AI just isn’t actually that scary. 

Randall Munroe’s What If article is technically about a robot apocalypse, but the gist of it applies to AI as well.

I don’t fault the documentary for not going too deep into the details of machine learning. Without a background in statistics and computer science, it’s hard to get into the details. That’s fair. 

I do fault them for sensationalism, however. At one point Tristan Harris (one of the interviewees) makes a really interesting point that we shouldn’t be worried about when AI surpasses human strengths, but when it surpasses human weaknesses. We haven’t reached the point where AI is better than a human at the things humans are good at–creative thinking, language, etc. But we’ve already long since passed the point where AI is better than humans at things humans are bad at, such as memorizing and crunching huge data sets. If AI is deployed in ways that leverage human weaknesses, like our cognitive biases, then we should already be concerned. So far this is reasonable, or at least interesting.

But then his next slide (they’re showing a clip of a presentation he was giving) says something like: “Checkmate humanity.”

I don’t know if the sensationalism is in Tristan’s presentation or The Social Dilemma’s editing, but either way I had to roll my eyes.

All Inventions Manipulate Us

At another point, Tristan tries to illustrate how social media is fundamentally unlike other human inventions by contrasting it with a bicycle. “No one got upset when bicycles showed up,” he says. “No one said…. we’ve just ruined society. Bicycles are affecting society, they’re pulling people away from their kids. They’re ruining the fabric of democracy.”

Of course, this isn’t really true. Journalists have always sought sensationalism and fear as a way to sell their papers, and–as this humorous video shows–there was all kinds of panic around the introduction of bicycles.

Tristan’s real point, however, is that bicycles were a passive invention. They don’t actively badger you to get you to go on bike rides. They just sit there, benignly waiting for you to decide to use them or not. In this view, you can divide human inventions into everything before social media (inanimate objects that obediently do our bidding) and after social media (animate objects that manipulate us into doing their bidding).

That dichotomy doesn’t hold up. 

First of all, every successful human invention changes behavior individually and collectively. If you own a bicycle, then the route you take to work may very well change. In a way, the bike does tell you where to go. 

To make this point more strongly, try to imagine what 21st century America would look like if the car had never been invented. No interstate highway system, no suburbs or strip malls, no car culture. For better and for worse, the mere existence of a tool like the car transformed who we are both individually and collectively. All inventions have cultural consequences like that, to a greater or lesser degree.

Second, social media is far from the first invention that explicitly sets out to manipulate people. If you believe the argumentative theory, then language and even rationality itself evolved primarily as ways for our primate ancestors to manipulate each other. It’s literally what we evolved to do, and we’ve never stopped.

Propaganda, disinformation campaigns, and psy-ops are one obvious category of examples with roots stretching back into prehistory. But, to bring things closer to social networks, all ad-supported broadcast media have basically the same business model: manipulate people to captivate their attention so that you can sell them ads. That’s how radio and TV got their commercial start: with the exact same mission statement as Gmail, Google search, or Facebook.

So much for the idea that you can divide human inventions into before and after social media. It turns out that all inventions influence the choices we make and plenty of them do so by design.

That’s not to say that nothing has changed, of course. The biggest difference between social networks and broadcast media is that your social networking feed is individualized.

With mass media, companies had to either pick and choose their audience in broad strokes (Saturday morning for kids, prime time for families, late night for adults only) or try to address two audiences at once (inside jokes for the adults in animated family movies marketed to children). With social media, it’s kind of like you have a radio station or a TV studio that is geared just towards you.

Thus, social media does present some new challenges, but we’re talking about advancements and refinements to humanity’s oldest game–manipulating other humans–rather than some new and unprecedented development with no precursor or context. 

Consumerism is the Real Dilemma

The most interesting subject in the documentary, to me at least, was Jaron Lanier. When everyone else was repeating that cliché about “you’re the product, not the customer” he took it a step or two farther. It’s not that you are the product. It’s not even that your attention is the product. What’s really being sold by social media companies, Lanier pointed out, is the ability to incrementally manipulate human behavior.

This is an important point, but it raises a much bigger issue that the documentary never touched. 

This is the amount of money spent in the US on advertising as a percent of GDP over the last century:

[Chart: US advertising spending as a percentage of GDP over the last century. Source: Wikipedia]

It’s interesting to note that we spent a lot more (relative to the size of our economy) on advertising in the 1920s and 1930s than we do today. What do you think companies were buying for their advertising dollars in 1930 if not “the ability to incrementally manipulate human behavior”?

Because if advertising doesn’t manipulate human behavior, then why spend the money? If you couldn’t manipulate human behavior with a billboard or a movie trailer or a radio spot, then nobody would ever spend money on any of those things.

This is the crux of my disagreement with The Social Dilemma. The poison isn’t social media. The poison is advertising. The danger of social media is just that (within the current business model) it’s a dramatically more effective method of delivering the poison.

Let me stipulate that advertising is not an unalloyed evil. There’s nothing intrinsically wrong with showing people a new product or service and trying to persuade them to pay you for it. The fundamental premise of a market economy is that voluntary exchange is mutually beneficial. It leaves both people better off. 

And you can’t have voluntary exchange without people knowing what’s available. Thus, advertising is necessary to human commerce and is part of an ecosystem of flourishing, mutually beneficial exchanges and healthy competition. You could not have modern society without advertising of some degree and type.

That doesn’t mean the amount of advertising–or the kind of advertising–that we accept in our society is healthy. As with basically everything, the difference between poison and medicine is found in the details of dosage and usage.

There was a time, not too long ago, when the Second Industrial Revolution led to such dramatically increased levels of production that economists seriously theorized about ever shorter work weeks with more and more time spent pursuing art and leisure with our friends and families. Soon, we’d spend only ten hours a week working, and the rest developing our human potential.

And yet in the time since then, we’ve seen productivity skyrocket (we can make more and more stuff with the same amount of time) while hours worked have remained roughly steady. The simplest reason for this? We’re addicted to consumption. Instead of holding production basically constant (and working fewer and fewer hours), we’ve tried to maximize consumption by keeping as busy as possible. This addiction to consumption, not necessarily to having stuff but to acquiring it, manifests in some really weird cultural anomalies that–if we witnessed them from an alien perspective–would probably strike us as dysfunctional or even pathological.

I’ll start with a personal example: when I’m feeling a little down I can reliably get a jolt of euphoria from buying something. Doesn’t have to be much. Could be a gadget or a book I’ve wanted on Amazon. Could be just going through the drive-thru. Either way, clicking that button or handing over my credit card to the Chick-Fil-A worker is a tiny infusion of order and control in a life that can seem confusingly chaotic and complex. 

It’s so small that it’s almost subliminal, but every transaction is a flex. The benefit isn’t just the food or book you purchase. It’s the fact that you demonstrated the power of being able to purchase it. 

From a broader cultural perspective, let’s talk about unboxing videos. These are videos–you can find thousands upon thousands of them on YouTube–where someone gets a brand new gizmo and films a kind of ritualized process of unpacking it. 

This is distinct from a product review (a separate and more obviously useful genre). Some unboxing videos have little tidbits of assessment, but that’s beside the point. The emphasis is on the voyeuristic appeal of watching someone undress an expensive, virgin item. 

And yeah, I went with deliberately sexual language in that last sentence because it’s impossible not to see the parallels between brand newness and virginity, or between ornate and sophisticated product packaging and fashionable clothing, or between unboxing an item and unclothing a person. I’m not saying it’s literally sexual, but the parallels are too strong to ignore.

These do not strike me as the hallmarks of a healthy culture, and I haven’t even touched on the vast amounts of waste. Of course there’s the literal waste, both from all that aforementioned packaging and from replacing consumer goods (electronics, clothes, etc.) at an ever-faster pace. There’s also the opportunity cost, however. If you spend three or four or ten times more on a pair of shoes to get the right brand and style than you would on a pair of equally serviceable shoes without the right branding, well… isn’t that waste? You could have spent the money on something else or, better still, saved it or even worked less.

This rampant consumerism isn’t making us objectively better off or happier. It’s impossible to separate consumerism from status, and status is a zero-sum game. For every winner, there must be a loser. And that means that, as a whole, status-seeking can never make us better off. We’re working ourselves to death to try and win a game that doesn’t improve our world. Why?

Advertising is the proximate cause. Somewhere along the way advertisers realized that instead of trying to persuade people directly that this product would serve some particular need, you could bypass the rational argument and appeal to subconscious desires and fears. Doing this allows for things like “brand loyalty.” It also detaches consumption from need. You can have enough physical objects, but can you ever have enough contentment, or security, or joy, or peace?

So car commercials (to take one example) might mention features, but most of the work is done by stoking your desires: for excitement if it’s a sports car, for prestige if it’s a luxury car, or for competence if it’s a pickup truck. Then those desires are associated with the make and model of the car and presto! The car purchase isn’t about the car anymore. It’s about your aspirations as a human being. 

The really sinister side-effect is that when you hand over the cash to buy whatever you’ve been persuaded to buy, what you’re actually hoping for is not a car or ice cream or a video game system. What you’re actually seeking is the fulfillment of a much deeper desire for belonging or safety or peace or contentment. Since no product can actually meet those deeper desires, advertising simultaneously stokes longing and redirects us away from avenues that could potentially fulfill it. We’re all like Dumbledore in the cave, drinking poison that only makes us thirstier and thirstier.

One commercial will not have any discernible effect, of course, but life in 21st century America is a life saturated by these messages. 

And if you think it’s bad enough when the products sell you something external, what about all the products that promise to make you better? Skinnier, stronger, tanner, whatever. The whole outrage of fashion models photoshopped past biological possibility is just one corner of the overall edifice of an advertising ecosystem that is calculated to make us hungry and then sell us meals of thin air. 

I developed this theory that advertising fuels consumerism, which sabotages our happiness at an individual and social level, when I was a teenager in the 1990s. There was no social media back then.

So, getting back to The Social Dilemma, the problem isn’t that life was fine and dandy and then social networking came and destroyed everything. The problem is that we already lived in a sick, consumerist society where advertising inflamed desires and directed them away from any hope of fulfillment, and then social media made it even worse.

After all, everything that social media does has been done before. 

News feeds are tweaked to keep you scrolling endlessly? Radio stations have endlessly fiddled with their formulas for placing advertisements to keep you from changing that dial. TV shows were written around advertising breaks to make sure you waited for the action to continue. (Watch any old episode of Law and Order to see what I mean.) Social media does the same thing; it’s just better at it. (Partially through individualized feeds and AI algorithms, but also through effectively crowd-sourcing the job: every meme you post contributes to keeping your friends and family ensnared.)

Advertisements that bypass objective appeals to quality or function and appeal straight to your personal identity, your hopes, your fears? Again, this is old news. Consider the fact that you immediately picture in your mind different stereotypes for the kind of person who drives a Ford F-150, a Subaru Outback, or a Honda Civic. Old-fashioned advertisements were already well on the way to fracturing society into “image tribes” that defined themselves and each other at least in part in terms of their consumption patterns. Social media just doubled down on that trend by allowing increasingly smaller and more homogeneous tribes to find and socialize with each other (and be targeted by advertisers).

So the biggest thing that was missing from The Social Dilemma was the realization that social media isn’t some strange new problem. It’s an old problem made worse.

Solutions

The final shortcoming of The Social Dilemma is that there were no solutions offered. This is an odd gap because at least one potential solution is pretty obvious: stop relying on ad supported products and services. If you paid $5 / month for your Facebook account and that was their sole revenue stream (no ads allowed), then a lot of the perverse incentives around manipulating your feed would go away.

Another solution would be stricter privacy controls. As I mentioned above, the biggest differentiator between social media and older, broadcast media is individualization. I’ve read (can’t remember where) about the idea of privacy collectives: groups of consumers could band together, withhold their data from social media companies, and then dole it out in exchange for revenue (why shouldn’t you get paid for the advertisements you watch?) or just refuse to participate at all.

These solutions have drawbacks. It sounds nice to get paid for watching ads (nicer than the alternative, anyway) and to have control over your data, but there are some fundamental economic realities to consider. “Free” services like Facebook and Gmail and YouTube can never actually be free. Someone has to pay for the servers, the electricity, the bandwidth, the developers, and all of that. If advertisers don’t, then consumers will need to. Individuals can opt out and basically free-ride on the rest of us, but if everyone actually did it then the system would collapse. (That’s why I don’t use ad blockers, by the way. It violates the categorical imperative.)

And yeah, paying $5/month to Twitter (or whatever) would significantly change the incentives to manipulate your feed, but it wouldn’t actually make them go away. They’d still have every incentive to keep you as highly engaged as possible, to make sure you never canceled your subscription and that you enlisted all your friends to sign up, too.

Still, it would have been nice if The Social Dilemma had spent some time talking about specific possible solutions.

On the other hand, here’s an uncomfortable truth: there might not be any plausible solutions. Not the kind a Netflix documentary is willing to entertain, anyway.

In the prior section, I said “advertising is the proximate cause” of consumerism (emphasis added this time). I think there is a deeper cause, and advertising–the way it is done today–is only a symptom of that deeper cause.

When you stop trying to persuade people to buy your product directly–by appealing to their reason–and start trying to bypass their reason to appeal to subconscious desires you are effectively dehumanizing them. You are treating them as a thing to be manipulated. As a means to an end. Not as a person. Not as an end in itself. 

That’s the supply side: consumerism is a reflection of our willingness to tolerate treating each other as things. We don’t love others.

On the demand side, the emptier your life is, the more susceptible you become to this kind of advertising. Someone who actually feels belonging in their life on a consistent basis isn’t going to be easily manipulated into buying beer (or whatever) by appealing to that need. Why would they? The need is already being met.

That’s the demand side: consumerism is a reflection of how much meaning is missing from so many of our lives. We don’t love God (or, to be less overtly religious, feel a sense of duty and awe towards transcendent values).

As long as these underlying dysfunctions are in place, we will never successfully detoxify advertising through clever policies and incentives. There’s no conceivable way to reasonably enforce a law that says “advertising that objectifies consumers is illegal,” and any such law would violate the First Amendment in any case. 

The difficult reality is that social media is not intrinsically toxic any more than advertising is intrinsically toxic. What we’re witnessing is our cultural maladies amplified and reflected back through our technologies. They are not the problem. We are.

Therefore, the one and only way to detoxify our advertising and social media is to overthrow consumerism at the root. Not with creative policies or stringent laws and regulations, but with a fundamental change in our cultural values. 

We have the template for just such a revolution. The most innovative inheritance of the Christian tradition is the belief that, as children of God, every human life is individually and intrinsically valuable. An earnest embrace of this principle would make manipulative advertising unthinkable and intolerable. Christianity–like all great religions, but perhaps with particular emphasis–also teaches that a valuable life is found only in the service of others, service that would fill the emptiness in our lives and make us dramatically less susceptible to manipulation in the first place.

This is not an idealistic vision of Utopia. I am not talking about making society perfect. Only making it incrementally better. Consumerism is not binary. The sickness is a spectrum. Every step we could take away from our present state and towards a society more mindful of transcendent ideals (truth, beauty, and the sacred) and more dedicated to the love and service of our neighbors would bring a commensurate reduction in the sickness of manipulative advertising that results in tribalism, animosity, and social breakdown. 

There’s a word for what I’m talking about, and the word is: repentance. Consumerism, the underlying cause of toxic advertising that is the kernel of the destruction wrought by social media, is the cultural incarnation of our pride and selfishness. We can’t jury rig an economic or legal solution to a fundamentally spiritual problem. 

We need to renounce what we’re doing wrong, and learn–individually and collectively–to do better.

Cancel Culture Is Real

Last month, Harper’s published an anti-cancel culture statement: A Letter on Justice and Open Debate. The letter was signed by a wide variety of writers and intellectuals, ranging from Noam Chomsky to J. K. Rowling. It was a kind of radical centrist manifesto, including major names like Jonathan Haidt and John McWhorter (two of my favorite writers) and also crossing lines to pick up folks like Matthew Yglesias (not one of my favorite writers, but I give him respect for putting his name to this letter).

The letter kicked up a storm of controversy from the radical left, which basically boiled down to two major contentions. 

  1. There is no such thing as cancel culture.
  2. Everyone has limits on what speech they will tolerate, so there’s no difference between the social justice left and the liberal left other than where to draw the lines.

The “Profound Consequences” of Cancel Culture

The first contention was represented in pieces like this one from the Huffington Post: Don’t Fall For The ‘Cancel Culture’ Scam. In the piece, Michael Hobbes writes:

While the letter itself, published by the magazine Harper’s, doesn’t use the term, the statement represents a bleak apogee in the yearslong, increasingly contentious debate over “cancel culture.” The American left, we are told, is imposing an Orwellian set of restrictions on which views can be expressed in public. Institutions at every level are supposedly gripped by fears of social media mobs and dire professional consequences if their members express so much as a single statement of wrongthink.

This is false. Every statement of fact in the Harper’s letter is either wildly exaggerated or plainly untrue. More broadly, the controversy over “cancel culture” is a straightforward moral panic. While there are indeed real cases of ordinary Americans plucked from obscurity and harassed into unemployment, this rare, isolated phenomenon is being blown up far beyond its importance.

There is a kernel of truth to what Hobbes is saying, but it is only a kernel. Not that many ordinary Americans are getting “canceled”, and some of those who are canceled are not entirely expunged from public life. They don’t all lose their jobs.

But then, they don’t all have to lose their jobs for the rest of us to get the message, do they?

The basic analytical framework here is wrong. Hobbes assumes that the “profound consequences” of cancel culture have yet to be manifest. “Again and again,” he writes, “the decriers of ‘cancel culture’ intimate that if left unchecked, the left’s increasing intolerance for dissent will result in profound consequences.”

The reason he can talk about hypothetical future consequences is that he’s thinking about the wrong consequences. Hobbes appears to think that the purpose of cancel culture is to cancel lots and lots of people. If we don’t see hordes–thousands, maybe tens of thousands–of people canceled, then there aren’t any “profound consequences”.

This is absurd. The mob doesn’t break kneecaps for the sake of breaking kneecaps. They break kneecaps to send a message to everyone else to pay up without resisting. Intimidation campaigns do not exist to make examples out of everyone. They make examples out of (a few) people in order to intimidate many more.

Cancel culture is just such an intimidation campaign, and so the “profound consequences” aren’t the people who are canceled. The “profound consequences” are the people–not thousands or tens of thousands but millions–who hide their beliefs and stop speaking their minds because they’re afraid. 

And yes, I mean millions. Cato does polls on that topic, and they found that 58% of Americans had “political views they’re afraid to share” in 2017 and, as of just a month ago, that number has climbed to 62%. 

Gee, nearly two-thirds of Americans are afraid to speak their minds. How’s that for “profound consequences”?

Obviously Cato has a viewpoint here, but other studies are finding similar results. Politico did their own poll, and while it didn’t ask about self-censoring, it did ask what Americans think about cancel culture. According to the poll, 46% think it has gone “too far” while only 10% think it has gone “not far enough”. 

Moreover, these polls also reinforce something obvious: cancel culture is not just some general climate of acrimony. According to both the Cato and Politico polls, Republicans are much more likely to self-censor as a result of cancel culture (77% vs 52%) and Democrats are much more likely to participate in the silencing (~50% of Democrats “have voiced their displeasure with a public figure on social media” vs. ~30% of Republicans).

Contrast these poll results with what Hobbes calls the “pitiful stakes” of cancel culture. He mocks low-grade intimidation like “New York Magazine published a panicked story about a guy being removed from a group email list.” Meanwhile, more than three quarters of Republicans are afraid to be honest about their own political beliefs. We don’t need to worry about hypothetical future profound consequences. They’re already here.

What Makes Cancel Culture Different

The second contention–which is that everyone has at least some speech they’d enthusiastically support canceling–is a more serious objection. After all: it’s true. All but the very most radical of free speech defenders will draw the line somewhere. If this is correct, then isn’t cancel culture just a redrawing of boundaries that have always been present?

To which I answer: no. There really is something new and different about cancel culture, and it’s not just the speed or ferocity of its adherents.

The difference goes back to a post I wrote a few months ago about the idea of an ideological demilitarized zone. I don’t think I clearly articulated my point in that post, so I’m going to reframe it (very briefly) in this one.

A normal, healthy person will draw a distinction between opinions they disagree with and actively oppose and opinions they disagree with that merit toleration or even consideration. That’s what I call the “demilitarized zone”: the collection of opinions that you think are wrong but also reasonable and defensible.

Cancel culture has no DMZ.

Think I’m exaggerating? This is a post from a Facebook friend (someone I know IRL) just yesterday:

You can read the opinion of J. K. Rowling for yourself here. Agree or disagree, it is very, very hard for any reasonable person to come away thinking that Rowling has anything approaching personal animus towards anyone who is transgender for being transgender. (The kind of animus that might justify calling someone a “transphobic piece of sh-t” and trying to retcon her out of reality.) In the piece, she writes with empathy and compassion of the transgender community and states emphatically that, “I know transition will be a solution for some gender dysphoric people,” adding that:

Again and again I’ve been told to ‘just meet some trans people.’ I have: in addition to a few younger people, who were all adorable, I happen to know a self-described transsexual woman who’s older than I am and wonderful.

So here’s the difference between cancel culture and basically every other viewpoint on the political spectrum: other viewpoints can acknowledge shades of grey and areas where reasonable people can see things differently; cancel culture can’t and won’t. Cancel culture is binary (ironically). You’re either 100% in conformity with the ideology or you’re “a —–phobic piece of sh-t”.

This is not incidental, by the way. Liberal traditions trace their roots back to the Enlightenment and include an assumption that truth exists as an objective category. As long as that’s the case–as long as there’s an objective reality out there–then there is a basis for discussion about it. There’s also room for mistaken beliefs about it. 

Cancel culture traces its roots back to critical theory, which rejects notions of reason and objective truth and sees instead only power. It’s not the case that people are disagreeing about a mutually accessible, external reality. Instead, all we have are subjective truth claims which can be maintained–not by appeal to evidence or logic–but only through the exercise of raw power.

Liberal traditions–be they on the left or on the right–view conflict through a lens that is philosophically compatible with humility, correction, cooperation, and compromise. That’s not to say that liberal traditions actually inhabit some kind of pluralist Utopia where no one plays dirty to win. It’s not like American politics (or politics anywhere) existed in some kind of genteel Garden of Eden until critical theory showed up. But no matter how acrimonious or dirty politics got before cancel culture, there was also the potential for cross-ideological discussion. Cancel culture doesn’t even have that.

This means that, while it’s possible for other viewpoints to coexist in a pluralist society, it is not possible for cancel culture to do the same. It isn’t a different variety of the same kind of thing. It’s a new kind of thing, a totalitarian ideology that has no self-limiting principle and views any and all dissent as an existential threat because its own truth claims are rooted solely in an appeal to power. For cancel culture, being right and winning are the same thing, and every single debate is a facet of the same existential struggle.

So yes, all ideologies want to cancel something else. But only cancel culture wants to cancel everything else.

Last Thoughts

Lots of responders to the Harper’s letter pointed out that the signers were generally well-off elites. It seemed silly, if not outright hypocritical, for folks like that to whine about cancel culture, right?

My perspective is rather different. As someone who’s just an average Joe with no book deals, no massive social media following, no tenure, nor anything like that: I deeply appreciate someone with J. K. Rowling’s stature trading some of her vast hoard of social capital to keep the horizons of public discourse from narrowing ever further.

And that’s exactly why the social justice left hates her so much. They understand power, and they know how crippling it is to their cause to have someone like her demur from their rigid orthodoxy. Their concern isn’t alleviated because her dissent is gentle and reasonable. It’s worsened, because it makes it even harder to cancel her and underscores just how toxic their totalitarian ideology really is.

I believe in objective reality. I believe in truth. But I’m pragmatic enough to understand that power is real, too. And when someone like J. K. Rowling uses some of her power in defense of liberalism and intellectual diversity, I feel nothing but gratitude for the help.

We who want to defend the ideals of classical liberalism know just how much we could use it.

Note on Critics of Civilization

[Image: Shrapnel from a Unabomber attack. Found on Flickr.]

Came across this article in my Facebook feed: Children of Ted. The lead-in states:

Two decades after his last deadly act of ecoterrorism, the Unabomber has become an unlikely prophet to a new generation of acolytes.

I don’t have a ton of patience for this whole line of reasoning, but it’s trendy enough that I figure I ought to explain why it’s so silly.

Critics of industrialization are far from new, and obviously they have a point. As long as we don’t live in a literal utopia, there will be things wrong with our society. They are unlikely to get fixed without acknowledging them. What’s more, in any sufficiently complex system (and human society is pretty complex), any change is going to have both positive and negative effects, many of which will not be immediately apparent.

So if you want to point out that there are bad things in our society: yes, there are. If you want to point out that this or that particular advance has had deleterious side effects: yes, all changes do. But if you take the position that we would have been better off in a pre-modern, pre-industrial, or even pre-agrarian society: you’re a hypocritical nut job.

I addressed this trendy argument when I reviewed Yuval Noah Harari’s Sapiens: A Brief History of Humankind. Quoting myself:

Harari is all-in for the hypothesis that the Agricultural Revolution was a colossal mistake. This is not a new idea. I’ve come across it several times, and when I did a quick Google search just now I found a 1987 article by Jared Diamond with the subtle title: The Worst Mistake in the History of the Human Race. Diamond’s argument then is as silly as Harari’s argument is now, and it boils down to this: life as a hunter-gatherer is easy. Farming is hard. Ergo, the Agricultural Revolution was a bad deal. If we’d all stuck around being hunter-gatherers we’d be happier.

There are multiple problems with this argument, and the one that I chose to focus on at the time is that it’s hedonistic. Another observation one can make is that if being a hunter-gatherer is so great, nothing’s really stopping Diamond or Harari from living that way. I’m not saying it would be trivial, but for all the folks who sagely nod their head and agree with the books and articles that claim our illiterate ancestors had it so much better… how many are even seriously making the attempt?

The argument I want to make is slightly different from the ones I’ve made before and is based on economics.

Three fundamental macroeconomic concepts are production, consumption, and investment. Every year a society produces a certain amount of stuff (mining minerals, refining them, turning them into goods, growing crops, etc.). All of that stuff is eventually used in one of two ways: either it’s consumed (you eat the crops) or invested (you plant the seeds instead of eating them).
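
In textbook terms (leaving government spending and trade out of it for simplicity), that’s just the national accounts identity:

Y = C + I

where Y is total production (output), C is consumption, and I is investment.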

From a material standpoint, the biggest change in human history has been the dramatic rise in per-capita production over the last few centuries, especially during the Industrial Revolution. This is often seen as a triumph of science, but that is mostly wrong. Virtually none of the important inventions of the Industrial Revolution were produced by scientists or even by laypersons attempting to apply scientific principles. They were almost uniformly invented by self-taught tinkerers who were experimenting with practical rather than theoretical innovations.

Another way to see this is to observe that many of the “inventions” of the Industrial Revolution had been discovered many times in the past. A good example of this is the steam engine. In “Destiny Disrupted,” Tamim Ansary observes:

Often, we speak of great inventions as if they make their own case merely by existing. But in fact, people don’t start building and using a device simply because it’s clever. The technological breakthrough represented by an invention is only one ingredient in its success. The social context is what really determines whether it will take. The steam engine provides a case in point. What could be more useful? What could be more obviously world-changing? Yet the steam engine was invented in the Muslim world over three centuries before it popped up in the West, and in the Muslim world it didn’t change much of anything. The steam engine invented there was used to power a spit so that a whole sheep might be roasted efficiently at a rich man’s banquet. (A description of this device appears in a 1551 book by the Turkish engineer Taqi al-Din.) After the spit, however, no other application for the device occurred to anyone, so it was forgotten.

Ansary understands that the key ingredient in whether or not an invention takes off (like the steam engine in Western Europe in the 18th century) or dies stillborn (like the steam engine in the 15th century Islamic world) is the social context around it.

Unfortunately, Ansary mostly buys into the same absurd notion that I’m debunking, which is that all this progress is a huge mistake. According to him, the Chinese could have invented mechanized industry in the 10th century, but the benevolent Chinese state had the foresight to see that this would take away jobs from its peasant class and, being benevolent, opted instead to keep the Chinese work force employed.

This is absurd. First, because there’s no chance that the Chinese state (or anyone) could have foreseen the success and consequences of mechanized industry in the 10th century and made policy based on it even if they’d wanted to. Second, because the idea that it’s better to keep society inefficient rather than risk unemployment is, in the long run, disastrous.

According to Ansary, the reason that steam engines, mechanized industry, etc. all took place in the West was misanthropic callousness:

Of course, this process [modernization] left countless artisans and craftspeople out of work, but this is where 19th century Europe differed from 10th century China. In Europe, those who had the means to install industrial machinery had no particular responsibility for those whose livelihood would be destroyed by a sudden abundance of cheap machine-made goods. Nor were the folks they affected down-stream–their kinfolk or fellow tribesmen–just strangers who they had never met and would never know by name. What’s more, it was somebody else’s job to deal with the social disruptions caused by widespread unemployment, not theirs. Going ahead with industrialization didn’t signify some moral flaw in them, it merely reflected the way this particular society was compartmentalized. The Industrial Revolution could take place only where certain social preconditions existed and in Europe at that time they happened to exist.

Not a particular moral flaw in the individual actors, Ansary concedes, but still a society that was wantonly reckless and unconcerned with the fate of its poor relative to the enlightened empires that foresaw the Industrial Revolution from end-to-end and declined for the sake of their humble worker class.

The point is that when a society has the right incentives (I’d argue that we need individual liberty via private property and a restrained state alongside compartmentalization) individual innovations are harnessed, incorporated, and built upon in a snowball effect that leads to ever and ever greater productivity. A lot of the productivity comes from the cool new machines, but not all of it.

You see, once you have a few machines that give that initial boost to productivity, you free up people in your society to do other things. When per-capita production is very, very low, everyone has to be a farmer. You can have a tiny minority doing rudimentary crafts, but the vast majority of your people need to work day-in and day-out just to provide enough food for the whole population not to starve to death.

When per-capita production is higher, fewer and fewer people need to do work creating the basic rudiments (food and clothes), and this frees people up to specialize. And specialization is the second half of the secret (along with new machines) that leads to the virtuous cycle of modernization. New tools boost productivity, this frees up new workers to try doing new things, and some of those new things include making even more new tools.

I’m giving you the happy side of the story. Some people go from being farmers to being inventors. I do not mean to deny but simply to balance the unhappy side of the story, which is that some people go from being skilled workers to being menial laborers if a machine renders their skills obsolete. That also happens, although it’s worth noting that the threat from modernization is generally not to the very poorest. Americans like to finger-wag at “sweatshops”, but if your alternative is subsistence farming, then even sweatshops may very well look appealing. Which is why so many of the very poorest keep migrating from farms to cities (in China) and why the opposition to modernization never comes from the poorest classes (who have little to lose) but from the precarious members of the middle class (who do).

So my high-level story of modernization has a couple of key points.

  1. If you want a high standard of living for a society, you need a high level of per capita production.
  2. You get a high level of per capita production through a positive feedback loop between technological innovation and specialization. (This might be asymptotic.)
  3. The benefits of this positive feedback loop include high-end stuff (like modern medicine) and also things we take for granted. And I don’t just mean electricity (although that, too) but also literacy.
  4. The costs of this positive feedback loop include the constant threat of obsolescence for at least some workers, along with greater capacity to destroy on an industrial scale (either the environment or each other).

So the fundamental question you have to ask is whether you want to try and figure out how to manage the costs so that you can enjoy the benefits, or whether the whole project isn’t worth it and we should just give up and start mailing bombs to each other until it all comes crashing down.

The part that really frustrates me the most, the part that spurred me to write this today, is that folks like Ted Kaczynski (the original Unabomber) or John Jacobi (the first of his acolytes profiled in the New York Mag story) are only even possible in a modern, industrialized society.

They are literate, educated denizens of a society that produces so much stuff that lots of its members can survive basically without producing much at all. We live in an age of super abundance, and it turns out that abundance creates its own variety of problems. Obesity is one. Another, apparently, is a certain class of thought that advocates social suicide.

Because that’s what we’re talking about. As much as Diamond and Harari are just toying with the notion because it sells books and makes them look edgy, folks like John Jacobi or Ted Kaczynski would–if they had their way–bring about a world without any of the things that make their elitist theorizing possible in the first place.

It is a great tragedy of human nature that the hard-fought victories of yesterday’s heroic pioneers and risk-takers are casually dismissed by the following generation who don’t even realize that their apparent radicalism is just another symptom of super-abundance.

They will never succeed in reducing humanity to a pre-industrial state, but they–and others who lack the capacity to appreciate what they’ve been given–can still make plenty of trouble. We can only hope that the rising generation will have a more constructive, aspirational, and less suicidal frame of mind.

Hold Up Your Light

You’ve probably seen something like this meme in your own social media network feeds. 

I’m gonna do two things to this meme. First: debunk it. Not because it’s all that notable, but because it’s a pretty typical example of something scary and nasty in our society. And that’s what we’re going to get to second: zooming out from this particular specimen to the whole species.

This meme has the appearance of being some kind of insight or realization into American politics in the context of an important current event (the pandemic), but all of that is just a front. There is no analysis and there is no insight. It’s just a pretext to deliver the punchline: conservatives are selfish and bad. 

You can think of the pseudo-argument as being like the outer coating on a virus. The sole purpose is to penetrate the cell membrane to deliver a payload. It’s a means to an end, nothing more. 

Which means the meme, if you ignore the candy coating, is just a cleverly packaged insult. 

You see, conservatives don’t object to pandemic regulations because they would rather watch their neighbors die than shoulder a trivial inconvenience. They object to pandemic regulations (when they do; I think the existence of objections is exaggerated) because Americans in general and conservatives in particular have an anti-authoritarian streak a mile wide. Anti-authoritarianism is part of who we are. It’s not always reasonable or mature, but then again, it’s not a bad reflex to have, all things considered.

One of the really clever things about the packaging around this insult is that it’s kind of self-fulfilling. It accuses conservatives of being stubborn while it also insults them. What happens to people who are already being a little stubborn if you start insulting them? In most cases, they get more stubborn. Which means every time a conservative gets mad about this meme, a liberal spreading it can think, “Yeah, see? I knew I was right.”

Oh, and if incidentally it happens to actually discourage mask use? Oh well. That’s just collateral damage. Because people who spread memes like this care more about winning political battles than epidemiological ones. 

Liberals who share this meme are guaranteed to get what they really want: that little frisson of superiority. Because they care. They are willing to sacrifice. They are reasonable. So reasonable that they are happy to titillate their own feeling of superiority even if it has the accidental side effect of, you know, undermining compliance with those rules they care so much about. 

I’m being a little cynical here, but only a little. This meme is just one example of countless millions that all have the same basic function: stir controversy. And yes, there are conservative analogs to this liberal meme that do the exact same thing. I don’t see as many of them because I’m quicker to mute fellow conservatives who aggravate me than liberals.

Why did we get here?

You can blame the Russians, if you like. The KGB meddled with American politics as much as they could for decades before the fall of the USSR and Putin was around for that. Why would the FSB (contemporary successor) have given up the old hobby? But the KGB wasn’t ever any good at it, and I’m skeptical that the FSB has cracked the code. I’m sure their efforts don’t help, but I also don’t think they’re largely to blame. 

We’re doing this to ourselves.

The Internet runs on ads, and that means the currency of the Internet is attention. You are not the customer. You are the commodity. That’s not just true of Facebook and it’s not just a slogan. It’s the underlying reality of the Internet, and it sets the incentives that every content producer has to contend with if they want to survive.

The way to harvest attention is through engagement. Every content producer out there wants to hijack your attention by getting you engaged in what they’re telling you. There are a lot of ways to do this. Clickbait headlines hook your curiosity, attractive models wearing very little clothing snag your libido, and so on. But the king of engagement seems to be outrage, and there’s an insidious reason why.

Other attention grabbers work on only a select audience at a time. Attractive male models will grab one half of the audience and attractive female models will grab the other half (bisexuals excepted), but you have to pick either / or.

But outrage lets you engage two audiences with one piece of content. That’s what a meme like this one does, and it’s why it’s so successful. It infuriates conservatives while at the same time titillating liberals. (Again: I could just as easily find a conservative meme that does the opposite.)

When you realize that this meme is actually targeting conservatives and liberals, you also realize that the logical deficiency of the argument isn’t a bug. It’s a feature. It’s just another provocation, the way that some memes intentionally misspell words just to squeeze out a few more interactions, a few more clicks, a few more shares. If you react to this meme with an angry rant, you’re still reacting to this meme. That means you’ve already lost, because you’ve given away your attention. 

A lot of the most dangerous things in our environment aren’t trying to hurt us. Disease and natural disasters don’t have any intentions. And even the evils we do to each other are often byproducts of misaligned incentives. There just aren’t that many people out there who really like hurting other people. Most of us don’t enjoy that at all. So the conventional image of evil–mustache-twirling super-villains who want to murder and torture–is kind of a distraction. The real damage isn’t going to come from the tiny population of people who want to cause harm. It’s going to come from the much, much, much larger population of people who don’t have any particular desire to do harm, but who aren’t really that concerned with avoiding it, either. These people will wreck the world faster than anyone else because none of them are doing that much damage on their own and because none of them are motivated by malice. That makes it easier for them to rationalize their individual contribution to an environment that, in the aggregate, becomes extremely toxic.

At this point, I’d really, really like everyone reading this to take a break and read Scott Alexander’s short story, “Sort by Controversial”. Go ahead. I’ll wait.

Back? OK, good, let’s wrap this up. The meme above is a scissor (that’s from Alexander’s story, if you thought you could skip reading it). The meme works by presenting liberals with an obviously true statement and conservatives with an obviously false statement. For liberals: You should tolerate minor inconveniences to save your neighbors. For conservatives: You should do whatever the government tells you to do without question.

That’s the actual mechanism behind scissors. It’s why half the people think it’s obviously true and the other half think it’s obviously false. They’re not actually reacting to the same issue. But they are reacting to the same meme. And so they fight, and–since they both know their position is obvious–the disagreement rapidly devolves. 

The reality is that most people agree on most issues. You can’t really find a scissor where half the population thinks one thing and half the population thinks the other because there’s too much overlap. But you can present two halves of the population with subtly different messages at the same time such that one half viscerally hates what they hear and the other half passionately loves what they hear, and–as often as not–they won’t talk to each other long enough to realize that they’re not actually fighting over the same proposition.

This is how you destroy a society.

The truth is that it would be better, in a lot of ways, if there were someone out there who was doing this to us. If it was the FSB or China or terrorists or even a scary AI (like a nerdier version of Skynet) there would be some chance they could be opposed and–better still–a common foe to unite against.

But there isn’t. Not really. There’s no conspiracy. There’s no enemy. There’s just perverse incentives and human nature. There’s just us. We’re doing this to ourselves.

That doesn’t necessarily mean we’re doomed, but it does mean there’s no easy or quick solution. I don’t have any brilliant ideas at all other than some basic ones. Start off with: do no harm. Don’t share memes like this. To be on the safe side, maybe just don’t share political memes at all. I’m not saying we should have a law. Just that, individually and of our own free will, we should collectively maybe not.

As a followup: talk to people you disagree with. You don’t have to do it all the time, but look for opportunities to disagree with people in ways that are reasonable and compassionate. When you do get into fights–and you will–try to reach out afterwards and patch up relationships. Try to build and maintain bridges. 

Also: Resist the urge to adopt a warfare mentality. War is a common metaphor–and there’s a reason it works–but if you buy into that way of thinking it’s really hard not to get sucked into a cycle of endless mutual radicalization. If you want a Christian way of thinking about it, go with Ephesians 6:12:

For our struggle is not against enemies of blood and flesh, but against the rulers, against the authorities, against the cosmic powers of this present darkness, against the spiritual forces of evil in the heavenly places.

There are enemies, but the people in your social network are not them. Not even when they’re wrong. Those people are your brothers and your sisters. You want to win them over, not win over them.

Lastly: cultivate all your in-person friendships. Especially the random ones. The coworkers you didn’t pick? The family members you didn’t get to vote on? The neighbors who happen to live next door to you? Pay attention to those little relationships. They are important because they’re random. When you only build relationships with people who share your interests and perspectives, you’re missing out on one of the most fundamental and essential aspects of human nature: you can relate to anyone. Building relationships with people who just happen to be in your life is probably the single most important way we can repair our society, because that’s what society is. It’s not the collection of people we chose that defines our social networks, it’s the extent to which we can form attachments to people we didn’t choose.

What are the politics of your coworkers and family and neighbors? Who cares. Don’t let politics define all your relationships, positive or negative. Find space outside politics, and cherish it. 

Times are dark. They may yet get darker, and none of us can change that individually.

But by looking for the good in the people who are randomly in your life, you can hold up a light.

So do it.

Why I’m An AI Skeptic

There are lots of people who are convinced that we’re a few short years away from economic apocalypse as robots take over all of our jobs. Here’s an example from Business Insider:

Top computer scientists in the US warned over the weekend that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies

These fears are based on hype. The capabilities of real-world AI-like systems are far, far from what non-experts expect from those devices, and the gap between where we are and where people expect to be is vast and–in the short-term at least–mostly insurmountable. 

Let’s take a look at where the hype comes from, why it’s wrong, and what to expect instead. For starters, we’ll take all those voice-controlled devices (Alexa, Siri, Google Assistant) and put them in their proper context.

Voice Controls Are a Misleading Gimmick

Prologue: Voice Controls are Frustrating

A little while back I was changing my daughter’s diaper and thought, hey: my hands are occupied but I’d like to listen to my audiobook. I said, “Alexa, resume playing Ghost Rider on Audible.” Sure enough: Alexa not only started playing my audiobook, but the track picked up exactly where I’d left off on my iPhone a few hours previously. Neat!

There was one problem: I listen to my audiobooks at double speed, and Alexa was playing it at normal speed. So I said, “Alexa, double playback speed.” Uh-oh. Not only did Alexa not increase the playback speed, but it did that annoying thing where it starts prattling on endlessly about irrelevant search results that have nothing to do with your request. I tried five or six different varieties of the command and none of them worked, so I finally said, “Alexa, shut up.”

This is my most common command to Alexa. And also Siri. And also the Google Assistant. I hate them all.

They’re supposed to make life easier but, as a general rule, they do the exact opposite. When we got our new TV I connected it to Alexa because: why not? It was kind of neat to turn the TV on with a voice command, but it wasn’t actually very useful. Voice commands didn’t work for things like switching video inputs, so you still had to find the remote anyway, and the command to turn the TV off never worked, even when the volume was pretty low.

Then one day the TV stopped working with Alexa. Why? Who knows. I have half-heartedly tried to fix it six or seven times over the last year to no avail. I spent more time setting up and unsuccessfully debugging the connection than I ever saved. 

This isn’t a one-off exception; it’s the rule. Same thing happened with a security camera I use as a baby monitor. For a few weeks it worked with Alexa until it didn’t. I got that one working again, but then it broke again and I gave up. Watching on the Alexa screen wasn’t ever really more useful than watching on my phone anyway.  

So what’s up? Why is all this nifty voice-activated stuff so disappointing?

If you’re like me, you were probably really excited by all this voice-activation stuff when it first started to come out because it reminded you of Star Trek: The Next Generation. And if you’re like me, you also got really annoyed and jaded after actually trying to use some of this stuff when you realized it’s all basically an inconvenient, expensive, privacy-smashing gimmick.  

Before we get into that, let me give y’all one absolutely vital caveat. The one true and good application of voice control technology is accessibility. For folks who are blind or can’t use keyboards or mice or other standard input devices, this technology is not a gimmick at all. It’s potentially life-transforming. I don’t want any of my cynicism to take away from that really, really important exception.

But that’s not how this stuff is being packaged and marketed to the broader audience, and it’s that–the explicit and implicit promises, and all the predictions people build on top of them–that I want to address.

CLI vs. GUI

To put voice command gimmicks in their proper context, you have to go back to the beginning of popular user interfaces, and the first of those was the CLI: Command Line Interface. A CLI is a screen, a keyboard, and a system that allows you to type commands and see feedback. If you’re tech savvy then you’ve used the command line (AKA terminal) on Mac or Unix machines. If you’re not, then you’ve probably still seen the Windows command prompt at some point. All of these are different kinds of CLI. 

In the early days of the PC (note: I’m not going back to the ancient days of punch cards, etc.) the CLI was all you had. Eventually this changed with the advent of the GUI: graphical user interface.

The GUI required new technology (the mouse), better hardware (to handle the graphics) and also a whole new way of thinking about the user interaction with the computer. Instead of thinking about commands, the GUI emphasizes objects. In particular, the GUI has used a kind of visual metaphor from the very beginning. The most common of these are icons, but it goes deeper than that. Buttons to click, a “desktop” as a flat surface to organize things, etc. 

Even though you can actually do a lot of the same things in either a CLI or a GUI (like moving or renaming files), the whole interaction paradigm is different. You have concepts like clicking, double-clicking, right-clicking, dragging-and-dropping in the GUI that just don’t have any analog in the CLI.

It’s easy to think of the GUI as superior to the CLI since it came later and is what most people use most of the time, but that’s not really the case. Some things are much better suited to a GUI, including some really obvious ones like photo and video editing. But there are still plenty of tasks that make more sense in a CLI, especially related to installing and maintaining computer systems. 

The biggest difference between a GUI and a CLI is feedback. When you interact with a GUI you get constant, immediate feedback to all of your actions. This in turn aids in discoverability. What this means is that you really don’t need much training to use a GUI. By moving the mouse around on the screen, you can fairly easily see what commands are available, for example. This means you don’t need to memorize how to execute tasks in a GUI. You can memorize the shortcuts for copy and paste, but you can also click on “Edit” and find them there. (And if you forget they’re under the edit menu, you can click File, View, etc. until you find them.)

The feedback and discoverability of the GUI is what has made it the dominant interaction paradigm. It’s much easier to get started and much more forgiving of memory lapses. 

Enter the VUI

When you see commercials of attractive, well-dressed people interacting with voice assistants, the most impressive thing is that they use normal-sounding commands. The interactions sound conversational. This is what sets the (false) expectation that interacting with Siri is going to be like interacting with the computer on board the Enterprise (NCC-1701-D). That way lies frustration and madness, however. A better way to think of voice control is as a third user interface paradigm, the VUI or voice user interface.

There is one really cool aspect of a VUI, and that’s the ability of the computer to transcribe spoken words to written text. That’s the magic. 

However, once you account for that you realize that the rest of the VUI experience is basically a CLI… without a screen. Which means: without feedback and discoverability.

Those two traits that make the GUI so successful for everyday life are conspicuously absent from a VUI. Just like when interacting with a CLI, using a VUI successfully means that you have to memorize a bunch of commands and then invoke them just so. There is a little more leeway with a VUI than with a CLI, but not much. And that leeway is at least partially offset by the fact that when you type in a command at the terminal, you can pause and re-read it to see if you got it all right before you hit enter and commit. You can’t even do that with a VUI. Once you open your mouth and start talking, your commands are being executed (or, more often than not, failing to execute) on the fly.
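To make that concrete, here is a toy sketch of what "memorize commands and invoke them just so" looks like in code. This is purely illustrative (a guess at the general shape of the problem, not how Alexa or Siri are actually built), and the command phrases are made up:

```python
# Toy model of a voice assistant's command handling (hypothetical, for illustration).
# Assume the speech-to-text step already happened and gave us a transcript.

COMMANDS = {
    "resume playing ghost rider on audible": "audible.resume(book='Ghost Rider')",
    "set playback speed to double": "audible.set_speed(2.0)",
}

def handle_utterance(transcript: str) -> str:
    """Dispatch on the exact (normalized) text, like a CLI matching a known command."""
    action = COMMANDS.get(transcript.lower().strip())
    if action is None:
        # The spoken-word equivalent of "command not found", or worse,
        # a rambling list of irrelevant search results.
        return "Sorry, I don't know that one."
    return f"Would execute: {action}"

print(handle_utterance("Resume playing Ghost Rider on Audible"))  # recognized
print(handle_utterance("Double playback speed"))                  # not the memorized phrasing, so it fails
```

Real assistants allow a bit more slack than exact string matching, but the user is still stuck guessing which phrasings the developers anticipated, which is exactly the CLI experience with a microphone instead of a keyboard.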

This is all bad enough, but in addition to basically being 1970s tech (except for the transcription part), the VUI faces the additional hurdle of being held up against an unrealistic expectation because it sounds like natural speech. 

No one sits down in front of a terminal window and expects to be able to type in a sentence or two of plain English and get the computer to do their bidding. Type "what time is it?" at a Bash prompt and it doesn't go well; Bash has no idea what you're asking.

Even non-technical folks understand that you have to have a whole skillset to be able to interact with a computer using the CLI. That’s why the command line is so intimidating for so many folks.

But the thing is, if you ask Siri (or whatever), “What time is it?” you’ll get an answer. This gives the impression that–unlike a CLI–interacting with a VUI won’t require any special training. Which is to say: that a VUI is intelligent enough to understand you.

It’s not, and it doesn’t. 

A VUI is much closer to a CLI than to a GUI, and our expectations for it should be set at the 1970s level instead of, as with a GUI, more around the 1990s. Aside from the transcription side of things, and with a few exceptions for special cases, a VUI is a big step backwards in usability.

AI vs. Machine Learning

Machine Learning Algorithms are Glorified Excel Trendlines

When we zoom out to get a larger view of the tech landscape, we find basically the same thing: mismatched expectations and gimmicks that can fool people into thinking our technology is much more advanced than it really is.

As one example of this, consider the field of machine learning, which is yet another giant buzzword. Ostensibly, machine learning is a subset of artificial intelligence (the Grand High Tech Buzzword). Specifically, it’s the part related to learning. 

This is another misleading concept, though. The word “learning” carries an awful lot of hidden baggage. A better way to think of machine learning is just: statistics. 

If you’ve worked with Excel at all, you probably know that you can insert trendlines into charts. Without going into too much detail, an Excel trendline is an application of the simplest and most commonly used form of statistical analysis: ordinary least-squares regression. There are tons of guides out there that explain the concept; my point is just that nobody thinks the ability to click “show trendline” on an Excel chart means the computer is “learning” anything. There’s no “artificial intelligence” at play here, just a fairly simple set of steps to solve a minimization problem.
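In fact, you can reproduce Excel's trendline in a couple of lines. Here's a minimal sketch using numpy's least-squares fit on some made-up data:

```python
import numpy as np

# A handful of (x, y) points, like two columns in a spreadsheet (made-up data).
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

# Excel's linear trendline is ordinary least squares: pick the slope and
# intercept that minimize the sum of squared errors.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"trendline: y = {slope:.2f}x + {intercept:.2f}")
```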

Although the bundle of algorithms available to data scientists doing machine learning is much broader and more interesting, they’re the same kind of thing. Random forests, support vector machines, naive Bayes classifiers: they’re all optimization problems, fundamentally the same as OLS regression (or other, slightly fancier statistical techniques like logistic regression).
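The sameness even shows up in the code. In a library like scikit-learn, swapping ordinary least squares for a random forest is a one-line change, because both follow the same recipe: fit a model to training data by minimizing error, then predict. A small sketch with synthetic data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data: y is roughly 3x plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 1, size=200)

# Same workflow for the "simple statistics" model and the "machine learning" model:
# tune parameters to fit the training data, then predict.
for model in (LinearRegression(), RandomForestRegressor(n_estimators=50, random_state=0)):
    model.fit(X, y)
    print(type(model).__name__, "predicts", round(float(model.predict([[5.0]])[0]), 2), "at x = 5")
```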

As with voice controlled devices, you’ll understand the underlying tech a lot better if you replace the cool, fancy expectations (like the Enterprise’s computer) with a much more realistic example (a command prompt). Same thing here. Don’t believe the machine learning hype. We’re talking about adding trendlines to Excel charts. Yeah, it’s fancier than that, but that example will give you the right intuition about the kind of activity that’s going on.

Last thing: don’t read any of this as me knocking machine learning. I love me some machine learning. No, really, I do. As statistical tools the algorithms are great and certainly much more capable than an Excel trendline. This is just about getting your intuition a little more in line with what they are in a philosophical sense.

Are Robots Coming to Take Your Job?

So we’ve laid some groundwork by explaining how voice control services and machine learning aren’t as cool as the hype would lead you to believe. Now it’s time to get to the main event and address the questions I started this post with: are we on the cusp of real AI that can replace you and take your job?

You could definitely be forgiven for thinking the answer is an obvious “yes”. After all, it was a really big deal when Deep Blue beat Garry Kasparov in 1997, and since then there’s been a litany of John Henry moments. So-called AI has won at Go and Jeopardy, for example. Impressive, right? Not really.

First, let me ask you this. If someone said that a computer beat the reigning world champion of competitive memorization… would you care? Like, at all? 

Because yes, competitive memorization (aka memory sport) is a thing. Players compete to see how fast they can memorize the sequence of a randomly shuffled deck of cards, for example. Thirteen seconds is a really good time. If someone bothered to build a computer to beat that (something any tinkerer could do in a long weekend with no more specialized equipment than a smartphone) we wouldn’t be impressed. We’d yawn. 

Memorizing the order of a deck of cards is a few bytes of data. Not really impressive for computers that store data by the terabyte and measure read and write speeds in gigabytes per second. Even the visual recognition part–while certainly tougher–is basically a solved problem. 
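The arithmetic backs that up. A shuffled deck is one ordering out of 52!, so storing it takes:

```python
import math

orderings = math.factorial(52)   # number of possible orderings of a 52-card deck
bits = math.log2(orderings)      # bits needed to identify one of them
print(f"{bits:.0f} bits, or about {bits / 8:.0f} bytes")   # roughly 226 bits, about 28 bytes
```

Twenty-eight bytes is less data than this sentence.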

With a game like chess–where the rules are perfectly deterministic and the playspace is limited–it’s just not surprising or interesting for computers to beat humans. In one important sense of the word, chess is just a grandiose version of Tic-Tac-Toe. What I mean is that there are only a finite number of moves to make in either Tic-Tac-Toe or chess. The number of moves in Tic-Tac-Toe is very small, and so it is an easily solved game. That’s the basic plot of WarGames and the reason nobody enjoys playing Tic-Tac-Toe after they learn the optimal strategy when they’re like seven years old. Chess is not solved yet, but that’s just because the number of moves is much larger. It’s only a matter of time until we brute-force the solution to chess. Given all this, it’s not surprising that computers do well at chess: it is the kind of thing computers are good at. Just like memorization is the kind of thing computers are good at.
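To see what "solved" means in practice, here's a brute-force minimax search that evaluates Tic-Tac-Toe completely. With perfect play the game is a draw, which is exactly why nobody is impressed when a computer never loses at it. (Chess is the same idea in principle; the game tree is just far too large to enumerate this way today.)

```python
from functools import lru_cache

# Board is a 9-character string: "." for empty, "X" or "O" otherwise.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of the position for X with perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, cell in enumerate(board) if cell == "."]
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))  # 0: with perfect play from both sides, Tic-Tac-Toe is a draw
```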

Now, the success of computers at playing Go is much more impressive. This is a case where the one aspect of artificial intelligence with any genuine promise–machine learning–really comes to the fore. Machine learning is overhyped, but it’s not just hyped. 

On top of successfully learning to play Go better than a human, machine learning was also used to dramatically increase the power of automated language translation. So there’s some exciting stuff happening here, but Go is still a nice, clean system with orderly rules that is amenable to automation in ways that real life–or even other games, like Starcraft–are not.

So let’s talk about Starcraft for a moment. I recently read a PC Magazine article that does a great job of providing a real-life example: it covers the controversy over an AI that managed to defeat top-ranked human players in Starcraft II. Basically, a team created an AI (AlphaStar) to beat world-class Starcraft II players. Since Starcraft is a much more complex game (dozens of unit types, real-time interaction, etc.) this sounds really impressive. The problem is: they cheated.

When a human plays Starcraft, part of what they’re doing is looking at the screen and interpreting what they see. This is hard. So AlphaStar skipped it. Instead of building a system that could point a camera at the screen and use visual recognition to identify the units and terrain, the team (1) built AlphaStar to play only on one map, over and over again, so the terrain never changed, and (2) tapped into the Starcraft game data to get the exact location of all of its units directly. Not only did this bypass the tricky visual-interpretation problem, it also meant that AlphaStar always knew where every single unit was at every single point in time (while human players can only see what’s on the screen and have to scroll around the map).
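Here's a toy illustration of the difference. These data structures are hypothetical (they're not DeepMind's actual interface to the game); the point is just the gap between seeing what the camera shows and being handed exact coordinates for everything:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Unit:
    name: str
    x: float
    y: float

@dataclass
class Viewport:
    """The rectangle of the map currently on screen."""
    left: float
    top: float
    width: float
    height: float

    def contains(self, u: Unit) -> bool:
        return (self.left <= u.x <= self.left + self.width and
                self.top <= u.y <= self.top + self.height)

def human_observation(units: List[Unit], camera: Viewport) -> List[Unit]:
    # A player bound by the screen only sees what the camera is pointed at
    # and has to scroll around to see the rest.
    return [u for u in units if camera.contains(u)]

def raw_state_observation(units: List[Unit]) -> List[Unit]:
    # The shortcut: exact positions for everything, everywhere, all the time,
    # with no screen-reading step in between.
    return list(units)

army = [Unit("marine", 10, 10), Unit("zealot", 500, 380), Unit("stalker", 900, 40)]
camera = Viewport(left=0, top=0, width=320, height=240)
print(len(human_observation(army, camera)), "of", len(raw_state_observation(army)), "units visible on screen")
```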

You could argue that Deep Blue didn’t use visual recognition either; the moves were fed into the computer directly. The difference is that Deep Blue and its human opponent were working from exactly the same information about the game, so the playing field was even. Not so with AlphaStar.

That’s why the “victory” of AlphaStar over world-class Starcraft players was so controversial. The deck was stacked. The AI could see the entire map at once (something the game itself doesn’t allow, never mind human capacity), and it won only by playing on one map over and over again. If you had moved AlphaStar to a different map, world-class players could have beaten it easily. Practically anyone could have beaten it easily.

So here’s the common theme between voice commands and AlphaStar: as soon as you take one step off the beaten path, they break. Just like a CLI, a VUI (like Alexa or Siri) breaks as soon as you enter a command it doesn’t perfectly expect. And AlphaStar goes from world-class pro to bumbling child if you swap from a level it’s been trained on to one it hasn’t.

The thing to realize is that this limitation isn’t just about how these programs perform today. It’s about the fundamental expectations we should have for them, not just now but ever.

Easy Problems and Hard Problems

This leads me to the underlying reason for all the hype around AI. It’s very, very difficult for non-experts to tell the difference between problems that are trivial and problems that are basically impossible. 

For a good overview of the concept, check out Range by David Epstein. He breaks the world into “kind problems” and “wicked problems”. Kind problems are problems like chess or playing Starcraft again and again on the same level with direct access to unit location. Wicked problems are problems like winning a live debate or playing Starcraft on a level you’ve never seen before, maybe with some new units added in for good measure.

If your job involves kind problems–if it’s repeatable, with simple rules for success and failure–then a robot might steal your job. But if your job involves wicked problems–if you have to figure out a new approach to a novel situation on a regular basis–then your job is safe now and for the foreseeable future.

This doesn’t mean nobody should be worried. The story of technological progress has largely been one of automation. We used to need 95% or more of the human population to grow food just so we’d have enough to eat. Thanks to automation and labor-augmentation, that proportion is down to the single digits. Every other job that exists, other than subsistence farming, exists because of advances in farming technology (and other labor-saving advances). In the long run: that’s great!

In the short run, it can be traumatic both individually and collectively. If you’ve invested decades of your life getting good at one of the tasks that robots can do, then it’s devastating to suddenly be told your skills–all that effort and expertise–are obsolete. And when this happens to large numbers of people, the result is societal instability.

So it’s not that the problem doesn’t exist. It’s more that it’s not a new problem, and it’s one we should manage as opposed to “solve”. The reason is that the only way to solve the problem would be to halt forward progress. And unless you think going back to subsistence farming or hunter-gathering sounds like a good idea (and nobody really believes that, no matter what they say), we should look forward with optimism to the future developments that will free up more and more of our time and energy for work that isn’t automatable.

But we do need to manage that progress to mitigate the personal and social costs of modernization. Because there are costs, and even if they are ultimately outweighed by the benefits, that doesn’t mean they just disappear.

I Want to be Right

Not long ago I was in a Facebook debate and my interlocutor accused me of just wanting to be right. 

Interesting accusation.

Of course I want to be right. Why else would we be having this argument? But, you see, he wasn’t accusing me of wanting to be right but of wanting to appear right. Those are two very different things. One of them is just about the best reason for debate and argument you can have. The other is just about the worst. 

Anyone who has spent a lot of time arguing on the Internet has asked themselves what the point of it all is. The most prominent theory is the spectator theory: you will never convince your opponent, but you might convince the folks watching. There’s merit to that, but it also rests on a questionable assumption, which is that the default purpose is to win the argument by persuading the other person and (when that fails) we need to find some alternative. OK, but I question whether we’ve landed on the right alternative.

I don’t think the primary importance of a debate is persuading spectators. The most important person for you to persuade in a debate is yourself.

It’s a truism these days that nobody changes their mind, and we all like to one-up each other with increasingly cynical takes on human irrationality and intractability. The list of cognitive biases on Wikipedia is getting so long that you start to wonder how humans manage to reason at all. Moral relativism and radical non-judgmentalism are grist for yet more “you won’t believe this” headlines, and of course there’s the holy grail of misanthropic cynicism: the argumentative theory. As Haidt summarizes one scholarly article on it:

Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments. That’s why they call it The Argumentative Theory of Reasoning. So, as they put it, “The evidence reviewed here shows not only that reasoning falls quite short of reliably delivering rational beliefs and rational decisions. It may even be, in a variety of cases, detrimental to rationality. Reasoning can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions. This explains the confirmation bias, motivated reasoning, and reason-based choice, among other things.”

Jonathan Haidt in The Righteous Mind

Reasoning was not designed to pursue truth.

Well, there you have it. Might as well just admit that Randall Munroe was right and all pack it in, then, right?

Not so fast.

This whole line of research has run away with itself. We’ve sped right past the point of dispassionate analysis and deep into sensationalism territory. Case in point: the backfire effect.

According to RationalWiki, “the effect is claimed to be that when, in the face of contradictory evidence, established beliefs do not change but actually get stronger.” The article goes on:

The backfire effect is an effect that was originally proposed by Brendan Nyhan and Jason Reifler in 2010 based on their research of a single survey item among conservatives… The effect was subsequently confirmed by other studies.

Entry on RationalWiki

If you’ve heard of it, it might be from a popular post by The Oatmeal. Take a minute to check it out. (I even linked to the clean version without all the profanity.)

Wow. Humans are so irrational that not only can you not convince them with facts, but if you present facts they believe the wrong stuff even more.

Of course, it’s not really “humans” that are this bad at reasoning. It’s some humans. The original research was based on conservatives, and the implicit subtext behind articles like the one on RationalWiki is that they are helplessly mired in irrational biases, but we know how to conquer our biases, or at the very least make some small headway that separates us from the inferior masses. (Failing that, at least we’re raising awareness!) But I digress.

The important thing isn’t that this cynicism is always covertly at least a little one-sided; it’s that the original study has been really hard to replicate. From an article on Mashable:

[W]hat you should keep in mind while reading the cartoon is that the backfire effect can be hard to replicate in rigorous research. So hard, in fact, that a large-scale, peer-reviewed study presented last August at the American Political Science Association’s annual conference couldn’t reproduce the findings of the high-profile 2010 study that documented backfire effect.

Uh oh. Looks like the replication crisis–which has been just one part of the larger we-can’t-really-know-anything fad–has come around to bite the hand that feeds it.

This whole post (the one I’m writing right now) is a bit weird for me, because when I started blogging my central focus was epistemic humility. And it’s still my driving concern. If I have a philosophical core, that’s it. And epistemic humility is all about the limits of what we (individually and collectively) can know. So, I never pictured myself being the one standing up and saying, “Hey, guys, you’ve taken this epistemic humility thing too far.” 

But that’s exactly what I’m saying.

Epistemic humility was never supposed to be a kind of “we can never know the truth for absolute certain so may as well give up” fatalism. Not for me, anyway. It was supposed to be about being humble in our pursuit of truth. Not in saying that the pursuit was doomed to fail so why bother trying.

I think even a lot of the doomsayers would agree with that. I quoted Jonathan Haidt on the argumentative theory earlier, and he’s one of my favorite writers. I’m pretty sure he’s not an epistemological nihilist. RationalWiki may get a little carried away with stuff like the backfire effect (they gave no notice on their site that other studies have failed to replicate the effect), but evidently they think there’s some benefit to telling people about it. Else, why bother having a wiki at all?

Taken to its extreme, epistemic humility is just as self-defeating as subjectivism. Subjectivism–the idea that truth is ultimately relative–is incoherent because if you say “all truth is relative” you’ve just made an objective claim. That’s the short version. For the longer version, read Thomas Nagel’s The Last Word.

The same goes for all this breathless humans-are-incapable-of-changing-their-minds stuff. Nobody who does all the hard work of researching and writing and teaching can honestly believe that in their bones. At least, not if you think (as I do) that a person’s actions are the best measure of their actual beliefs, rather than their own (unreliable) self-assessments.

Here’s the thing: if you agree with the basic contours of epistemic humility–with most of the cognitive biases and even the argumentative hypothesis–you end up at a place where you think human belief is a reward-based activity like any other. We are not truth-seeking machines that automatically and objectively crunch sensory data to manufacture beliefs that are as true as possible given the input. Instead, we have instrumental beliefs. Beliefs that serve a purpose. A lot of the time that purpose is “make me feel good”, as in “rationalize what I want to do already” or “help me fit in with this social clique”.

I know all this stuff, and my reaction is: so what?

So what if human belief is instrumental? Because you know what, you can choose to evaluate your beliefs by things like “does it match the evidence?” or “is it coherent with my other beliefs?” Even if all belief is ultimately instrumental, we still have the freedom to choose to make truth the metric of our beliefs. (Or, since we don’t have access to truth, surrogates like “conformance with evidence” and “logical consistency”.)

Now, this doesn’t make all those cognitive biases just go away. This doesn’t disprove the argumentative theory. Let’s say it’s true. Let’s say we evolved the capacity to reason in order to make convincing (rather than true) arguments. OK. Again I ask: so what? Who cares why we evolved the capacity? Now that we have it, we get to decide what to do with it. I’m pretty sure we did not evolve opposable thumbs for the purpose of texting on touch-screen phones. Yet here we are, and they seem adequate to the task.

What I’m saying is this: epistemic humility and the associated body of research tell us that humans don’t have to conform their beliefs to truth, that we are incapable of conforming our beliefs perfectly to truth, and that it’s hard to conform our beliefs even mostly to truth. OK. But nowhere is it written that we can make no progress at all. Nowhere is it written that we cannot try, or that–when we try earnestly–we are doomed to make absolutely no headway at all.

I want to be right. And I’m not apologizing for that. 

So how do Internet arguments come into this? One way that we become right–individually and collectively–is by fighting over things. It’s pretty similar to the theory behind our adversarial criminal justice system. Folks who grow up in common law countries (of which the US is one) might not realize that’s not the way all criminal justice systems work. The other major alternative is the inquisitorial system (which is used in countries like France and Italy).

In an inquisitorial system, the court is the one that conducts the investigation. In an adversarial system the court is supposed to be neutral territory where two opposing camps–the prosecution and the defense–lay out their case. That’s where the “adversarial” part comes in: the prosecutors and defenders are the adversaries. In theory, the truth arises from the conflict between the two sides. The court establishes rules of fair play (sharing evidence, not lying) and–within those bounds–the prosecutors’ and defenders’ job is not to present the truest argument but the best argument for their respective side. 

The analogy is not a perfect one, of course. For one thing, we also have a presumption of innocence in the criminal justice system because we’re not evaluating ideas, we’re evaluating people. That presumption of innocence is crucial in a real criminal justice system, but it has no exact analogue in the court of ideas.

For another thing, we have a judge to oversee trials and enforce the rules. There’s no impartial judge when you have a debate with randos on the Internet. This is unfortunate, because it means that if we don’t police ourselves in our debates, the whole process breaks down. There is no recourse.

When I say I want to be right, what am I saying, in this context? I’m saying that I want to know more at the end of a debate than I did at the start. That’s the goal. 

People like to say you never change anyone’s mind in a debate. What they really mean is that you never reverse someone’s mind in a debate. And, while that’s not literally true, it’s pretty close. It’s really, really rare for someone to go into a single debate as pro-life (or whatever) and come out as pro-choice (or whatever). I have never seen someone make a swing that dramatic in a single debate. I certainly never have.

But it would be absurd to say that I never “changed my mind” because of the debates I’ve had about abortion. I’ve changed my mind hundreds of times. I’ve abandoned bad arguments and adopted or invented new ones. I’ve learned all kinds of facts about law and history and biology that I didn’t know before. I’ve even changed my position many times. Just because the positions were different variations within the theme of pro-life doesn’t mean I’ve never “changed my mind”. If you expect people to walk in with one big, complex set of ideas that are roughly aligned with a position (pro-life, pro-gun) and then walk out of a single conversation with a whole new set of ideas that are aligned under the opposite position (pro-choice, anti-gun), then you’re setting the bar way too high.

But all of this only works if the folks having the argument follow the rules. And–without a judge to enforce them–that’s hard.

This is where the other kind of wanting to “be right” comes in. One of the most common things I see in a debate (whether I’m having it or not) is that folks want to avoid having to admit they were wrong.

First, let me state emphatically that if you want to avoid admitting you were wrong you don’t actually care about being right in the sense that I mean it. Learning where you are wrong is just about the only way to become right! People who really want to “be right” embrace being wrong every time it happens because those are the stepping stones to truth. Every time you learn a belief or a position you took was wrong, you’re taking a step closer to being right.

But–going back to those folks who want to avoid appearing wrong–they don’t actually want to be right. They just want to appear right. They’re not worried about truth. They’re worried about prestige. Or ego. Or something else.

If you don’t care about being right and you only care about appearing right, then you don’t care about truth either. And these folks are toxic to the whole project of adversarial truth-seeking. Because they break the rules. 

What are the rules? Basic stuff like don’t lie, debate the issue not the person, etc. Maybe I’ll come up with a list. There’s a whole set of behaviors that can make your argument appear stronger while in fact all you’re doing is peeing in the pool for everyone who cares about truth. 

If you care about being right, then you will give your side of the debate your utmost. You’ll present the best evidence, use the tightest arguments, and throw in some rhetorical flourishes for good measure. But if you care about being right, then you will not break the rules to advance your argument (No lying!) and you also won’t just abandon your argument in midstream to switch to a new one that seems more promising. Anyone who does that–who swaps their claims mid-stream whenever they see one that shows a more promising temporary advantage–isn’t actually trying to be right. They’re trying to appear right. 

They’re not having an argument or a debate. They’re fighting for prestige or protecting their ego or doing something else that looks like an argument but isn’t actually one. 

I wrote this partially to vent. Partially to organize my feelings. But also to encourage folks not to give up hope, because if you believe that nobody cares about truth and changing minds is impossible then it becomes a self-fulfilling prophecy.

And you want to know the real danger of relativism and post-modernism and any other truth-averse ideology? Once truth is off the table as the goal, the only thing remaining is power.

As long as people believe in truth, there is a fundamentally cooperative aspect to all arguments. Even if you passionately think someone is wrong, if you both believe in truth then there is a sense in which you’re playing the same game. There are rules. And, more than rules, there’s a common last resort you’re both appealing to. No matter how messy it gets, and despite the fact that nobody ever has direct, flawless access to truth, even the bitterest ideological opponents have that shred of common ground: they both think they are right, which means they both think “being right” is a thing you can, and should, strive to be.

But if you set that aside, then you sever the last thread connecting opponents, and they become nothing but enemies. If truth is not a viable recourse, all that is left is power. You have to destroy your opponent. Metaphorically at first. Literally if that fails. Nowhere does it say on the packaging of relativism “May lead to animosity and violence”. It’s supposed to do the opposite. It’s advertised as leading to tolerance and non-judgmentalism, but by taking truth off the table it does the opposite.

Humans are going to disagree. That’s inevitable. We will come into conflict. With truth as an option, there is no guarantee that the conflict will be non-violent, but non-violence is always an option. The conflict can even exist in an environment of friendship, respect, and love. It’s possible for people who like and admire each other to have deep disagreements and to discuss them sharply, but in a context of that mutual friendship. It’s not easy, but it’s possible.

Take truth off the table, and that option disappears. This doesn’t mean we go straight from relativism to mutual annihilation, but it does mean the only thing left is radical partisanship where each side views the other as an alien “other”. Maybe that leads to violence, maybe not. But it can’t lead to friendship, love, and unity in the midst of disagreement.

So I’ll say it one more time: I want to be right.

I hope you do, too.

If that’s the case, then there’s a good chance we’ll get into some thundering arguments. We’ll say things we regret and offend each other. Nobody is a perfect, rational machine. Biases don’t go away and ego doesn’t disappear just because we are searching for truth. So we’ll make mistakes and, hopefully, we’ll also apologize and find common ground. We’ll change each other’s minds and teach each other things and grudgingly earn each other’s respect. Maybe we’ll learn to be friends long before we ever agree on anything.

Because if I care about being right and you care about being right, then we already have something deep inside of us that’s the same. And even if we disagree about every single other thing, we always will have that.

In Favor of Real Meritocracy

The meritocracy has come in for a lot of criticism recently, basically in the form of two arguments. 

There’s a book by Daniel Markovits called The Meritocracy Trap that basically argues that meritocracy makes everyone miserable and unequal by creating a horrific grind to get into the most elite colleges and then, after you get your elite degree, another grind of 60- to 100-hour weeks to maintain your position at the top of the corporate hierarchy.

There was also a very interesting column by Ross Douthat that makes a separate but related point. According to Douthat, the WASP-y elite that dominated American society up until the early 20th century decided to “dissolve their own aristocracy” in favor of a meritocracy, but the meritocracy didn’t work out as planned because it sucks talent away from small locales (killing off the diverse regional cultures that we used to have) and because:

the meritocratic elite inevitably tends back toward aristocracy, because any definition of “merit” you choose will be easier for the children of these self-segregated meritocrats to achieve.

What Markovits and Douthat both admit without really admitting it is one simple fact: the meritocracy isn’t meritocratic.

Just to be clear, I’ll adopt Wikipedia’s definition of a meritocracy for this post:

Meritocracy is a political system in which economic goods and/or political power are vested in individual people on the basis of talent, effort, and achievement, rather than wealth or social class. Advancement in such a system is based on performance, as measured through examination or demonstrated achievement.

When people talk about meritocracy today, they’re almost always referring to the Ivy League and then–working forward and backward–to the kinds of feeder schools and programs that prepare kids to make it into the Ivy League and the types of high-powered jobs (and the culture surrounding them) that Ivy League students go on to after they graduate.

My basic point is a pretty simple one: there’s nothing meritocratic about the Ivy League. The old WASP-y elite did not, as Douthat put it, “dissolve”. It just went into hiding. Americans like to pretend that we’re a classless society, but it’s a fiction. We do have class. And the nexus for class in the United States is the Ivy League. 

If Ivy League admission were really meritocratic, it would be based as much as possible on objective admission criteria. This is hard to do, because even when you pick something that is in a sense objective–like SAT scores–you can’t overcome the fact that wealthy parents can and will hire tutors to train their kids to artificially inflate their scores relative to what an equally bright, hard-working lower-class student can attain without all the expensive tutoring and practice tests.

Still, that’s nothing compared to the way that everything else that goes into college admissions–especially the litany of awards, clubs, and activities–tilts the game in favor of kids with parents who (1) know the unspoken rules of the game and (2) have cash to burn playing it. An expression I’ve heard is that the Ivy League is basically a privilege-laundering racket. It has a facade of being meritocratic, but the game is rigged so that all it really does is perpetuate social class. “Legacy” admissions are just the tip of the iceberg in that regard.

What’s even more outrageous than the fiction of meritocratic admission to the Ivy League (or other elite, private schools) is the equally absurd fiction that students with Ivy League degrees have learned some objectively quantifiable skillset that students from, say, state schools have not. There’s no evidence for this. 

So students from outside the social elite face double discrimination: first, because they don’t have an equal chance to get into the Ivy League, and second, because they then can’t compete with Ivy League graduates on the job market. It doesn’t matter how hard you work or how much you learn; your State U degree is never going to stand out on a resume the way Harvard or Yale does.

There’s nothing meritocratic about that. And that’s the point. The Ivy League-based meritocracy is a lie.

So I empathize with criticisms of American meritocracy, but it’s not actually a meritocracy they’re criticizing. It’s a sham meritocracy that is, in fact, just a covert class system. 

The problem is that if we blame the meritocracy and seek to circumvent it, we’re actually going to make things worse. I saw a WaPo headline that said “No one likes the SAT. It’s still the fairest thing about admissions.” And that’s basically what I’m saying: “objective” scores can be gamed, but not nearly as much as the qualitative stuff. If you got rid of the SAT in college admissions, you would make the process less meritocratic and also less fair. At least with the SAT, someone from outside the elite social classes has a chance to compete. Without that? Forget it.

Ideally, we should work to make our system a little more meritocratic by downplaying prestige signals like Ivy League degrees and emphasizing objective measurements more. But we’re never going to eradicate class entirely, and we shouldn’t go to radical measures to attempt it. Pretty soon, the medicine ends up worse than the disease if we go that route. That’s why you end up with absurd, totalitarian arguments that parents shouldn’t read to their children and that having an intact, loving, biological family is cheating. That way lies madness.

We should also stop pretending that our society is fully meritocratic. It’s not. And the denial is perverse. This is where Douthat was right on target:

[E]ven as it restratifies society, the meritocratic order also insists that everything its high-achievers have is justly earned… This spirit discourages inherited responsibility and cultural stewardship; it brushes away the disciplines of duty; it makes the past seem irrelevant, because everyone is supposed to come from the same nowhere and rule based on technique alone. As a consequence, meritocrats are often educated to be bad leaders, and bad people…

Like Douthat, I’m not calling for a return to WASP-y domination. (Also like Douthat, I’d be excluded from that club.) A diverse elite is better than a monocultural elite. But there’s one vital thing that the WASPy elite had going for it that any elite (and there’s always an elite) should reclaim:

the WASPs had at least one clear advantage over their presently-floundering successors: They knew who and what they were.

What Anti-Poverty Programs Actually Reduce Poverty?

According to the Tax Policy Center,

The earned income tax credit (EITC) provides substantial support to low- and moderate-income working parents, but very little support to workers without qualifying children (often called childless workers). Workers receive a credit equal to a percentage of their earnings up to a maximum credit. Both the credit rate and the maximum credit vary by family size, with larger credits available to families with more children. After the credit reaches its maximum, it remains flat until earnings reach the phaseout point. Thereafter, it declines with each additional dollar of income until no credit is available (figure 1).

By design, the EITC only benefits working families. Families with children receive a much larger credit than workers without qualifying children. (A qualifying child must meet requirements based on relationship, age, residency, and tax filing status.) In 2018, the maximum credit for families with one child is $3,461, while the maximum credit for families with three or more children is $6,431.

…Research shows that the EITC encourages single people and primary earners in married couples to work (Dickert, Houser, and Scholz 1995; Eissa and Liebman 1996; Meyer and Rosenbaum 2000, 2001). The credit, however, appears to have little effect on the number of hours they work once employed. Although the EITC phaseout could cause people to reduce their hours (because credits are lost for each additional dollar of earnings, which is effectively a surtax on earnings in the phaseout range), there is little empirical evidence of this happening (Meyer 2002).

The one group of people that may reduce hours of work in response to the EITC incentives is lower-earning spouses in a married couple (Eissa and Hoynes 2006). On balance, though, the increase in work resulting from the EITC dwarfs the decline in participation among second earners in married couples.

If the EITC were treated like earnings, it would have been the single most effective antipoverty program for working-age people, lifting about 5.8 million people out of poverty, including 3 million children (CBPP 2018).

The EITC is concentrated among the lowest earners, with almost all of the credit going to households in the bottom three quintiles of the income distribution (figure 2). (Each quintile contains 20 percent of the population, ranked by household income.) Very few households in the fourth quintile receive an EITC (fewer than 0.5 percent).
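The phase-in / plateau / phase-out shape described above is simple enough to write down directly. In the sketch below, the maximum credit is the quoted 2018 figure for a one-child family; the phase-in rate, phase-out rate, and phase-out threshold are illustrative placeholders, not the actual IRS parameters:

```python
def stylized_eitc(earnings: float,
                  phase_in_rate: float = 0.34,       # illustrative placeholder
                  max_credit: float = 3461.0,        # 2018 maximum for one child (from the quote)
                  phase_out_start: float = 19000.0,  # illustrative placeholder
                  phase_out_rate: float = 0.16) -> float:  # illustrative placeholder
    """Credit phases in with earnings, plateaus at the maximum, then phases out."""
    credit = min(earnings * phase_in_rate, max_credit)
    if earnings > phase_out_start:
        credit -= (earnings - phase_out_start) * phase_out_rate
    return max(credit, 0.0)

for e in (5_000, 12_000, 20_000, 35_000, 45_000):
    print(f"earnings ${e:>6,}: credit ≈ ${stylized_eitc(e):,.0f}")
```

The shape is the policy-relevant part: because the credit only phases in with earnings, it rewards work at the bottom of the income distribution, which is what the employment findings quoted above are picking up.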

Recent evidence supports this view of the EITC. From a brand new article in Contemporary Economic Policy:

First, the evidence suggests that longer-run effects (“our working definition of ‘longer run’ in this study is 10 years,” pg. 2) of the EITC are to increase employment and to reduce poverty and public assistance, as long as we rely on national as well as state variation in EITC policy. Second, tighter welfare time limits also appear to reduce poverty and public assistance in the longer run. We also find some evidence that higher minimum wages, in the longer run, may lead to declines in poverty and the share of families on public assistance, whereas higher welfare benefits appear to have adverse longer-run effects, although the evidence on minimum wages and welfare benefits—and especially the evidence on minimum wages—is not robust to using only more recent data, nor to other changes. In our view, the most robust relationships we find are consistent with the EITC having beneficial longer-run impacts in terms of reducing poverty and public assistance, whereas there is essentially no evidence that more generous welfare delivers such longer-run benefits, and some evidence that more generous welfare has adverse longer-run effects on poverty and reliance on public assistance—especially with regard to time limits (pg. 21).

Let’s stick with programs that work.

Do Tariffs Cancel Out the Benefits of Deregulation?

In June, the Council of Economic Advisers released a report on the economic effects of the Trump administration’s deregulation. They estimate “that after 5 to 10 years, this new approach to Federal regulation will have raised real incomes by $3,100 per household per year. Twenty notable Federal deregulatory actions alone will be saving American consumers and businesses about $220 billion per year after they go into full effect. They will increase real (after-inflation) incomes by about 1.3 percent” (pg. 1).

David Henderson (former senior economist in Reagan’s Council of Economic Advisers) writes, “Do the authors make a good case for their estimate? Yes…I wonder, though, what the numbers would look like if they included the negative effects on real income of increased restrictions on immigration and increased restrictions on trade with Iran. (I’m putting aside increased tariffs, which also hurt real U.S. income, because tariffs are generally categorized as taxes, not regulation.)”

But what if we did include the tariffs? A recent policy brief suggests that the current savings from deregulation will actually be cancelled out by the new tariffs. According to the brief, the savings due to deregulation add up to $46.5 billion as of June. However, the tariffs imposed between January 2017 and June 2019 amount to a deadweight loss of $13.6 billion, and by the end of 2019 that loss is expected to grow by another $32.1 billion. If the currently planned tariffs are put into effect on top of the already existing ones, then we’re looking at a deadweight loss of up to $121.1 billion.
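Putting the brief's figures side by side makes the point plain; this is just the arithmetic on the numbers quoted above (all in billions of dollars):

```python
deregulation_savings = 46.5            # savings from deregulation as of June, per the policy brief
tariff_loss_through_june_2019 = 13.6   # deadweight loss from tariffs, Jan 2017 through June 2019
tariff_loss_rest_of_2019 = 32.1        # additional loss expected by the end of 2019
loss_with_planned_tariffs = 121.1      # upper bound if the currently planned tariffs take effect

enacted_losses = tariff_loss_through_june_2019 + tariff_loss_rest_of_2019
print(f"Deregulation savings:          ${deregulation_savings:.1f}B")
print(f"Losses from enacted tariffs:   ${enacted_losses:.1f}B")   # 45.7B, roughly a wash
print(f"Losses if planned tariffs hit: up to ${loss_with_planned_tariffs:.1f}B")
```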

Maybe if economists start putting clap emojis in their work, people will finally get that tariffs aren’t good for the economy.