There’s good reason to believe that adding nutritional requirements to government food programs is a better use of money and leads to better health outcomes for the people in said programs.
WIC (Women, Infants, and Children) is a federally funded, state-administered program that helps low-income women and children purchase healthy food. WIC has specific guidelines for the quantities and types of food recipients can purchase, all of which have to meet certain health standards. In this program there is no way to purchase soda, candy, pizza, baked sweets, ice cream, etc. SNAP (Supplemental Nutrition Assistance Program, often referred to as “food stamps”) is a federally funded program that helps low-income people purchase almost any food.
The USDA explains that SNAP is for purchasing any food or food product for home consumption and that this definition includes “soft drinks, candy, cookies, snack crackers, and ice cream” and similar items. Data suggest these types of purchases make up at least 17% of SNAP spending. In 2017, about 42 million people used SNAP at an average of $125.79 per person per month, meaning the government spent about $11.3 billion that year buying junk food for low-income people. What are the arguments for spending so much on junk rather than using those funds to ensure low-income people have high-quality food?
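Those figures can be sanity-checked with some back-of-envelope arithmetic (a sketch only, using the numbers quoted above; since the 17% share is a floor, this produces a lower bound in the same ballpark as the $11.3 billion figure):

```python
# Back-of-envelope check of the SNAP junk-food spending estimate.
participants = 42_000_000   # approximate SNAP participants, 2017
monthly_benefit = 125.79    # average benefit per person per month (USD)
junk_share = 0.17           # lower-bound share of spending on junk food

annual_total = participants * monthly_benefit * 12
junk_spending = annual_total * junk_share

print(f"Total annual SNAP benefits: ${annual_total / 1e9:.1f} billion")
print(f"Junk-food lower bound:      ${junk_spending / 1e9:.1f} billion")
```

With these inputs the annual total comes to roughly $63.4 billion and the 17% floor to roughly $10.8 billion, so the “about $11.3 billion” figure implies a junk-food share only slightly above that floor.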
Opponents of SNAP nutritional requirements give many reasons for why nutritional requirements are not feasible or effective: we can’t come up with clear standards for what is “healthy,” it would be too complicated and costly to implement such standards, restrictions wouldn’t stop people from buying unhealthy food with their own money, and people in higher income brackets purchase similar amounts of unhealthy food.
Yet WIC has managed to define what constitutes healthy food and implement a program based on those boundaries. In fact, the USDA describes WIC as “one of the nation’s most successful and cost-effective nutrition intervention programs.” There is evidence to suggest people participating in WIC (especially children) have better nutrition and health outcomes than their peers. Conversely, there is evidence to suggest people who receive SNAP benefits have worse nutrition than income-eligible people who don’t participate in SNAP. For example:
Comparing July to December in 2008 and 2011, increases were observed in breastfeeding initiation (72.2-77.5%); delaying introduction of solid foods until after 4 months of age (90.1-93.8%); daily fruit (87.0-91.6%), vegetable (78.1-80.8%), and whole grain consumption (59.0-64.4%) by children aged 1-4 years; and switches from whole milk to low-/nonfat milk by children aged 2-4 years (66.4-69.4%). In 1-year-old children, the proportion ≥95th percentile weight-for-recumbent length decreased from 15.1 to 14.2%; the proportion of children 2- to 4-year-old with body mass index (BMI) ≥95th percentile decreased from 14.6 to 14.2%.
The prevalence of obesity among young children from low-income families participating in WIC in U.S. states and territories was 14.5% in 2014. This estimate was higher than the national estimate (8.9%) among all U.S. children in a slightly different age group (2–5 years) based on data from the 2011–2014 National Health and Nutrition Examination Survey (7). Since 2010, statistically significant downward trends in obesity prevalence among WIC young children have been observed overall, in all five racial/ethnic groups, and in 34 of the 56 WIC state agencies, suggesting that prevention initiatives are making progress, potentially by impacting the estimated excess of calories eaten versus energy expended for this vulnerable group (8).
Child SNAP recipients consume more sugary beverages, processed meats, and high-fat dairy products, but fewer nuts, seeds, and legumes than income-eligible nonparticipants. Similarly, adult SNAP recipients consume more fruit juice, potatoes, red meat, and sugary beverages, but fewer whole grains than income-eligible nonparticipants. In another study, SNAP participants had lower dietary quality scores overall, and consumed significantly fewer fruits, vegetables, seafood, and plant proteins, but significantly more added sugar than income-eligible nonparticipants.
One study specifically compared SNAP nutrition to WIC nutrition:
In one study comparing the grocery store purchases of SNAP and WIC households in New England, SNAP households purchased more than double the amount of sugary beverages per month (399 ounces) than WIC households (169 ounces), 72% of which were paid for with SNAP dollars. In a 3-month study, new SNAP participants significantly increased their consumption of refined grains compared with low-income people who did not join. In a study of Hispanic Texan women, SNAP participants consumed 26% more sugary beverages and 38% more sweets and desserts than low-income nonparticipants.
Furthermore, most of the people who use SNAP believe the program should not allow recipients to purchase unhealthy food:
54% of SNAP participants supported removing sugary drinks from SNAP eligibility. In another survey of 522 SNAP stakeholders, 78% of respondents agreed that soda, and 74% agreed that “foods of low nutritional value” such as candy and sugar-sweetened fruit drinks should not be eligible for purchase with benefits. Seventy-seven percent of respondents believed that SNAP benefits should be consistent with the DGAs [Dietary Guidelines for Americans], and 54% thought that SNAP should be reformulated into a defined food package similar to WIC.
I want to live in a society where people are healthy and no one goes hungry. SNAP can and should serve both goals.
The Social Dilemma is a newly available documentary on Netflix about the peril of social networks.
The documentary does a decent job of introducing some of the ways social networks (Facebook, Twitter, Pinterest, etc.) are negatively impacting society. If this is your entry point to the topic, you could do worse.
But if you’re looking for really thorough analysis of what is going wrong or for possible solutions, then this documentary will leave you wanting more. Here are four specific topics–three small and one large–where The Social Dilemma fell short.
AI Isn’t That Impressive
I published a piece in June on Why I’m an AI Skeptic and a lot of what I wrote then applies here. Terms like “big data” and “machine learning” are overhyped, and non-experts don’t realize that these tools are only at their most impressive in a narrow range of circumstances. In most real-world cases, the results are dramatically less impressive.
The reason this matters is that a lot of the oomph of The Social Dilemma comes from scaring people, and AI just isn’t actually that scary.
I don’t fault the documentary for not going too deep into the details of machine learning. Without a background in statistics and computer science, it’s hard to get into the details. That’s fair.
I do fault them for sensationalism, however. At one point Tristan Harris (one of the interviewees) makes a really interesting point that we shouldn’t be worried about when AI surpasses human strengths, but when it surpasses human weaknesses. We haven’t reached the point where AI is better than a human at the things humans are good at–creative thinking, language, etc. But we’ve already long since passed the point where AI is better than humans at things humans are bad at, such as memorizing and crunching huge data sets. If AI is deployed in ways that leverage human weaknesses, like our cognitive biases, then we should already be concerned. So far this is reasonable, or at least interesting.
But then his next slide (they’re showing a clip of a presentation he was giving) says something like: “Checkmate humanity.”
I don’t know if the sensationalism is in Tristan’s presentation or The Social Dilemma’s editing, but either way I had to roll my eyes.
All Inventions Manipulate Us
At another point, Tristan tries to illustrate how social media is fundamentally unlike other human inventions by contrasting it with a bicycle. “No one got upset when bicycles showed up,” he says. “No one said…. we’ve just ruined society. Bicycles are affecting society, they’re pulling people away from their kids. They’re ruining the fabric of democracy.”
Of course, this isn’t really true. Journalists have always sought sensationalism and fear as a way to sell their papers, and–as this humorous video shows–there was all kinds of panic around the introduction of bicycles.
Tristan’s real point, however, is that bicycles were a passive invention. They don’t actively badger you to get you to go on bike rides. They just sit there, benignly waiting for you to decide to use them or not. In this view, you can divide human inventions into everything before social media (inanimate objects that obediently do our bidding) and after social media (animate objects that manipulate us into doing their bidding).
That dichotomy doesn’t hold up.
First of all, every successful human invention changes behavior individually and collectively. If you own a bicycle, then the route you take to work may very well change. In a way, the bike does tell you where to go.
To make this point more strongly, try to imagine what 21st century America would look like if the car had never been invented. No interstate highway system, no suburbs or strip malls, no car culture. For better and for worse, the mere existence of a tool like the car transformed who we are both individually and collectively. All inventions have cultural consequences like that, to a greater or lesser degree.
Second, social media is far from the first invention that explicitly sets out to manipulate people. If you believe the argumentative theory, then language and even rationality itself evolved primarily as ways for our primate ancestors to manipulate each other. It’s literally what we evolved to do, and we’ve never stopped.
Propaganda, disinformation campaigns, and psy-ops are one obvious category of examples with roots stretching back into prehistory. But, to bring things closer to social networks, all ad-supported broadcast media have basically the same business model: manipulate people to captivate their attention so that you can sell them ads. That’s how radio and TV got their commercial start: with the exact same mission statement as GMail, Google search, or Facebook.
So much for the idea that you can divide human inventions into before and after social media. It turns out that all inventions influence the choices we make and plenty of them do so by design.
That’s not to say that nothing has changed, of course. The biggest difference between social networks and broadcast media is that your social networking feed is individualized.
With mass media, companies had to either pick and choose their audience in broad strokes (Saturday morning for kids, prime time for families, late night for adults only) or try to address two audiences at once (inside jokes for the adults in animated family movies marketed to children). With social media, it’s kind of like you have a radio station or a TV studio that is geared just towards you.
Thus, social media does present some new challenges, but we’re talking about advancements and refinements to humanity’s oldest game–manipulating other humans–rather than some new and unprecedented development with no precursor or context.
Consumerism is the Real Dilemma
The most interesting subject in the documentary, to me at least, was Jaron Lanier. When everyone else was repeating that cliché about “you’re the product, not the customer” he took it a step or two further. It’s not that you are the product. It’s not even that your attention is the product. What’s really being sold by social media companies, Lanier pointed out, is the ability to incrementally manipulate human behavior.
This is an important point, but it raises a much bigger issue that the documentary never touched.
This is the amount of money spent in the US on advertising as a percent of GDP over the last century:
It’s interesting to note that we spent a lot more (relative to the size of our economy) on advertising in the 1920s and 1930s than we do today. What do you think companies were buying for their advertising dollars in 1930 if not “the ability to incrementally manipulate human behavior”?
Because if advertising doesn’t manipulate human behavior, then why spend the money? If a billboard or a movie trailer or a radio spot couldn’t change anyone’s behavior, nobody would ever spend money on those things.
This is the crux of my disagreement with The Social Dilemma. The poison isn’t social media. The poison is advertising. The danger of social media is just that (within the current business model) it’s a dramatically more effective method of delivering the poison.
Let me stipulate that advertising is not an unalloyed evil. There’s nothing intrinsically wrong with showing people a new product or service and trying to persuade them to pay you for it. The fundamental premise of a market economy is that voluntary exchange is mutually beneficial. It leaves both people better off.
And you can’t have voluntary exchange without people knowing what’s available. Thus, advertising is necessary to human commerce and is a part of an ecosystem of flourishing, mutually beneficial exchanges and healthy competition. You could not have modern society without advertising of some degree and type.
That doesn’t mean the amount of advertising–or the kind of advertising–that we accept in our society is healthy. As with basically everything, the difference between poison and medicine is found in the details of dosage and usage.
There was a time, not too long ago, when the Second Industrial Revolution led to such dramatically increased levels of production that economists seriously theorized about ever shorter work weeks with more and more time spent pursuing art and leisure with our friends and families. Soon, we’d spend only ten hours a week working, and the rest developing our human potential.
And yet in the time since then, we’ve seen productivity skyrocket (we can make more and more stuff with the same amount of time) while hours worked have remained roughly steady. The simplest reason for this? We’re addicted to consumption. Instead of holding production basically constant (and working fewer and fewer hours), we’ve tried to maximize consumption by keeping as busy as possible. This addiction, not necessarily to having stuff but to acquiring it, manifests in some really weird cultural anomalies that–if we witnessed them from an alien perspective–would probably strike us as dysfunctional or even pathological.
I’ll start with a personal example: when I’m feeling a little down I can reliably get a jolt of euphoria from buying something. Doesn’t have to be much. Could be a gadget or a book I’ve wanted on Amazon. Could be just going through the drive-thru. Either way, clicking that button or handing over my credit card to the Chick-Fil-A worker is a tiny infusion of order and control in a life that can seem confusingly chaotic and complex.
It’s so small that it’s almost subliminal, but every transaction is a flex. The benefit isn’t just the food or book you purchase. It’s the fact that you demonstrated the power of being able to purchase it.
From a broader cultural perspective, let’s talk about unboxing videos. These are videos–you can find thousands upon thousands of them on YouTube–where someone gets a brand new gizmo and films a kind of ritualized process of unpacking it.
This is distinct from a product review (a separate and more obviously useful genre). Some unboxing videos have little tidbits of assessment, but that’s beside the point. The emphasis is on the voyeuristic appeal of watching someone undress an expensive, virgin item.
And yeah, I went with deliberately sexual language in that last sentence because it’s impossible not to see the parallels between brand newness and virginity, or between ornate and sophisticated product packaging and fashionable clothing, or between unboxing an item and unclothing a person. I’m not saying it’s literally sexual, but the parallels are too strong to ignore.
These do not strike me as the hallmarks of a healthy culture, and I haven’t even touched on the vast amounts of waste. Of course there’s the literal waste, both from all that aforementioned packaging and from replacing consumer goods (electronics, clothes, etc.) at an ever-faster pace. There’s also the opportunity cost, however. If you spend three or four or ten times more on a pair of shoes to get the right brand and style than you could on a pair of equally serviceable shoes without the right branding, well… isn’t that waste? You could have spent the money on something else or, better still, saved it or even worked less.
This rampant consumerism isn’t making us objectively better off or happier. It’s impossible to separate consumerism from status, and status is a zero-sum game. For every winner, there must be a loser. And that means that, as a whole, status-seeking can never make us better off. We’re working ourselves to death to try and win a game that doesn’t improve our world. Why?
Advertising is the proximate cause. Somewhere along the way advertisers realized that instead of trying to persuade people directly that this product would serve some particular need, you could bypass the rational argument and appeal to subconscious desires and fears. Doing this allows for things like “brand loyalty.” It also detaches consumption from need. You can have enough physical objects, but can you ever have enough contentment, or security, or joy, or peace?
So car commercials (to take one example) might mention features, but most of the work is done by stoking your desires: for excitement if it’s a sports car, for prestige if it’s a luxury car, or for competence if it’s a pickup truck. Then those desires are associated with the make and model of the car and presto! The car purchase isn’t about the car anymore. It’s about your aspirations as a human being.
The really sinister side-effect is that when you hand over the cash to buy whatever you’ve been persuaded to buy, what you’re actually hoping for is not a car or ice cream or a video game system. What you’re actually seeking is the fulfillment of a much deeper desire for belonging or safety or peace or contentment. Since no product can actually meet those deeper desires, advertising simultaneously stokes longing and redirects us away from avenues that could potentially fulfill it. We’re all like Dumbledore in the cave, drinking poison that only makes us thirstier and thirstier.
One commercial will not have any discernible effect, of course, but life in 21st century America is a life saturated by these messages.
And if you think it’s bad enough when the products sell you something external, what about all the products that promise to make you better? Skinnier, stronger, tanner, whatever. The whole outrage of fashion models photoshopped past biological possibility is just one corner of the overall edifice of an advertising ecosystem that is calculated to make us hungry and then sell us meals of thin air.
I developed this theory that advertising fuels consumerism, which sabotages our happiness at an individual and social level, when I was a teenager in the 1990s. There was no social media back then.
So, getting back to The Social Dilemma, the problem isn’t that life was fine and dandy and then social networking came and destroyed everything. The problem is that we already lived in a sick, consumerist society where advertising inflamed desires and directed them away from any hope of fulfillment and then social media made it even worse.
After all, everything that social media does has been done before.
News feeds are tweaked to keep you scrolling endlessly? Radio stations have endlessly fiddled with their formulas for placing advertisements to keep you from changing that dial. TV shows were written around advertising breaks to make sure you waited for the action to continue. (Watch any old episode of Law and Order to see what I mean.) Social media does the same thing, it’s just better at it. (Partially through individualized feeds and AI algorithms, but also through effectively crowd-sourcing the job: every meme you post contributes to keeping your friends and family ensnared.)
Advertisements bypassing objective appeals to quality or function and appealing straight to your personal identity, your hopes, your fears? Again, this is old news. Consider the fact that you immediately picture in your mind different stereotypes for the kind of person who drives a Ford F-150, a Subaru Outback, or a Honda Civic. Old fashioned advertisements were already well on the way to fracturing society into “image tribes” that defined themselves and each other at least in part in terms of their consumption patterns. Social media just doubled down on that trend by allowing increasingly smaller and more homogeneous tribes to find and socialize with each other (and be targeted by advertisers).
So the biggest thing that was missing from The Social Dilemma was the realization that social media isn’t some strange new problem. It’s an old problem made worse.
The final shortcoming of The Social Dilemma is that there were no solutions offered. This is an odd gap because at least one potential solution is pretty obvious: stop relying on ad-supported products and services. If you paid $5/month for your Facebook account and that was their sole revenue stream (no ads allowed), then a lot of the perverse incentives around manipulating your feed would go away.
Another solution would be stricter privacy controls. As I mentioned above, the biggest differentiator between social media and older, broadcast media is individualization. I’ve read (can’t remember where) about the idea of privacy collectives: groups of consumers could band together, withhold their data from social media groups, and then dole it out in exchange for revenue (why shouldn’t you get paid for the advertisements you watch?) or just refuse to participate at all.
These solutions have drawbacks. It sounds nice to get paid for watching ads (nicer than the alternative, anyway) and to have control over your data, but there are some fundamental economic realities to consider. “Free” services like Facebook and Gmail and YouTube can never actually be free. Someone has to pay for the servers, the electricity, the bandwidth, the developers, and all of that. If advertisers don’t, then consumers will need to. Individuals can opt out and basically free-ride on the rest of us, but if everyone actually did it then the system would collapse. (That’s why I don’t use ad blockers, by the way. It violates the categorical imperative.)
And yeah, paying $5/month to Twitter (or whatever) would significantly change the incentives to manipulate your feed, but it wouldn’t actually make them go away. They’d still have every incentive to keep you as highly engaged as possible, to make sure you never canceled your subscription and to get you to enlist all your friends to sign up, too.
Still, it would have been nice if The Social Dilemma had spent some time talking about specific possible solutions.
On the other hand, here’s an uncomfortable truth: there might not be any plausible solutions. Not the kind a Netflix documentary is willing to entertain, anyway.
In the prior section, I said “advertising is the proximate cause” of consumerism (emphasis added this time). I think there is a deeper cause, and advertising–the way it is done today–is only a symptom of that deeper cause.
When you stop trying to persuade people to buy your product directly–by appealing to their reason–and start trying to bypass their reason to appeal to subconscious desires you are effectively dehumanizing them. You are treating them as things to be manipulated. As a means to an end. Not as people. Not as ends in themselves.
That’s the supply side: consumerism is a reflection of our willingness to tolerate treating each other as things. We don’t love others.
On the demand side, the emptier your life is, the more susceptible you become to this kind of advertising. Someone who actually feels belonging in their life on a consistent basis isn’t going to be easily manipulated into buying beer (or whatever) by appealing to that need. Why would they? The need is already being met.
That’s the demand side: consumerism is a reflection of how much meaning is missing from so many of our lives. We don’t love God (or, to be less overtly religious, feel a sense of duty and awe towards transcendent values).
As long as these underlying dysfunctions are in place, we will never successfully detoxify advertising through clever policies and incentives. There’s no conceivable way to reasonably enforce a law that says “advertising that objectifies consumers is illegal,” and any such law would violate the First Amendment in any case.
The difficult reality is that social media is not intrinsically toxic any more than advertising is intrinsically toxic. What we’re witnessing is our cultural maladies amplified and reflected back through our technologies. They are not the problem. We are.
Therefore, the one and only way to detoxify our advertising and social media is to overthrow consumerism at the root. Not with creative policies or stringent laws and regulations, but with a fundamental change in our cultural values.
We have the template for just such a revolution. The most innovative inheritance of the Christian tradition is the belief that, as children of God, every human life is individually and intrinsically valuable. An earnest embrace of this principle would make manipulative advertising unthinkable and intolerable. Christianity–like all great religions, but perhaps with particular emphasis–also teaches that a valuable life is found only in the service of others, service that would fill the emptiness in our lives and make us dramatically less susceptible to manipulation in the first place.
This is not an idealistic vision of Utopia. I am not talking about making society perfect. Only making it incrementally better. Consumerism is not binary. The sickness is a spectrum. Every step we could take away from our present state and towards a society more mindful of transcendent ideals (truth, beauty, and the sacred) and more dedicated to the love and service of our neighbors would bring a commensurate reduction in the sickness of manipulative advertising that results in tribalism, animosity, and social breakdown.
There’s a word for what I’m talking about, and the word is: repentance. Consumerism, the underlying cause of toxic advertising that is the kernel of the destruction wrought by social media, is the cultural incarnation of our pride and selfishness. We can’t jury rig an economic or legal solution to a fundamentally spiritual problem.
We need to renounce what we’re doing wrong, and learn–individually and collectively–to do better.
Last month, Harper’s published an anti-cancel culture statement: A Letter on Justice and Open Debate. The letter was signed by a wide variety of writers and intellectuals, ranging from Noam Chomsky to J. K. Rowling. It was a kind of radical centrist manifesto, including major names like Jonathan Haidt and John McWhorter (two of my favorite writers) and also crossing lines to pick up folks like Matthew Yglesias (not one of my favorite writers, but I give him respect for putting his name to this letter).
The letter kicked up a storm of controversy from the radical left, which basically boiled down to two major contentions:

1. There is no such thing as cancel culture.
2. Everyone has limits on what speech they will tolerate, so there’s no difference between the social justice left and the liberal left other than where to draw the lines.
While the letter itself, published by the magazine Harper’s, doesn’t use the term, the statement represents a bleak apogee in the yearslong, increasingly contentious debate over “cancel culture.” The American left, we are told, is imposing an Orwellian set of restrictions on which views can be expressed in public. Institutions at every level are supposedly gripped by fears of social media mobs and dire professional consequences if their members express so much as a single statement of wrongthink.
This is false. Every statement of fact in the Harper’s letter is either wildly exaggerated or plainly untrue. More broadly, the controversy over “cancel culture” is a straightforward moral panic. While there are indeed real cases of ordinary Americans plucked from obscurity and harassed into unemployment, this rare, isolated phenomenon is being blown up far beyond its importance.
There is a kernel of truth to what Hobbes is saying, but it is only a kernel. Not that many ordinary Americans are getting “canceled,” and some of those who are canceled are not entirely expunged from public life. They don’t all lose their jobs.
But then, they don’t all have to lose their jobs for the rest of us to get the message, do they?
The basic analytical framework here is wrong. Hobbes assumes that the “profound consequences” of cancel culture have yet to manifest. “Again and again,” he writes, “the decriers of ‘cancel culture’ intimate that if left unchecked, the left’s increasing intolerance for dissent will result in profound consequences.”
The reason he can talk about hypothetical future consequences is that he’s thinking about the wrong consequences. Hobbes appears to think that the purpose of cancel culture is to cancel lots and lots of people. If we don’t see hordes–thousands, maybe tens of thousands–of people canceled, then there aren’t any “profound consequences”.
This is absurd. The mob doesn’t break kneecaps for the sake of breaking kneecaps. They break kneecaps to send a message to everyone else to pay up without resisting. Intimidation campaigns do not exist to make examples out of everyone. They make examples out of (a few) people in order to intimidate many more.
Cancel culture is just such an intimidation campaign, and so the “profound consequences” aren’t the people who are canceled. The “profound consequences” are the people–not thousands or tens of thousands but millions–who hide their beliefs and stop speaking their minds because they’re afraid.
And yes, I mean millions. Cato does polls on that topic, and they found that 58% of Americans had “political views they’re afraid to share” in 2017 and, as of just a month ago, that number has climbed to 62%.
Gee, nearly two-thirds of Americans are afraid to speak their minds. How’s that for “profound consequences”?
Obviously Cato has a viewpoint here, but other studies are finding similar results. Politico did their own poll, and while it didn’t ask about self-censoring, it did ask what Americans think about cancel culture. According to the poll, 46% think it has gone “too far” while only 10% think it has gone “not far enough”.
Moreover, these polls also reinforce something obvious: cancel culture is not just some general climate of acrimony. According to both the Cato and Politico polls, Republicans are much more likely to self-censor as a result of cancel culture (77% vs 52%) and Democrats are much more likely to participate in the silencing (~50% of Democrats “have voiced their displeasure with a public figure on social media” vs. ~30% of Republicans).
Contrast these poll results with what Hobbes calls the “pitiful stakes” of cancel culture. He mocks low-grade intimidation like “New York Magazine published a panicked story about a guy being removed from a group email list.” Meanwhile, more than three quarters of Republicans are afraid to be honest about their own political beliefs. We don’t need to worry about hypothetical future profound consequences. They’re already here.
What Makes Cancel Culture Different
The second contention–which is that everyone has at least some speech they’d enthusiastically support canceling–is a more serious objection. After all: it’s true. All but the very most radical of free speech defenders will draw the line somewhere. If this is correct, then isn’t cancel culture just a redrawing of boundaries that have always been present?
To which I answer: no. There really is something new and different about cancel culture, and it’s not just the speed or ferocity of its adherents.
The difference goes back to a post I wrote a few months ago about the idea of an ideological demilitarized zone. I don’t think I clearly articulated my point in that post, so I’m going to reframe it (very briefly) in this one.
A normal, healthy person will draw a distinction between opinions they disagree with and actively oppose and opinions they disagree with that merit toleration or even consideration. That’s what I call the “demilitarized zone”: the collection of opinions that you think are wrong but also reasonable and defensible.
Cancel culture has no DMZ.
Think I’m exaggerating? This is a post from a Facebook friend (someone I know IRL) just yesterday:
You can read the opinion of J. K. Rowling for yourself here. Agree or disagree, it is very, very hard for any reasonable person to come away thinking that Rowling has anything approaching personal animus towards anyone who is transgender for being transgender. (The kind of animus that might justify calling someone a “transphobic piece of sh-t” and trying to retcon her out of reality.) In the piece, she writes with empathy and compassion of the transgender community and states emphatically that, “I know transition will be a solution for some gender dysphoric people,” adding that:
Again and again I’ve been told to ‘just meet some trans people.’ I have: in addition to a few younger people, who were all adorable, I happen to know a self-described transsexual woman who’s older than I am and wonderful.
So here’s the difference between cancel culture and basically every other viewpoint on the political spectrum: other viewpoints can acknowledge shades of grey and areas where reasonable people can see things differently; cancel culture can’t and won’t. Cancel culture is binary (ironically). You’re either 100% in conformity with the ideology or you’re “a —–phobic piece of sh-t”.
This is not incidental, by the way. Liberal traditions trace their roots back to the Enlightenment and include an assumption that truth exists as an objective category. As long as that’s the case–as long as there’s an objective reality out there–then there is a basis for discussion about it. There’s also room for mistaken beliefs about it.
Cancel culture traces its roots back to critical theory, which rejects notions of reason and objective truth and sees instead only power. It’s not the case that people are disagreeing about a mutually accessible, external reality. Instead, all we have are subjective truth claims which can be maintained–not by appeal to evidence or logic–but only through the exercise of raw power.
Liberal traditions–be they on the left or on the right–view conflict through a lens that is philosophically compatible with humility, correction, cooperation, and compromise. That’s not to say that liberal traditions actually inhabit some kind of pluralist Utopia where no one plays dirty to win. It’s not like American politics (or politics anywhere) existed in some kind of genteel Garden of Eden until critical theory showed up. But no matter how acrimonious or dirty politics got before cancel culture, there was also the potential for cross-ideological discussion. Cancel culture doesn’t even have that.
This means that, while it’s possible for other viewpoints to coexist in a pluralist society, it is not possible for cancel culture to do the same. It isn’t a different variety of the same kind of thing. It’s a new kind of thing, a totalitarian ideology that has no self-limiting principle and views any and all dissent as an existential threat because its own truth claims are rooted solely in an appeal to power. For cancel culture, being right and winning are the same thing, and every single debate is a facet of the same existential struggle.
So yes, all ideologies want to cancel something else. But only cancel culture wants to cancel everything else.
Lots of responders to the Harper’s letter pointed out that the signers were generally well-off elites. It seemed silly, if not outright hypocritical, for folks like that to whine about cancel culture, right?
My perspective is rather different. As someone who’s just an average Joe with no book deals, no massive social media following, no tenure, nor anything like that: I deeply appreciate someone with J. K. Rowling’s stature trading some of her vast hoard of social capital to keep the horizons of public discourse from narrowing ever farther.
And that’s exactly why the social justice left hates her so much. They understand power, and they know how crippling it is to their cause to have someone like her demur from their rigid orthodoxy. Their concern isn’t alleviated because her dissent is gentle and reasonable. It’s worsened, because it makes it even harder to cancel her and underscores just how toxic their totalitarian ideology really is.
I believe in objective reality. I believe in truth. But I’m pragmatic enough to understand that power is real, too. And when someone like J. K. Rowling uses some of her power in defense of liberalism and intellectual diversity, I feel nothing but gratitude for the help.
We who want to defend the ideals of classical liberalism know just how much we could use it.
Came across this article in my Facebook feed: Children of Ted. The lead-in states:
Two decades after his last deadly act of ecoterrorism, the Unabomber has become an unlikely prophet to a new generation of acolytes.
I don’t have a ton of patience for this whole line of reasoning, but it’s trendy enough that I figure I ought to explain why it’s so silly.
Critics of industrialization are far from new, and obviously they have a point. As long as we don’t live in a literal utopia, there will be things wrong with our society. They are unlikely to get fixed without acknowledging them. What’s more, in any sufficiently complex system (and human society is pretty complex), any change is going to have both positive and negative effects, many of which will not be immediately apparent.
So if you want to point out that there are bad things in our society: yes, there are. If you want to point out that this or that particular advance has had deleterious side effects: yes, all changes do. But if you take the position that we would have been better off in a pre-modern, pre-industrial, or even pre-agrarian society: you’re a hypocritical nut job.
Harari is all-in for the hypothesis that the Agricultural Revolution was a colossal mistake. This is not a new idea. I’ve come across it several times, and when I did a quick Google search just now I found a 1987 article by Jared Diamond with the subtle title: The Worst Mistake in the History of the Human Race. Diamond’s argument then is as silly as Harari’s argument is now, and it boils down to this: life as a hunter-gatherer is easy. Farming is hard. Ergo, the Agricultural Revolution was a bad deal. If we’d all stuck around being hunter-gatherers we’d be happier.
There are multiple problems with this argument, and the one that I chose to focus on at the time is that it’s hedonistic. Another observation: if being a hunter-gatherer is so great, nothing’s really stopping Diamond or Harari from living that way. I’m not saying it would be trivial, but of all the folks who sagely nod their heads and agree with the books and articles that claim our illiterate ancestors had it so much better… how many are even seriously making the attempt?
The argument I want to make is slightly different from the ones I’ve made before and is based on economics.
Three fundamental macroeconomic concepts are production, consumption, and investment. Every year a society produces a certain amount of stuff (mining minerals, refining them, turning them into goods, growing crops, etc.). All of that stuff is eventually used in one of two ways: either it’s consumed (you eat the crops) or invested (you plant the seeds instead of eating them).
From a material standpoint, the biggest change in human history has been the dramatic rise in per-capita production over the last few centuries, especially during the Industrial Revolution. This is often seen as a triumph of science, but that is mostly wrong. Virtually none of the important inventions of the Industrial Revolution were produced by scientists or even by laypersons attempting to apply scientific principles. They were almost uniformly invented by self-taught tinkerers experimenting with practical rather than theoretical innovations.
Another way to see this is to observe that many of the “inventions” of the Industrial Revolution had been discovered many times in the past. A good example of this is the steam engine. In “Destiny Disrupted,” Tamim Ansary observes:
Often, we speak of great inventions as if they make their own case merely by existing. But in fact, people don’t start building and using a device simply because it’s clever. The technological breakthrough represented by an invention is only one ingredient in its success. The social context is what really determines whether it will take. The steam engine provides a case in point. What could be more useful? What could be more obviously world-changing? Yet the steam engine was invented in the Muslim world over three centuries before it popped up in the West, and in the Muslim world it didn’t change much of anything. The steam engine invented there was used to power a spit so that a whole sheep might be roasted efficiently at a rich man’s banquet. (A description of this device appears in a 1551 book by the Turkish engineer Taqi al-Din.) After the spit, however, no other application for the device occurred to anyone, so it was forgotten.
Ansary understands that the key ingredient in whether an invention takes off (like the steam engine in Western Europe in the 18th century) or dies stillborn (like the steam engine in the 16th-century Islamic world) is the social context around it.
Unfortunately, Ansary mostly buys into the same absurd notion that I’m debunking, which is that all this progress is a huge mistake. According to him, the Chinese could have invented mechanized industry in the 10th century, but the benevolent Chinese state had the foresight to see that this would take away jobs from its peasant class and, being benevolent, opted instead to keep the Chinese work force employed.
This is absurd. First, because there’s no chance that the Chinese state (or anyone) could have foreseen the success and consequences of mechanized industry in the 10th century and made policy based on it even if they’d wanted to. Second, because the idea that it’s better to keep society inefficient rather than risk unemployment is, in the long run, disastrous.
According to Ansary, the reason that steam engines, mechanized industry, etc. all took place in the West was misanthropic callousness:
Of course, this process [modernization] left countless artisans and craftspeople out of work, but this is where 19th century Europe differed from 10th century China. In Europe, those who had the means to install industrial machinery had no particular responsibility for those whose livelihood would be destroyed by a sudden abundance of cheap machine-made goods. Nor were the folks they affected down-stream their kinfolk or fellow tribesmen–just strangers who they had never met and would never know by name. What’s more, it was somebody else’s job to deal with the social disruptions caused by widespread unemployment, not theirs. Going ahead with industrialization didn’t signify some moral flaw in them, it merely reflected the way this particular society was compartmentalized. The Industrial Revolution could take place only where certain social preconditions existed, and in Europe at that time they happened to exist.
Not a particular moral flaw in the individual actors, Ansary concedes, but still a society that was wantonly reckless and unconcerned with the fate of its poor relative to the enlightened empires that foresaw the Industrial Revolution from end-to-end and declined for the sake of their humble worker class.
The point is that when a society has the right incentives (I’d argue that we need individual liberty via private property and a restrained state alongside compartmentalization) individual innovations are harnessed, incorporated, and built upon in a snowball effect that leads to ever and ever greater productivity. A lot of the productivity comes from the cool new machines, but not all of it.
You see, once you have a few machines that give that initial boost to productivity, you free up people in your society to do other things. When per-capita production is very, very low, everyone has to be a farmer. You can have a tiny minority doing rudimentary crafts, but the vast majority of your people need to work day-in and day-out just to provide enough food for the whole population not to starve to death.
When per-capita production is higher, fewer and fewer people need to work creating the basic rudiments (food and clothes), and this frees people up to specialize. And specialization is the second half of the secret (along with new machines) that leads to the virtuous cycle of modernization. New tools boost productivity, this frees up new workers to try doing new things, and some of those new things include making even more new tools.
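As a toy numerical sketch of this virtuous cycle (my own made-up numbers, not a real growth model), the difference between consuming everything and reinvesting a share of output compounds quickly:

```python
def simulate(years, invest_share, output=100.0, return_on_capital=0.2):
    """Toy model: each year, the share of output that is invested
    (rather than consumed) raises the next year's production."""
    for _ in range(years):
        investment = output * invest_share        # output not consumed is invested
        output += investment * return_on_capital  # new tools raise future production
    return output

# A society that consumes all of its output stays at 100 units forever;
# one that reinvests 20% compounds at 4% a year and pulls far ahead.
print(simulate(50, 0.0))  # 100.0
print(simulate(50, 0.2))  # roughly 7x larger after 50 years
```

The exact numbers are arbitrary; the point is the feedback loop, where this year’s investment raises next year’s production, which funds more investment.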
I’m giving you the happy side of the story. Some people go from being farmers to being inventors. I do not mean to deny but simply to balance the unhappy side of the story, which is that some people go from being skilled workers to being menial laborers if a machine renders their skills obsolete. That also happens, although it’s worth noting that the threat from modernization is generally not to the very poorest. Americans like to finger-wag at “sweatshops”, but if your alternative is subsistence farming, then even a sweatshop may very well look appealing. Which is why so many of the very poorest keep migrating from farms to cities (in China, for example) and why the opposition to modernization never comes from the poorest classes (who have little to lose) but from the precarious members of the middle class (who do).
So my high-level story of modernization has a couple of key points.
If you want a high standard of living for a society, you need a high level of per capita production.
You get a high level of per capita production through a positive feedback loop between technological innovation and specialization. (This might be asymptotic.)
The benefits of this positive feedback loop include high-end stuff (like modern medicine) and also things we take for granted. And I don’t mean electricity (although, that, too) but also literacy.
The costs of this positive feedback loop include the constant threat of obsolescence for at least some workers, along with greater capacity to destroy on an industrial scale (either the environment or each other).
So the fundamental question you have to ask is whether you want to try and figure out how to manage the costs so that you can enjoy the benefits, or whether the whole project isn’t worth it and we should just give up and start mailing bombs to each other until it all comes crashing down.
The part that really frustrates me the most, the part that spurred me to write this today, is that folks like Ted Kaczynski (the original Unabomber) or John Jacobi (the first of his acolytes profiled in the New York Mag story) are only even possible in a modern, industrialized society.
They are literate, educated denizens of a society that produces so much stuff that lots of its members can survive without producing much at all. We live in an age of super-abundance, and it turns out that abundance creates its own variety of problems. Obesity is one. Another, apparently, is a certain class of thought that advocates social suicide.
Because that’s what we’re talking about. As much as Diamond and Harari are just toying with the notion because it sells books and makes them look edgy, folks like John Jacobi or Ted Kaczynski would–if they had their way–bring about a world without any of the things that make their elitist theorizing possible in the first place.
It is a great tragedy of human nature that the hard-fought victories of yesterday’s heroic pioneers and risk-takers are casually dismissed by the following generation who don’t even realize that their apparent radicalism is just another symptom of super-abundance.
They will never succeed in reducing humanity to a pre-industrial state, but they–and others who lack the capacity to appreciate what they’ve been given–can make plenty of trouble along the way. The hope is that the rising generation will have a more constructive, aspirational, and less suicidal frame of mind.
The meritocracy has come in for a lot of criticism recently, basically in the form of two arguments.
There’s a book by Daniel Markovits called The Meritocracy Trap that basically argues that meritocracy makes everyone miserable and unequal by creating this horrific grind to get into the most elite colleges and then, after you get your elite degree, to grind away working 60–100 hours a week to maintain your position at the top of the corporate hierarchy.
There was also a very interesting column by Ross Douthat that makes a separate but related point. According to Douthat, the WASP-y elite that dominated American society up until the early 20th century decided to “dissolve their own aristocracy” in favor of a meritocracy, but the meritocracy didn’t work out as planned because it sucks talent away from small locales (killing off the diverse regional cultures that we used to have) and because:
the meritocratic elite inevitably tends back toward aristocracy, because any definition of “merit” you choose will be easier for the children of these self-segregated meritocrats to achieve.
What Markovits and Douthat both admit without really admitting it is one simple fact: the meritocracy isn’t meritocratic.
Meritocracy is a political system in which economic goods and/or political power are vested in individual people on the basis of talent, effort, and achievement, rather than wealth or social class. Advancement in such a system is based on performance, as measured through examination or demonstrated achievement.
When people talk about meritocracy today, they’re almost always referring to the Ivy League and then–working forward and backward–to the kinds of feeder schools and programs that prepare kids to make it into the Ivy League and the types of high-powered jobs (and the culture surrounding them) that Ivy League students go on to after they graduate.
My basic point is a pretty simple one: there’s nothing meritocratic about the Ivy League. The old WASP-y elite did not, as Douthat put it, “dissolve”. It just went into hiding. Americans like to pretend that we’re a classless society, but it’s a fiction. We do have class. And the nexus for class in the United States is the Ivy League.
If Ivy League admission were really meritocratic, it would be based as much as possible on objective admission criteria. This is hard to do, because even when you pick something that is in a sense objective–like SAT scores–you can’t overcome the fact that wealthy parents can and will hire tutors to train their kids to artificially inflate their scores relative to the scores an equally bright, hard-working lower-class student can attain without all the expensive tutoring and practice tests.
Still, that’s nothing compared to the way that everything else that goes into college admissions–especially the litany of awards, clubs, and activities–tilts the game in favor of kids with parents who (1) know the unspoken rules of the game and (2) have cash to burn playing it. An expression I’ve heard before is that the Ivy League is basically a privilege-laundering racket. It has a facade of being meritocratic, but the game is rigged so that all it really does is perpetuate social class. “Legacy” admissions are just the tip of the iceberg in that regard.
What’s even more outrageous than the fiction of meritocratic admission to the Ivy League (or other elite, private schools) is the equally absurd fiction that students with Ivy League degrees have learned some objectively quantifiable skillset that students from, say, state schools have not. There’s no evidence for this.
So students from outside the social elite face double discrimination: first, because they don’t have an equal chance to get into the Ivy League, and second, because they can’t compete with Ivy League graduates on the job market. It doesn’t matter how hard you work or how much you learn: your State U degree is never going to stand out on a resume the way Harvard or Yale does.
There’s nothing meritocratic about that. And that’s the point. The Ivy League-based meritocracy is a lie.
So I empathize with criticisms of American meritocracy, but it’s not actually a meritocracy they’re criticizing. It’s a sham meritocracy that is, in fact, just a covert class system.
The problem is that if we blame the meritocracy and seek to circumvent it, we’re actually going to make things worse. I saw a WaPo headline that said “No one likes the SAT. It’s still the fairest thing about admissions.” And that’s basically what I’m saying: “objective” scores can be gamed, but not nearly as much as the qualitative stuff. If you got rid of the SAT in college admissions, you would make them less meritocratic and also less fair. At least with the SAT someone from outside the elite social classes has a chance to compete. Without that? Forget it.
Ideally, we should work to make our system a little more meritocratic by downplaying prestige signals like Ivy League degrees and emphasizing objective measurements more. But we’re never going to eradicate class entirely, and we shouldn’t go to radical measures to attempt it. Pretty soon, the medicine ends up worse than the disease if we go that route. That’s why you end up with absurd, totalitarian arguments that parents shouldn’t read to their children and that having an intact, loving, biological family is cheating. That way lies madness.
We should also stop pretending that our society is fully meritocratic. It’s not. And the denial is perverse. This is where Douthat was right on target:
[E]ven as it restratifies society, the meritocratic order also insists that everything its high-achievers have is justly earned… This spirit discourages inherited responsibility and cultural stewardship; it brushes away the disciplines of duty; it makes the past seem irrelevant, because everyone is supposed to come from the same nowhere and rule based on technique alone. As a consequence, meritocrats are often educated to be bad leaders, and bad people…
Like Douthat, I’m not calling for a return to WASP-y domination. (Also like Douthat, I’d be excluded from that club.) A diverse elite is better than a monocultural elite. But there’s one vital thing that the WASPy elite had going for it that any elite (and there’s always an elite) should reclaim:
the WASPs had at least one clear advantage over their presently-floundering successors: They knew who and what they were.
The earned income tax credit (EITC) provides substantial support to low- and moderate-income working parents, but very little support to workers without qualifying children (often called childless workers). Workers receive a credit equal to a percentage of their earnings up to a maximum credit. Both the credit rate and the maximum credit vary by family size, with larger credits available to families with more children. After the credit reaches its maximum, it remains flat until earnings reach the phaseout point. Thereafter, it declines with each additional dollar of income until no credit is available (figure 1).
By design, the EITC only benefits working families. Families with children receive a much larger credit than workers without qualifying children. (A qualifying child must meet requirements based on relationship, age, residency, and tax filing status.) In 2018, the maximum credit for families with one child is $3,461, while the maximum credit for families with three or more children is $6,431.
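The schedule described above (phase in at a fixed rate, plateau at the maximum credit, then phase out) is simple enough to sketch in code. The rates and thresholds below are illustrative placeholders, not the official IRS parameters; only the $3,461 one-child maximum comes from the text:

```python
def eitc(earnings, phase_in_rate, max_credit, phaseout_start, phaseout_rate):
    """Stylized EITC: phase in with earnings, plateau, then phase out to zero."""
    credit = min(earnings * phase_in_rate, max_credit)  # phase-in, capped at the maximum
    excess = max(0.0, earnings - phaseout_start)        # earnings past the phaseout point
    return max(0.0, credit - excess * phaseout_rate)    # declines until no credit remains

# Illustrative one-child parameters (placeholders apart from the $3,461 maximum):
# 34% phase-in, phaseout beginning at $18,660 at a 15.98% rate.
print(eitc(5_000, 0.34, 3_461, 18_660, 0.1598))   # still phasing in: 1700.0
print(eitc(15_000, 0.34, 3_461, 18_660, 0.1598))  # on the plateau: 3461.0
print(eitc(50_000, 0.34, 3_461, 18_660, 0.1598))  # fully phased out: 0.0
```

The “surtax” point from the quoted research falls out of this shape: in the phaseout range, every extra dollar of earnings costs `phaseout_rate` cents of credit.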
…Research shows that the EITC encourages single people and primary earners in married couples to work (Dickert, Houser, and Scholz 1995; Eissa and Liebman 1996; Meyer and Rosenbaum 2000, 2001). The credit, however, appears to have little effect on the number of hours they work once employed. Although the EITC phaseout could cause people to reduce their hours (because credits are lost for each additional dollar of earnings, which is effectively a surtax on earnings in the phaseout range), there is little empirical evidence of this happening (Meyer 2002).
The one group of people that may reduce hours of work in response to the EITC incentives is lower-earning spouses in a married couple (Eissa and Hoynes 2006). On balance, though, the increase in work resulting from the EITC dwarfs the decline in participation among second earners in married couples.
If the EITC were treated like earnings, it would have been the single most effective antipoverty program for working-age people, lifting about 5.8 million people out of poverty, including 3 million children (CBPP 2018).
The EITC is concentrated among the lowest earners, with almost all of the credit going to households in the bottom three quintiles of the income distribution (figure 2). (Each quintile contains 20 percent of the population, ranked by household income.) Very few households in the fourth quintile receive an EITC (fewer than 0.5 percent).
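The quintile bookkeeping in that parenthetical can be made concrete in a few lines (the incomes here are hypothetical):

```python
def quintiles(incomes):
    """Split a population into five equal-sized groups, ranked by income."""
    ranked = sorted(incomes)
    n = len(ranked)
    return [ranked[i * n // 5:(i + 1) * n // 5] for i in range(5)]

# Each quintile holds 20% of the (hypothetical) households, lowest earners first.
groups = quintiles(range(1000))
print([len(g) for g in groups])  # [200, 200, 200, 200, 200]
```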
Recent evidence supports this view of the EITC. From a brand new article in Contemporary Economic Policy:
First, the evidence suggests that longer-run effects of the EITC (the study’s working definition of “longer run” is 10 years; pg. 2) are to increase employment and to reduce poverty and public assistance, as long as we rely on national as well as state variation in EITC policy. Second, tighter welfare time limits also appear to reduce poverty and public assistance in the longer run. We also find some evidence that higher minimum wages, in the longer run, may lead to declines in poverty and the share of families on public assistance, whereas higher welfare benefits appear to have adverse longer-run effects, although the evidence on minimum wages and welfare benefits—and especially the evidence on minimum wages—is not robust to using only more recent data, nor to other changes. In our view, the most robust relationships we find are consistent with the EITC having beneficial longer-run impacts in terms of reducing poverty and public assistance, whereas there is essentially no evidence that more generous welfare delivers such longer-run benefits, and some evidence that more generous welfare has adverse longer-run effects on poverty and reliance on public assistance—especially with regard to time limits (pg. 21).
Every year, economist Mark Perry draws on Census Bureau reports to paint a picture of the demographics of inequality. Looking at 2018 data, he constructed the following table:
Once again, he concludes,
Household demographics, including the average number of earners per household and the marital status, age, and education of householders are all very highly correlated with American’s household income. Specifically, high-income households have a greater average number of income-earners than households in lower-income quintiles, and individuals in high-income households are far more likely than individuals in low-income households to be well-educated, married, working full-time, and in their prime earning years. In contrast, individuals in lower-income households are far more likely than their counterparts in higher-income households to be less-educated, working part-time, either very young (under 35 years) or very old (over 65 years), and living in single-parent or single households.
The good news about the Census Bureau is that the key demographic factors that explain differences in household income are not fixed over our lifetimes and are largely under our control (e.g., staying in school and graduating, getting and staying married, working full-time, etc.), which means that individuals and households are not destined to remain in a single income quintile forever. Fortunately, studies that track people over time find evidence of significant income mobility in America such that individuals and households move up and down the income quintiles over their lifetimes, as the key demographic variables highlighted above change, see related CD posts here, here and here. Those links highlight the research of social scientists Thomas Hirschl (Cornell) and Mark Rank (Washington University) showing that as a result of dynamic income mobility nearly 70% of Americans will be in the top income quintile for at least one year while almost one-third will be in the top quintile for ten years or more (see chart below).
What’s more, Perry points out elsewhere that the new data demonstrate that the middle class is shrinking…along with the lower class. Meanwhile, the percentage of high-income households has more than tripled since 1967:
In short, the percentage of middle and lower-income households has declined because they’ve been moving up.
According to a new report from the Institute for Family Studies and the Wheatley Institution, religion appears to be a net gain “in 11 countries in the Americas, Europe, and Oceania.” From the executive summary:
When it comes to relationship quality in heterosexual relationships, highly religious couples enjoy higher-quality relationships and more sexual satisfaction, compared to less/mixed religious couples and secular couples. For instance, women in highly religious relationships are about 50% more likely to report that they are strongly satisfied with their sexual relationship than their secular and less religious counterparts. Joint decision-making, however, is more common among men in shared secular relationships and women in highly religious relationships, compared to their peers in less/mixed religious couples.
When it comes to fertility, data from low-fertility countries in the Americas, East Asia, and Europe show that religion’s positive influence on fertility has become stronger in recent decades. Today, people ages 18-49 who attend religious services regularly have 0.27 more children than those who never, or practically never, attend. The report also indicates that marriage plays an important role in explaining religion’s continued positive influence on childbearing because religious men and women are more likely to marry compared to their more secular peers, and the married have more children than the unmarried.
When it comes to domestic violence, religious couples in heterosexual relationships do not have an advantage over secular couples or less/mixed religious couples. Measures of intimate partner violence (IPV)—which includes physical abuse, as well as sexual abuse, emotional abuse, and controlling behaviors—do not differ in a statistically significant way by religiosity. Slightly more than 20% of the men in our sample report perpetrating IPV, and a bit more than 20% of the women in our sample indicate that they have been victims of IPV in their relationship. Our results suggest, then, that religion is not protective against domestic violence for this sample of couples from the Americas, Europe, and Oceania. However, religion is not an increased risk factor for domestic violence in these countries, either.
The relationships between faith, feminism, and family outcomes are complex. The impact of gender ideology on the outcomes covered in this report, for instance, often varies by the religiosity of our respondents. When it comes to relationship quality, we find a J-Curve in overall relationship quality for women, such that women in shared secular, progressive relationships enjoy comparatively high levels of relationship quality, whereas women in the ideological and religious middle report lower levels of relationship quality, as do traditionalist women in secular relationships; but women in highly religious relationships, especially traditionalists, report the highest levels of relationship quality. For domestic violence, we find that progressive women in secular relationships report comparatively low levels of IPV compared to conservative women in less/mixed religious relationships. In sum, the impact of gender ideology on contemporary family life may vary a great deal by whether or not a couple is highly religious, nominally religious, or secular.
There’s also some useful data on family prayer and worldwide family structure, socioeconomic conditions, family satisfaction, and attitudes and norms. Check it out.
What would happen if foreign direct investment (FDI) simply disappeared? Or, more specifically, what would “a hypothetical world without outward and inward FDI from and to low- and lower-middle-income countries” look like? A brand-new study tries to quantify this hypothetical. The authors find,
On average, the gains from FDI in the poorer countries in the world amount to 7% of world’s trade in 2011, the year of our counterfactual analysis. Second, all countries lose from the counterfactual elimination of FDI in the poorer countries. Third, the impact is heterogeneous. Poorer countries lose the most, but the impact varies widely even within this group – some lose over 50% and some very little. The impact on countries in the rest of the world is significant as well. Some countries lose a lot (e.g. Luxembourg, Singapore, and Ireland) while others (such as India, Ecuador, and Dominican Republic) lose less. Pakistan and Sri Lanka actually see an increase in their total exports due to the elimination of FDI.
Figure 1: Percentage change in total exports from eliminating outward and inward FDI to and from low- and lower-middle-income countries
On average, the gains from FDI amount to 6% of world’s welfare in 2011. Further, all countries in the world have benefited from FDI, but the effects are very heterogeneous. The directly affected low- and lower-middle-income countries see welfare changes up to over 50% (Morocco and Nigeria), while some of the remaining 68 countries, such as Ecuador, Turkmenistan, and Dominican Republic, are hardly affected. A higher country-specific production share of FDI leads to larger welfare losses, all else equal. Intuitively, a larger importance of FDI in production leads to larger welfare losses when restricting FDI. A larger net log FDI position leads to larger welfare losses. Intuitively, if a country has more inward than outward FDI, restricting FDI will lead to larger welfare losses, as FDI is complementary to other production factors and therefore overall income increases more than FDI payments.
Figure 2: Welfare effects of eliminating outward and inward FDI to and from low- and lower-middle-income countries (%)
The authors conclude, “Overall, the analysis reveals that FDI is indeed an important component of the modern world economic system. The results suggest positive payoffs to policies designed to facilitate FDI, particularly those concerning protection of intellectual property.”
Elsewhere, a new study compares occupational licensing between immigrants and natives. From the paper:
We use two sources of data—the Current Population Survey (CPS) and the Survey of Income and Program Participation (SIPP)—to explore the differences in occupational licensing between natives and immigrants. Each dataset provides unique advantages, allowing us to paint a clearer picture of how occupational licensing differs between natives and immigrants than would be possible by using either dataset alone.
Though the CPS and SIPP differ in some key ways, where comparable, our results are quite similar between the two datasets. We find that immigrants are significantly less likely to have an occupational license than natives; this gap is larger for men than for women and is especially large for the highest education level. The wage premium from having a license may not differ between natives and immigrants when controlling for English language ability, suggesting that though immigrants are less likely to have a license, they seem to benefit at least as much as natives from having one. Licensed workers tend to work more hours per week than otherwise similar unlicensed workers, so the wage premium understates the earnings premium.
Using the CPS, we find that the native/immigrant licensing gap declines with years since migration, consistent with immigrants assimilating toward natives. We also find large differences in licensing rates by region of origin; in particular, women from the Caribbean, Southeast Asia, and Africa have a higher probability of having a license than otherwise similar natives.
Using the SIPP, we find that a lack of English language proficiency lowers the probability that an immigrant has a license, even when controlling for other individual characteristics such as education level. Utilizing the richer set of occupational licensing questions available in the SIPP, we find no evidence to suggest that license characteristics differ between natives and immigrants, and thus we find no evidence that natives and immigrants are acquiring different types of licenses.
Our results suggest that occupational licensing disproportionately affects immigrants, especially male immigrants, those lacking English proficiency, and the most educated group. Indeed, insofar as occupational licensing helps to protect incumbent (largely native) workers in an occupation from competition, it is unsurprising that immigrants are particularly impacted (pg. 18-19).
They also find, “Skill-based immigration would favor immigrants with high levels of education. Our results indicate that it is precisely this group that exhibits the largest licensing attainment gap with natives. Increasing the flow of immigrants from this education level may lead to substantial occupational mismatch for this group of immigrants if they face difficulty in acquiring licenses needed to work in their pre-migration occupations” (pg. 20).
Regressive regulations like this are low-hanging fruit that can easily be changed.
King’s College political theorist Adam Tebble was recently interviewed about his latest paper on epistemic liberalism and open borders. Explaining epistemic liberalism, he says,
Epistemic liberalism is a tradition of thought that places questions about knowledge, complexity and social learning at the heart of debates in political philosophy, initially with regard to debates about economic organisation and distributive justice. Key thinkers in this tradition are Karl Popper, Michael Polanyi and of course Austrian School economists such as Friedrich Hayek, although there is also something to be said for including David Hume and John Stuart Mill on the list, given what they have to say about justice in extended or ‘large’ societies and about our liberty to engage in ‘experiments of living’ respectively.
I pick up where these authors, and particularly Hayek, leave off by claiming that epistemic considerations are not just crucial to debates about distributive justice, but also to more fundamental questions about the status of the background norms and conceptions of the good that inform the economic choices that we, either as self-interested individuals or as other-regarding pursuers of collective projects, may make. Thus, in Epistemic liberalism: a defence I seek to build upon Hayek’s claim about the existence of an economic knowledge problem – where the knowledge relevant to our deciding what to do with resources is for a variety of reasons uncentralisable – to claim that there also exists a more profound cultural knowledge problem.
How does this relate to open borders?
In contrast to much of the literature on migration and justice, and especially in contrast to that which defends a more liberal position, the argument I make in favour of more open borders focuses not upon the interests of immigrants or of the already-resident, but upon those whom migrants leave behind in their countries of origin. In this sense my argument represents something of a breakthrough, for it seeks to claim the interests of those left behind for those arguing in favour of the more liberal approach, rather than leaving them to be appealed to in arguments against it, most notably by writers on brain-drain. My argument, then, can be read as a response to brain-drain critiques of more open borders and to scepticism about freedom of movement in general.
There is some very interesting work in this area, particularly on social remittances and their effects by authors such as Kathleen Newland and Peggy Levitt. Both their work and studies by others in development economics do show how, through visits home, via regular communication, or both, immigrants also remit the values of their adopted nations to those they have left behind. Indeed, there is evidence to suggest that not only the relatives of immigrants, but those who live near to them, are also impacted by this phenomenon.
What’s more, “open borders not only enable migrants to assist those left behind in ways that alternative cross border resource transfer mechanisms cannot, but also assist governments to do the same, via a process of what I call ‘state signalling’.”