DR Editor in GBR: The Economic and Moral Case for Good Management


I’m excited to announce that my article “The Great Escape from Global Poverty: The Economic and Moral Case for Good Management” was published in the latest issue of Pepperdine University’s Graziadio Business Review. From the introduction:

Poverty has been a moral issue at the center of philosophical, theological, and social thought for millennia. However, over the last two centuries, much of the world has experienced what Nobel economist Angus Deaton calls “the great escape” from economic deprivation. As a 2013 issue of The Economist explained, one of the main targets of the United Nations Millennium Development Goals (MDG) was to halve extreme poverty between 1990 and 2015. That goal was accomplished years ahead of schedule and the credit largely lies with one thing: “The MDGs may have helped marginally, by creating a yardstick for measuring progress, and by focusing minds on the evil of poverty. Most of the credit, however, must go to capitalism and free trade, for they enable economies to grow—and it was growth, principally, that has eased destitution.”

If this economic narrative is to be believed, then managing well is even more important in the fight against poverty. Research over the last decade finds that management—the day-in, day-out processes of everyday business—matters. As this article will show, economic growth has lifted billions of people worldwide out of extreme poverty via pro-growth policies (especially trade, property rights, and moderate government size). Good management, in turn, plays a significant part in this growth by increasing total factor productivity (TFP) and could therefore be considered a pro-growth policy. In short, those in management positions have the potential to improve the well-being of the global poor by learning to manage well.
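Since the argument leans on total factor productivity, here's a minimal sketch of how TFP is usually backed out as the Solow residual in growth accounting. All of the numbers below are illustrative placeholders, not figures from the article:

```python
# Growth accounting under Cobb-Douglas output: Y = A * K^alpha * L^(1 - alpha).
# TFP ("A") growth is the residual left over after subtracting the contributions
# of capital and labor from output growth. All inputs here are hypothetical.

alpha = 0.33        # capital's share of income (a standard rough value)
g_output = 0.040    # 4.0% annual output growth (made up)
g_capital = 0.050   # 5.0% annual capital growth (made up)
g_labor = 0.015     # 1.5% annual labor growth (made up)

g_tfp = g_output - alpha * g_capital - (1 - alpha) * g_labor
print(f"Implied TFP growth: {g_tfp:.2%}")  # roughly 1.3% per year
```

The point of the exercise: whatever growth the measured inputs can't account for gets attributed to how productively those inputs are combined, which is exactly where management quality enters.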

Check it out.

Silence During the Sacrament

This post is part of the General Conference Odyssey.

We have lost the art of fine distinctions, of exceptions, and of subtlety. We live in a world of brutally simplistic extremes. And the funny thing is, a lot of us think that these are the days of nuance and sophistication. So, here’s what a reactionary, ultra-conservative Mormonism had to say about family structure in the 1970s:

Families usually consist of a father, mother, and children, but this is not always the case. Sometimes there is not a mother or a father, and sometimes no children. Often there is one person living alone. In years gone by, our family was larger, but now it consists of only two.

There is no reasonable doubt that Elder Hunter had in mind a single archetype of the family: mom, dad, kids. There is also no reasonable doubt that he well understood the gap between the Platonic ideal of the Family and the mortal reality of families.

Back then, we could walk and chew gum at the same time, apparently. I miss those days. For an example, here is a paragraph-sized sermon from the same talk:

There was quiet meditation, the silence broken only by the voice of a tiny babe whose mother quickly held him close. Anything that breaks the silence during this sacred ordinance seems out of place; but surely the sound of a little one would not displease the Lord. He, too, had been cradled by a loving mother at the beginning of a mortal life that commenced in Bethlehem and ended on the cross of Calvary.

The capacity to understand a general principle—that we should be quiet during the Sacrament—and also fully appreciate a valid exception to it—a baby’s cries—without detracting either from the generality of the principle or the validity of the exception is a capacity that is very much felt through its absence.

Check out the other posts from the General Conference Odyssey this week and join our Facebook group to follow along!

The “Everyone is Racist” Quagmire

This is a swamp in southern Louisiana. Technically a swamp is not a mire, but it turns out that pictures of actual mires are pretty, and that’s not what I’m going for. CC BY-SA 3.0

UPDATE: Although this post was published on August 24, 2017, it was written weeks ago, notably before Charlottesville. I’ll be writing a follow-up in light of recent events in the near future.

Despite the fact that overt, explicit racism is widely rejected and condemned within the United States, racially disparate outcomes remain endemic. One particularly blatant example of this is the racially unequal justice system we have in this country. In super-short terms, blacks and whites use illegal drugs at about the same rates, but black people are more likely to be arrested, to be charged with more serious crimes, and to serve longer sentences than whites.

Contemporary definitions of racism–of which there are basically two–attempt to explain why America continues to be a place with racially unfair outcomes even though overt racism has long since been marginalized. The first contemporary definition of racism is systemic racism. According to this definition, prejudice is a feeling of animus against a person/people based on their race, discrimination is unequal treatment stemming from prejudice, and racism is an attribute of social systems and institutions where prejudice and discrimination have become ingrained. Accordingly, America can be a white supremacy without any white supremacists, because the overt prejudices of the past have been absorbed into our institutions (like the criminal justice system) and have taken on a life of their own. If the system is racially biased, then even racially unbiased people are not enough to get racial justice. It would be like playing a game with loaded dice. Even if the other players are 100% honest, their dice are still loaded, and so the outcome is still not fair. If you accuse them of cheating, they will be defensive because–in a sense–they are playing the game honestly. But as long as their dice are loaded (and yours are not), the game is still rigged.
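The loaded-dice analogy is easy to make concrete. Here's a toy simulation, with invented bias weights, showing that two players can both follow the rules and still get a systematically unfair outcome:

```python
import random

# Both players play honestly, but one die is loaded. The invented weights
# below make high rolls more likely on the loaded die.
fair_die = [1, 2, 3, 4, 5, 6]
loaded_weights = [1, 1, 1, 1, 2, 3]  # hypothetical bias toward 5s and 6s

def fair_player_wins(rng):
    a = rng.choice(fair_die)                              # honest roll, fair die
    b = rng.choices(fair_die, weights=loaded_weights)[0]  # honest roll, loaded die
    return a > b  # ties don't count as a win

rng = random.Random(0)
rounds = 100_000
wins = sum(fair_player_wins(rng) for _ in range(rounds))
print(f"Fair-die player wins {wins / rounds:.1%} of rounds")  # ~30%, not 50%
```

Nobody in the simulation cheats on any given roll; the unfairness lives entirely in the equipment.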

Second, we have the idea of implicit racism. This is the idea that even people who really and sincerely believe that they are not racist may harbor unconscious racial prejudice. This is based on implicit-association tests and the theory that tribalism is basically hard-coded in human beings. These two findings–the empirical results of implicit-association tests and theories about the innateness of human tribalism–are not necessarily connected, but they come together in a phrase you’ve almost certainly heard by now: “everyone is racist.”

Thus, racial injustice can remain without overt racism because (in the case of systemic racism) racism is now located inside of institutions instead of inside of people and/or because (in the case of implicit racism) racism is now located inside people’s unconscious minds instead of their conscious minds.

So far, so good. Both of the new definitions (which are not mutually exclusive) provide promising avenues to understand ongoing racial disparity in the United States and seek to redress it. But this is where we run into a serious problem. As promising as these avenues might be, they certainly take us onto more ambiguous and complex territory than civil rights struggles of the past. The more overt racial injustice is, the simpler it is. Slavery and Jim Crow are not nuanced issues. But now we’re talking about how to fight racism in a world where nobody is racist anymore (at least not consciously). And just when things start to get tricky, the problem of perverse incentives rears its ugly head.

Perverse incentives are “incentives that [have] an unintended and undesirable result which is contrary to the interests of the incentive makers.” In the fight against racial injustice there are basically two kinds of perverse incentive: institutional and personal.

Institutional perverse incentives arise whenever you have an institution with a mission statement to eliminate something. The problem is that if the institution ever truly succeeds then it is essentially committing suicide and everyone who works for that institution has to go find not only a new job, but a new calling and sense of identity.

If conspiracy theories are your thing, then it’s not hard to spin lots of them based on this insight. Instead of fighting poverty, maybe government agencies perpetuate poverty in order to enlarge their budgets, expand their workforces, and enhance their prestige. But you don’t have to go that far. In practice, it’s far more likely that an institution dedicated to ending something will have two simple characteristics. First, it will exaggerate the threat. Second, it will be studiously uninterested in finding truly effective policies to combat the threat.

An agency that does this will successfully satisfy the economic and psychological self-interest of the people who work for it. Economically, the bigger the threat, the bigger the institution to oppose it. This is true regardless of whether we’re talking about a government agency arguing for a bigger slice of taxpayer revenue or a non-profit appealing for donations. Psychologically, the bigger the threat, the easier it is for the people who work in the institution to feel good about themselves and not think too hard about whether or not they are really picking the most effective tools to eliminate whatever they’re supposed to be eliminating. In short: institutions that oppose a thing will gradually come to be hysterical and ineffectual, because that’s in the best interest of the people who run those institutions.

This may sound all very hypothetical, so let me give you a specific example: the Southern Poverty Law Center. Politico Magazine recently came out with a very long article titled “Has a Civil Rights Stalwart Lost Its Way?” which makes a lot of sense when you keep the perils of perverse institutional incentives in mind as you’re reading it. The article points out that the SPLC has “built itself into a civil rights behemoth with a glossy headquarters and a nine-figure endowment, inviting charges that it oversells the threats posed by Klansmen and neo-Nazis to keep donations flowing in from wealthy liberals.” It also notes that the election of Trump–while ostensibly bad for anti-racism efforts in the US–is unquestionably great for the SPLC, “giving the group the kind of potent foil it hasn’t had since the Klan.” So no, this isn’t just hypothetical theorizing. It’s already happening to one of America’s most storied anti-racism institutions.

The second set of perverse incentives is personal and basically class-based. Both the systemic and implicit definitions of racism evolved on elite college campuses, and the anti-racist theories based on these definitions are correspondingly unlikely to reflect the interests and concerns of the genuinely underprivileged. They may be about the underprivileged, but they are adapted to–and serve the interests of–elites.

Consider first the case of a hypothetical young black man with a solidly middle- or upper-class background. Henry Louis Gates, Jr. observed that “the most ironic outcome of the Civil Rights movement has been the creation of a new black middle class which is increasingly separate from the black underclass,” and a 2007 Pew survey found that nearly 40 percent of blacks perceived “a widening gulf between the values of middle class and poor blacks,” to the point that “blacks can no longer be thought of as a single race.” Thus, this young man faces a sense of double alienation: alienation from lower-class blacks and alienation from upper-class whites. Placing emphasis exclusively on the racial component of social analysis obscures the gulf between lower- and upper-class blacks and offers a sense of racial solidarity and wholeness. At the same time, it denies the actual privilege enjoyed by this person (after all, his neighborhood is not crime ridden and his schools are high-functioning) and therefore eases any sense of conflict or guilt at his comparative fortune.

The case is simpler for a hypothetical young white man with a privileged background, but (since this person enjoys even more advantages) the need for some kind of absolution is even more acute. Propounding the new definitions of racism allows low-cost access to that absolution. For an example of how this works, consider how the hilarious blog-turned-book Stuff White People Like discussed white people’s love of “Awareness.” The entry notes that “an interesting fact about white people is that they firmly believe that all of the world’s problems can be solved through ‘awareness.’ Meaning the process of making other people aware of problems, and then magically someone else like the government will fix it.” It goes on: “This belief allows them to feel that sweet self-satisfaction without actually having to solve anything or face any difficult challenges.” Finally:

What makes this even more appealing for white people is that you can raise “awareness” through expensive dinners, parties, marathons, selling t-shirts, fashion shows, concerts, eating at restaurants and bracelets.  In other words, white people just have to keep doing stuff they like, EXCEPT now they can feel better about making a difference.

The apotheosis of this awareness-raising fad is the ritual of “privilege-checking” in which whites, men, heterosexuals, and the cisgendered publicly acknowledge their privilege for the sake of feeling good about publicly acknowledging their privilege. In biting commentary for the Daily Beast, John McWhorter noted:

The White Privilege 101 course seems almost designed to turn black people’s minds from what political activism actually entails. For example, it’s a safe bet that most black people are more interested in there being adequate public transportation from their neighborhood to where they need to work than that white people attend encounter group sessions where they learn how lucky they are to have cars. It’s a safe bet that most black people are more interested in whether their kids learn anything at their school than whether white people are reminded that their kids probably go to a better school.

So the complexity of racial injustice in the United States calls for new and nuanced definitions and theories of racism at precisely the time when–thanks to past successes–the temptation to exaggerate racism and ignore effective anti-racism policies is also rising. The result? You might have a Facebook friend who will pontificate about how “everyone is racist” one day, and then post an image like this one the next:

So, you know, “we’re all racist” and also “if you’re racist, you deserve to die.” No mixed messages there, or anything.

Speaking of implicit bias, by the way, the actual results of Project Implicit’s testing are that nearly a third of white people have no racial preference or even a bias in favor of blacks. Once again, this doesn’t prove that racial justice has arrived and we can all go home. That’s absolutely not my point. It’s just another illustration that simplistic narratives about white supremacy don’t work as well in a post-slavery, post-Jim Crow world. The serious problems that remain are not as brutally self-evident as white people explicitly stating that the white race is superior.

Just to toss another complicating factor out there, researchers compared implicit bias to actual outcomes in undergraduate college admissions and found that despite the presence of anti-black implicit bias, the actual results of the admission process were skewed in favor of blacks rather than against them:

When making multiple admissions decisions for an academic honor society, participants from undergraduate and online samples had a more relaxed acceptance criterion for Black than White candidates, even though participants possessed implicit and explicit preferences for Whites over Blacks. This pro-Black criterion bias persisted among subsamples that wanted to be unbiased and believed they were unbiased. It also persisted even when participants were given warning of the bias or incentives to perform accurately.

If implicit bias can coexist with outcomes that are biased in the opposite direction, then what exactly are we measuring when we measure implicit bias, anyway?
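Here is one toy way to see how that coexistence can happen. Every parameter below is invented; the sketch just shows that a biased perception and an opposite-leaning acceptance criterion can pull in different directions, which is the pattern the study describes:

```python
import random

# An evaluator "reads" group B candidates as slightly worse (a perception
# penalty), but applies a lower acceptance threshold to group B. Outcomes
# end up favoring group B anyway. All numbers are hypothetical.

rng = random.Random(42)
N = 100_000
PERCEPTION_PENALTY = 0.2           # implicit bias: B's scores read 0.2 lower
THRESHOLD = {"A": 1.0, "B": 0.5}   # relaxed criterion for group B

def accepted(group):
    true_quality = rng.gauss(0.0, 1.0)  # same quality distribution for both groups
    perceived = true_quality - (PERCEPTION_PENALTY if group == "B" else 0.0)
    return perceived >= THRESHOLD[group]

rate_a = sum(accepted("A") for _ in range(N)) / N
rate_b = sum(accepted("B") for _ in range(N)) / N
print(f"Group A acceptance: {rate_a:.1%}")  # ~16%
print(f"Group B acceptance: {rate_b:.1%}")  # ~24%, despite the perception penalty
```

Measuring the perception penalty alone tells you nothing about which way the final decisions actually lean.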

I believe that both of the new definitions of racism have merit. The idea that institutional inertia can perpetuate racist outcomes long after the original racial animus has disappeared is reasonable theoretically and certainly seems to explain (in part, at least) the racially unequal outcomes in our criminal justice system. The idea that people divide into tribes and treat the outgroup more poorly–and that racial categories make for particularly potent tribal groups–is equally compelling. But the temptation to over-simplify, exaggerate, and then co-opt racial analysis for institutional and personal benefit is a genuine threat. As long as it’s possible to cash in on anti-racism–financially and politically–our progress towards racial justice will be impeded.

I am, generally speaking, a conservative. I don’t, by and large, share the worldview or policy prescriptions of those on the American left. But I do care about racial justice in the United States. I believe that the current discussion–or lack thereof–is significantly hampered by the temptation to profit from it. And I figure hey: maybe by speaking up I can contribute in a small way to shifting the conversation on race away from the left-right political axis and all the toxicity and perverse incentives that come with it.

More Economic Illiteracy from Journalists

I’ve lamented this before. Funnily enough, my earlier complaint was largely about the same source: The Guardian. A recent piece suggests that “neoliberalism” is responsible for, in the words of Forbes’ Tim Worstall, the destruction of “everything that is good and holy about society.” This is based on a new IMF study that reviews the following:

Our assessment of the agenda is confined to the effects of two policies: removing restrictions on the movement of capital across a country’s borders (so-called capital account liberalization); and fiscal consolidation, sometimes called “austerity,” which is shorthand for policies to reduce fiscal deficits and debt levels. An assessment of these specific policies (rather than the broad neoliberal agenda) reaches three disquieting conclusions:

• The benefits in terms of increased growth seem fairly difficult to establish when looking at a broad group of countries.

• The costs in terms of increased inequality are prominent. Such costs epitomize the trade-off between the growth and equity effects of some aspects of the neoliberal agenda.

• Increased inequality in turn hurts the level and sustainability of growth. Even if growth is the sole or main purpose of the neoliberal agenda, advocates of that agenda still need to pay attention to the distributional effects.

In other words, the study’s worries are confined to financial openness and austerity. The Guardian, however, describes it this way:

Three senior economists at the IMF, an organisation not known for its incaution, published a paper questioning the benefits of neoliberalism. In so doing, they helped put to rest the idea that the word is nothing more than a political slur, or a term without any analytic power. The paper gently called out a “neoliberal agenda” for pushing deregulation on economies around the world, for forcing open national markets to trade and capital, and for demanding that governments shrink themselves via austerity or privatisation. The authors cited statistical evidence for the spread of neoliberal policies since 1980, and their correlation with anaemic growth, boom-and-bust cycles and inequality.

Unfortunately for the author, that’s not quite accurate. The IMF researchers actually say,

There is much to cheer in the neoliberal agenda. The expansion of global trade has rescued millions from abject poverty. Foreign direct investment has often been a way to transfer technology and know-how to developing economies. Privatization of state-owned enterprises has in many instances led to more efficient provision of services and lowered the fiscal burden on governments.

Perhaps the Guardian’s author needs to be reminded that the IMF came out against protectionism last year in the midst of anti-trade rhetoric from politicians. Similarly, it released a report around the same time extolling the benefits of trade. Furthermore, the new IMF study qualifies its concerns:

The link between financial openness and economic growth is complex. Some capital inflows, such as foreign direct investment—which may include a transfer of technology or human capital—do seem to boost long-term growth. But the impact of other flows—such as portfolio investment and banking and especially hot, or speculative, debt inflows—seem neither to boost growth nor allow the country to better share risks with its trading partners (Dell’Ariccia and others, 2008; Ostry, Prati, and Spilimbergo, 2009). This suggests that the growth and risk-sharing benefits of capital flows depend on which type of flow is being considered; it may also depend on the nature of supporting institutions and policies.

…In sum, the benefits of some policies that are an important part of the neoliberal agenda appear to have been somewhat overplayed. In the case of financial openness, some capital flows, such as foreign direct investment, do appear to confer the benefits claimed for them. But for others, particularly short-term capital flows, the benefits to growth are difficult to reap, whereas the risks, in terms of greater volatility and increased risk of crisis, loom large.

This doesn’t strike me as a denunciation of “neoliberalism.” I’m going to follow Worstall’s lead on this one and refer to Max Roser’s work.

Roser explains,

The distribution of incomes is shown at 3 points in time:

  • In 1800 only a few countries achieved economic growth. The chart shows that the majority of the world lived in poverty with an income similar to that of the poorest countries today. Our entry on global extreme poverty shows that at the beginning of the 19th century the huge majority – more than 80% – of the world lived in material conditions that we would refer to as extreme poverty today.

  • In the year 1975, 175 years later, the world has changed – it became very unequal. The world income distribution has become bimodal. It has the two-humped shape of a camel. One hump below the international poverty line and a second hump at considerably higher incomes – the world was divided into a poor developing world and a more than 10-times richer developed world.

  • Over the following 4 decades the world income distribution has again changed dramatically. The poorer countries, especially in South-East Asia, have caught up. The two-humped “camel shape” has changed into a one-humped “dromedary shape”. World income inequality has declined. And not only is the world more equal again, the distribution has also shifted to the right – the incomes of the world’s poorest citizens have increased and poverty has fallen faster than ever before in human history.

That’s right: global inequality is decreasing. The World Bank reports,

Globally, there has been a long-term secular rise in interpersonal inequality. Figure 4.3 shows the global Gini index since 1820, when relevant data first became available. The industrial revolution led to a worldwide divergence in incomes across countries, as today’s advanced economies began pulling away from others. However, the figure also shows that, in the late 1980s and early 1990s, the global Gini index began to fall. This coincided with a period of rapid globalization and substantial growth in populous poor countries, such as China and India.

…Global inequality has diminished for the first time since the industrial revolution. The global Gini index rose steadily by around 15 Gini points between the 1820s and the early 1990s, but has declined since then (see figure 4.3). While the various methodologies and inequality measures show disagreement over the precise timing and magnitude of the decline, the decline since the middle of the last decade is confirmed across multiple sources and appears robust. The estimates presented in figure 4.5 show a narrowing in global inequality between 1988 and 2013. The Gini index of the global distribution (represented by the blue line) fell from 69.7 in 1988 to 62.5 in 2013, most markedly since 2008 (when the global Gini index was 66.8). Additional exercises confirm that these results are reasonably robust, despite the errors to which the data are typically subject (pg. 76, 81).
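For anyone who wants to see what sits behind a number like “a Gini of 62.5,” here is a minimal sketch of how a Gini coefficient is computed from a list of incomes. The World Bank reports it on a 0-100 scale; the sample incomes below are invented:

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference:
    G = (sum of |x_i - x_j| over all pairs) / (2 * n^2 * mean)."""
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    # For a sorted list the pairwise sum collapses to a single pass:
    # sum over all pairs |x_i - x_j| = 2 * sum_i (2i - n + 1) * x_i (0-indexed),
    # so G = total / (n^2 * mean).
    total = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return total / (n * n * mean)

print(gini([10, 10, 10, 10]))  # 0.0   (perfect equality)
print(gini([1, 2, 3, 10]))     # ~0.44 (one person holds most of the income)
print(gini([0, 0, 0, 100]))    # 0.75  (extreme concentration)
```

A falling global Gini, as in the World Bank figures above, means the world’s income distribution is moving toward the equal end of that scale.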

Harvard’s Andrei Shleifer has shown that between 1980 and 2005, world per capita income grew about 2% per year. During these 2.5 decades, serious hindrances on economic freedom declined, including the world median inflation rate, the population-weighted world average of top marginal income tax rates, and the world average tariff rates. “In the Age of Milton Friedman,” summarizes Shleifer, “the world economy expanded greatly, the quality of life improved sharply for billions of people, and dire poverty was substantially scaled back. All this while the world embraced free market reforms” (pg. 126).
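That “about 2% per year” figure is worth compounding out. This is plain arithmetic on the single number quoted above; nothing else is assumed:

```python
# Compound 2% annual per capita income growth over the 25 years from 1980 to 2005.
growth_rate = 0.02
years = 25
multiplier = (1 + growth_rate) ** years
print(f"Income multiplier after {years} years: {multiplier:.2f}x")  # ~1.64x
```

In other words, steady 2 percent growth leaves the average person roughly 64 percent richer over the period Shleifer describes.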

Go away, Guardian.

Integrity of our Leaders

This post is part of the General Conference Odyssey.

As I read the General Conference talks, there are a couple of pet issues I keep in the back of my mind that I’m interested in learning more about. One of those, and this might be a little bit of an odd one, is the question of how Mormons should vote. I have a hunch that we worry more than we should about politics and ideology and not nearly enough about the character of our leaders.

I get that it’s not easy to get an accurate feel for a person’s moral character from afar. It’s not like we know public figures the way we know the people of our daily lives.  But then again, we don’t always know the people in our daily lives as well as we think we do either.

So, while an accurate assessment of a politician’s moral character might be impossible on a case-by-case basis, I do think that we ought to have pretty high standards for the behavior of our elected officials, and be extremely unforgiving when they fail to live up to those standards. Forgiveness is great for the people in your life, but turning a blind eye to corruption in our leaders does nothing but foster a corrupt environment that brings out the worst in the people who have the most power.

At least, that’s the hunch. And I feel like it’s something I’ve picked up from Mormon leaders. Is it? Well, yeah. My list of quotes to support this notion keeps growing as I read these talks, and Elder Tanner provided yet another one in his talk from the Saturday morning session of this General Conference:

We need to be governed by men and women who are undivided in honorable purpose, whose votes and decisions are not for sale to the highest bidder. We need as our elected and appointed officials those whose characters are unsullied, whose lives are morally clean and open, who are not devious, selfish, or weak. We need men and women of courage and honest convictions, who will stand always ready to be counted for their integrity and not compromise for expediency, lust for power, or greed; and we need a people who will appreciate and support representatives of this caliber.

Being cynical about the moral caliber of our leaders is trite and counterproductive. Accurately gauging moral caliber from afar might be hard, but expressing intolerance for outright corruption at the ballot box isn’t nearly as difficult. We can do that. And we should do that.

The reality is that a lot of the questions people fight about the most are extremely difficult policy questions where the answer is unclear and about which good people can disagree. I think we have a lot of room for mistakes and errors and experiments in most of our policies.

But I don’t think we have anywhere near as much room for error when it comes to the quality of our leaders.

Check out the other posts from the General Conference Odyssey this week and join our Facebook group to follow along!

Increasing Alcoholism: A Follow-Up

I posted an article a week or so ago on a new study claiming a rise in alcoholism. The study has been met with some major criticism. From Vox:

some researchers are pushing back. They argue that the data used in the study is based on a federal survey [NESARC] that underwent major methodological changes between 2001-’02 and 2012-’13 — meaning the increase in alcoholism rates could be entirely explained just by differences in how the survey was carried out between the two time periods. And they point out that the study’s conclusions are sharply contradicted by another major federal survey…That survey has actually found a decrease in alcohol use disorder from 2002 to 2013: In 2002, the percent of Americans 12 and older who qualified as having alcohol use disorder was 7.7 percent. In 2013, that dropped to 6.6 percent.

One key difference is the NESARC used data of people 18 years and older, while NSDUH used data of people 12 years and older. But even if you isolate older groups in NSDUH, the rates of alcoholism still dropped or remained relatively flat — certainly not the big rise the NESARC reported.

Now, the NSDUH isn’t perfect. For one, it surveys households — so it misses imprisoned and homeless populations, which are fairly big segments of the population and likely to have higher rates of drug use. But NESARC also shares these limitations, so it doesn’t explain the difference seen in the surveys.

Here are some of the major changes to the NESARC:

  • The NESARC changed some questions from wave to wave, which could lead survey takers to respond differently.
  • In the 2001-’02 wave, NESARC respondents were not given monetary rewards. In the 2012-’13 wave, they were. That could have incentivized different people to respond.
  • No biological samples were collected in the first wave, while saliva samples were collected in the second. What’s more, respondents were notified of this at the start of the survey — which could have led them to respond differently, since they knew they’d be tested for their drug use.
  • Census Bureau workers were used for the 2001-’02 survey, but private workers were used for the 2012-’13 survey. That could lead to big differences: As Grucza told me, “Some researchers speculate that using government employees might suppress reporting of socially undesirable behaviors.”

The article continues,

Researchers from SAMHSA told me that they would caution against trying to use the different waves of NESARC to gauge trends.

“Given these points, we would strongly caution against using two points in time as an indicator in trend, especially when the data for these two points in time were collected using very different methods and do not appear to be comparable,” SAMHSA researchers wrote in an email. “We would encourage the consideration of data from multiple sources and more than two time points, in order to paint a more complete and accurate portrayal of substance use and substance use disorder in the nation.”

In short, it looks like the JAMA Psychiatry study was based on some fairly faulty data. The study’s lead author did not take kindly to that criticism. From Vox again:

When I asked about these problems surrounding the study, lead author Bridget Grant, with NIAAA, shot back by email: “There were no changes in NESARC methodology between waves and NSDUH folks know nothing about the NESARC. Please do not contact me again as I don’t know NSDUH methodology and would not be so presumptuous to believe I did.”

But based on SAMHSA’s and Grucza’s separate reviews of NESARC, its methodology did change.

When I pressed on this, Grant again responded, “Please do NOT contact me again.”

After this article was published, Grant confirmed NESARC went through some methodological changes between 2001-’02 and 2012-’13. But she argued that there’s no evidence such changes would have a significant impact on the results.

It concludes,

None of that means America doesn’t have an alcohol problem. Between 2001 and 2015, the number of alcohol-induced deaths (those that involve direct health complications from alcohol, like liver cirrhosis) rose from about 20,000 to more than 33,000. Before the latest increases, an analysis of data from 2006 to 2010 by the Centers for Disease Control and Prevention (CDC) already estimated that alcohol is linked to 88,000 deaths a year — more than all drug overdose deaths combined.

And another study found that rates of heavy drinking and binge drinking increased in most US counties from 2005 to 2012, even as the percentage of people who drink any alcohol has remained relatively flat.

But for now, it’s hard to say if a massive increase in alcohol use disorder is behind the negative trends — because the evidence for that just isn’t reliable.

Migration and Terrorism


A new study examines the link between immigrants and terrorism:

In our recent work (Dreher et al. 2017) we provide a detailed analysis of how the number of foreigners living in a country has affected the number of terrorist attacks made by foreigners on citizens of their host countries. According to the raw data, in OECD countries between 1980 and 2010, for every million foreigners in the population, 0.8 terror attacks are committed per year, per country (there were 662 transnational attacks). While it is obvious that the number of attacks increases with the number of people living in a country (after all, with no foreigners in a country, no foreigners would commit any attacks), on average these numbers amount to about one attack by foreigners per year and host country, and 1.3 people die from these attacks in the average country and year.

Transnational terror is dwarfed in absolute numbers by the number of attacks made by the domestic population. In the 20 OECD countries that our sample covers, there were 2,740 attacks arising from the domestic population. In relative terms though, the picture is different – there were fewer than 0.18 terrorist attacks for every one million locally born citizens in a typical country and year. Overall, while the probability that foreigners are involved in an attack on the domestic population was much higher than the risk that citizens were involved in attacks on their own country, the risk associated with each additional foreigner was tiny.

In our statistical analysis, we investigate whether, and to what extent, an increase in the foreign population of the average OECD country would increase the risk of terrorist attacks from foreigners in a host country. We identify exogenous variation in the number of foreigners living in an OECD country using changes in migration resulting from natural disasters. These changes affected host countries differently, according to the specifics of each host- and origin-country pair.

Using data for 20 OECD host countries, and 187 countries of origin between 1980 and 2010, we find that the number of terror attacks increased with the number of foreigners living in a host country. This scale effect that relates larger numbers of foreigners to more attacks does not imply, however, that foreigners are more likely to become terrorists than the domestic population. When we calculate the effect of a larger local population on the frequency of terror attacks by locals, the effect is of a comparable size. We conclude that, in this period, migrants were not more likely to become terrorists than the locals of the country in which they were living.

To put these results in perspective, consider the expected effect of a decrease in the domestic population of 0.0002% (which is the average decrease in the domestic population of the 20 OECD countries we studied in 2015, according to the OECD). According to our model, this would have reduced the number of terrorist attacks by 0.00025 per country and year. The increase in the stock of foreigners living in these countries was 3.6% in the same year. According to our estimates, this would have created 0.04 additional attacks. We might argue that this hardly justifies a ban on foreigners as a group.

We find little evidence that terror had been systematically imported from countries with large Muslim populations. The exceptions were Algeria and Iran, where we found a statistically higher risk of being involved in terrorist attacks against the local population, compared to the average effect of foreigners from non-Muslim countries. In this light, the phrases ‘Muslim terror’ or ‘Islamist terror’ do not seem accurate or useful. Only 6% of the terrorist attacks in the US between 1980 and 2005 were carried out by Muslims, and less than 2% of all attacks in Europe had a religious motivation between 2009 and 2013 (Alnatour 2017).
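For concreteness, here is the back-of-envelope version of the raw rates quoted above. The per-million figures are the study’s; the inflow number at the end is purely illustrative:

```python
# Raw rates per country-year in the study's OECD sample (1980-2010):
attacks_per_million_foreigners = 0.8
attacks_per_million_locals = 0.18

per_foreigner = attacks_per_million_foreigners / 1_000_000
per_local = attacks_per_million_locals / 1_000_000
print(f"Extra attacks per additional foreigner: {per_foreigner:.1e}")  # 8.0e-07
print(f"Extra attacks per additional local:     {per_local:.1e}")      # 1.8e-07

# A hypothetical inflow, to put the per-person risk in perspective:
inflow = 100_000
print(f"Expected extra attacks from {inflow:,} arrivals: {inflow * per_foreigner:.2f}")  # 0.08
```

The per-person risk is tiny in both cases, which is the study’s central point.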

I’ve written before about how European labor laws may play a role in radicalization. The authors make a similar point about restrictive immigration laws:

Contrary to the expectations of many politicians and pundits, introducing strict laws that regulate the integration and rights of migrants does not seem to have been effective in preventing terror attacks from foreign-born residents. We rather find that repressing migrants already living in the country with these laws has alienated a substantial share of this population, which increases the risk of terror. Stricter laws on immigration thus have the potential to increase the risk of terror, at least immediately following the ban.

…Our results illustrate an important trade-off. While stricter immigration laws could reduce the inflow of (violent) foreigners and thus potentially the number of future terrorist attacks, the restrictions would also increase the probability that those foreigners already living in the country become more violent. Immigration bans, like those recently introduced in the US, would arguably increase the short-term risk of attacks, before potentially reducing risk when the number of foreigners in the population has decreased.

Far-Right Terrorism

Last year, I linked to a Cato study on the likelihood of a foreign terrorist attack (TL;DR: it’s astronomically low). With Charlottesville in the news, this piece from Foreign Policy was particularly interesting:

The FBI and the Department of Homeland Security in May warned that white supremacist groups had already carried out more attacks than any other domestic extremist group over the past 16 years and were likely to carry out more attacks over the next year, according to an intelligence bulletin obtained by Foreign Policy.

Even as President Donald Trump continues to resist calling out white supremacists for violence, federal law enforcement has made clear that it sees these types of domestic extremists as a severe threat. The report, dated May 10, says the FBI and DHS believe that members of the white supremacist movement “likely will continue to pose a threat of lethal violence over the next year.”

…The FBI…has already concluded that white supremacists, including neo-Nazi supporters and members of the Ku Klux Klan, are in fact responsible for the lion’s share of violent attacks among domestic extremist groups. White supremacists “were responsible for 49 homicides in 26 attacks from 2000 to 2016 … more than any other domestic extremist movement,” reads the joint intelligence bulletin.

The report, titled “White Supremacist Extremism Poses Persistent Threat of Lethal Violence,” was prepared by the FBI and DHS.

The bulletin’s numbers appear to correspond with outside estimates. An independent database compiled by the Investigative Fund at the Nation Institute found that between 2008 and 2016, far-right plots and attacks outnumbered Islamist incidents by almost 2 to 1.

Now, granted, the Southern Poverty Law Center “estimates that [today] there are between 5,000 and 8,000 Klan members, split among dozens of different – and often warring – organizations that use the Klan name,” which is a huge improvement over the 4 million in the mid-1920s. But I find it ironic that the groups that worry about the influx of immigrants partly because of potential terror attacks have, in recent years, been more likely to commit such attacks themselves.

Goodreads is my Cyberbrain

A Facebook friend posted a Quora answer to the question “When people read hundreds of books a year, how much of them do they actually remember?” I don’t know about “hundreds,” but I did read about 100 books in 2016 and chances are good I’ll break 100 again this year, too. Here’s the setup:

I read an embarrassing number of books (I’m in danger of having no life) but if I met you at a party (which I wouldn’t, because I have no life) and you mentioned a book that you’d read and I’d also read it, I might not admit it.

I’d lie because unless it was really, really special, I wouldn’t remember enough to talk about it intelligently.

The gist of the response thereafter is that it’s fine if you don’t remember the books you read, because (in this case) you can still harvest them for good ideas. And I think this is fine. It’s a perfectly valid reason to read books. Another valid reason would be the food analogy. You probably can’t remember (in any great detail) what you had for lunch last month, but it’s pretty important that you ate something right? Otherwise you’d starve. And so maybe books are kind of like food for your brain. Even if you don’t remember the specifics of any given meal, it still helps to have a high-quality diet. Another valid response.

But here’s one more: you can store what you remember about a book in your cyberbrain.

The idea of using computers–and especially the Internet / cloud–to augment human memory is an old one. And it’s not theoretical. It’s exactly what I do with my Goodreads reviews. I try to write a review of every book I read, and I also take lots and lots of notes in Evernote. Then, I promptly forget what I read. Sometimes I literally forget that I read a book at all. But when I go back and reread my reviews, a lot of my initial impressions come back.

Over my lifetime, I’ve certainly read thousands of books. And for the most part, I can’t remember them. I kind of have a big hole in my memory between the first few books I really loved as a kid in elementary school and middle school and the books that I started reviewing on Goodreads. In between, I really only remember a few books. The only exception is the ones I have on my shelves. If I pick up those paperbacks, I can basically always remember the overall plot and sometimes a surprising amount of detail. I just need the cues provided by the cover art–and maybe just the existence of a physical reminder–to trigger all those memories.

The Goodreads reviews are like that, but even better.

So review your books, kiddos. It’s like a diary of your literary life, and it can help you keep hold of memories that would otherwise be totally lost.


Minimum Wage Hikes and Automation Risks

A couple years ago, I wrote,

Other studies show that an increased minimum wage causes firms to incrementally move toward automation. Now, this too could be seen as a trade-off: automation and technological progress tend to make processes more efficient and therefore increase productivity (and eventually wages), raising living standards for consumers (which include the poor). Nonetheless, the point is that while unemployment in the short-term may be insignificant, the long-term effects could be much bigger. For example, one study finds that minimum wage hikes lead to lower rates of job growth: about 0.05 percentage points a year. That’s not much in a single year, but it accumulates over time and largely impacts the young and uneducated.
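To see how a drag of 0.05 percentage points a year adds up, here is a minimal compounding sketch. Only the 0.05-point figure comes from the study I cited; the baseline growth rate is a made-up illustration:

```python
# Compare job growth with and without a 0.05 percentage-point annual drag.
baseline = 0.020             # 2.0% annual job growth (hypothetical baseline)
dragged = baseline - 0.0005  # 0.05 percentage points lower

years = 30
gap = 1 - ((1 + dragged) / (1 + baseline)) ** years
print(f"After {years} years, employment is {gap:.1%} below baseline")  # ~1.5%
```

A percent and a half of employment is trivial in any single year and substantial in aggregate, which is exactly the “accumulates over time” concern.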

A couple of new studies this year demonstrate the link between minimum wage hikes, automation, and job loss. As reported by AEI’s James Pethokoukis,

Now comes the new NBER working paper, “People Versus Machines: The Impact of Minimum Wages on Automatable Jobs” by Grace Lordan and David Neumark (bold is mine):

Based on CPS data from 1980-2015, we find that increasing the minimum wage decreases significantly the share of automatable employment held by low-skilled workers. The average effects mask significant heterogeneity by industry and demographic group. For example, one striking result is that the share in automatable employment declines most sharply for older workers. An analysis of individual transitions from employment to unemployment (or to employment in a different occupation) leads to similar overall conclusions, and also some evidence of adverse effects for older workers in particular industries. … Our work suggests that sharp minimum wage increases in the United States in coming years will shape the types of jobs held by low-skilled workers, and create employment challenges for some of them. … Therefore, it is important to acknowledge that increases in minimum wage will give incentives for firms to adopt new technologies that replace workers earlier. While these adoptions undoubtedly lead to some new jobs, there are workers who will be displaced that do not have the skills to do the new tasks. Our paper has identified workers whose vulnerability to being replaced by machines has been amplified by minimum wage increases. Such effects may spread to more workers in the future.

Three things: First, this study is a great companion piece to a recent one by Daron Acemoglu and Pascual Restrepo analyzing the effect of increased industrial robot usage between 1990 and 2007 on US local labor markets: “According to our estimates, one more robot per thousand workers reduces the employment to population ratio by about 0.18-0.34 percentage points and wages by 0.25-0.5 percent.”

Second, Lordan and Neumark note that minimum wage literature often, in effect, ends up focusing on teenager employment as it presents aggregate results. But that approach “masks” bigger adverse impacts on some subgroups like older workers who are “more likely to be major contributors to their families’ incomes.” This seems like an important point.

Third, some policy folks argue that it’s a feature, not a bug, that a higher minimum wage will nudge firms to adopt labor-saving automation. (Though not those arguing for robot taxes.) The result would be higher productivity and economic growth. But perhaps we are “getting too much of the wrong kind of innovation.”

As the St. Louis Fed explains, “labor share declined 3.3 percentage points in advanced economies from 1980 to 2015”:

One of the explanations for the decline of the labor share has been an increase in productivity that has outpaced an increase in real wages, with several studies attributing half the decline to this trend.

This increase in productivity has been driven by technological progress, as manifested in a decline in the relative price of investment (that is, the price of investment relative to the price of consumption). As the relative price of investment decreases, the cost of capital goes down, and firms have an incentive to substitute capital for labor. As a result, the labor share declines.

The decline in the labor share that results from a decline in the relative price of investment has contributed to an increase in inequality: A decrease in the cost of capital tends to induce automation in routine tasks, such as bookkeeping, clerical work, and repetitive production and monitoring activities. These are tasks performed mainly by middle-skill workers.

Hence, these are the segments of the population that are more affected by a reduction in the relative price of investment. The figure below displays the correlation between changes in the advanced economies’ labor share and their Gini coefficients (which measure income inequality).

[Figure: correlation between changes in advanced economies’ labor share and their Gini coefficients]
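As a sketch of the substitution mechanism the Fed describes: with a CES production function and an elasticity of substitution above one, cheaper capital pulls labor’s share down. The functional form and every parameter here are my own illustrative assumptions, not the Fed’s model:

```python
def labor_share(r, w, alpha=0.5, sigma=1.5):
    """Labor's cost share under CES production, via the standard CES
    unit-cost function and Shephard's lemma. sigma > 1 means capital
    and labor are gross substitutes."""
    k_term = alpha ** sigma * r ** (1 - sigma)        # capital's weight
    l_term = (1 - alpha) ** sigma * w ** (1 - sigma)  # labor's weight
    return l_term / (k_term + l_term)

# Hold the wage fixed and let the relative price of capital fall:
for r in (1.0, 0.8, 0.6, 0.4):
    print(f"capital price {r:.1f} -> labor share {labor_share(r, w=1.0):.3f}")
# labor share: 0.500, 0.472, 0.436, 0.387 -- falling as capital gets cheaper
```

With sigma below one the effect reverses, which is why the empirical size of that substitution elasticity matters so much in the labor-share debate.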

The Fed concludes,

Technological progress promotes economic growth, but as the findings above suggest, it can also reduce the welfare of a large part of the working population and eventually have a negative effect on economic growth.

An important role for policymakers would be to smooth the transition when more jobs are taken over by the de-routinization process. At the end of the day, technology should relieve people from performing repetitive tasks and increase the utility of our everyday lives.