The Cost of the Death Taboo

This post is part of the General Conference Odyssey.

Two thoughts from two different talks.

In “Blessed Are the Peacemakers”, Elder Burton said that:

We forget that we are not, and cannot be, totally independent of one another either in thought or action. We are part of a total community. We are all members of one family, as Paul reminded the Greeks at Athens when he explained that God “hath made of one blood all nations of men to dwell on all the face of the earth.” (Acts 17:26)

Although Elder Burton went in a different direction, that thought made me think about the talk before his, Elder LeGrand Richards’ “What After Death?”

I thought today that I would like to direct what I have to say to those parents who have lost children in death before they reached maturity and could enter into the covenant of marriage and have their own children here upon this earth. I reckon that there aren’t many families who haven’t had that experience.

Elder Richards was born in 1886. I wondered what childhood mortality rates looked like for him, so I checked a great site (Our World in Data), but data for the United States only goes back to 1933.

I added in the United Kingdom and then France to get an older data set.[ref]You can see from the graph that, while the lines are not identical, they follow a similar trend.[/ref] So, using France as a proxy, the kind of child mortality that Elder Richards would have been familiar with was between 250 and 225 children per 1,000 dying before the age of 5.

By the time of this conference in 1974, the rate was down to about 20. For the most recent data (2013) the rate is about 4. In other words, the chances that a given newborn would die before age 5 have fallen from 25% to 2% to 0.4% from the time that Elder Richards was born to the time when he gave his talk to the time we are alive today. For a family with four small children, the chance that none would die before the age of 5 was only 32% when Elder Richards was born. It was 92% in 1974. It is 98% today.
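Those family-level figures follow from simple probability, assuming a four-child family and independent survival odds (a back-of-the-envelope sketch, not a demographic model):

```python
# Chance that no child in an n-child family dies before age 5,
# given an under-5 mortality rate per 1,000 live births and
# assuming each child's survival is independent.
def family_survival_chance(mortality_per_1000, n_children=4):
    survival = 1 - mortality_per_1000 / 1000
    return survival ** n_children

for year, rate in [(1886, 250), (1974, 20), (2013, 4)]:
    print(f"{year}: {family_survival_chance(rate):.0%}")  # 32%, 92%, 98%
```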

When he said, “I reckon that there aren’t many families who haven’t had that experience,” he was absolutely correct for his time, but the world has changed substantially since then.

The reason that I connect the two talks is that Elder Burton reminds us of how integral family is to our identity. As the saying goes: we’re social animals. And the first society is the family. This is a vital truth to who we are as human beings. I don’t think anything could possibly drive that lesson home more powerfully than the unimaginable tragedy of losing a young child and having that family circle broken, at least temporarily.

I say “unimaginable” because to me it is. In my lifetime, having all your children survive to adulthood isn’t the exception; it’s the rule. But Elder Richards didn’t have to imagine it. As he discussed in his talk, two of his children died before they were old enough to be married.[ref]I realize my definition—dying before age 5—and Elder Richards’ definition—dying before being old enough to have married and have children—are not the same. I hope you can forgive the inaccuracy; I just went with the data I could quickly find.[/ref]

Right now I am reading The Clockwork Universe: Isaac Newton, the Royal Society, and the Birth of the Modern World. The author—Edward Dolnick—is at great pains to show how different the world of the 17th century was from the world of today. Back then, for example, no one knew what caused disease and nobody could do anything about it. From a resurgence of the Black Death (1665) to the Great Fire of London (1666), the men and women who lived at that time lived their entire lives under the shadow of inexplicable, uncontrollable death.

One thing Dolnick doesn’t understand, however, is how recently that has changed. Modernity may have dawned in the 17th and 18th centuries but—as the childhood mortality figures show—disease and accident continued to make death a common, everyday experience well into the 20th century. Not long ago I read Samuel Brown’s incredible book, Through the Valley of Shadows. Although it’s a technical book in many ways, Brown sets up his main discussion (of living wills, advance directives, and intensive care units) with a discussion of “the dying of death.”

Before the Dying of Death, death was part of everyday experience. Death was recognized as horrifying, but people were able to understand it as part of the overall meaning of life and knew how to prepare for it when the time came. The understanding of death was broad enough to cross religious boundaries… By the end of the Dying of Death, Americans had contained the terror of death by simply ignoring it until the moment of crisis, but the sanctity of death had disappeared along with its menacing presence. People found themselves newly unprepared when they came to die. Where many generations of humans had spent most of their lives preparing for their deathbed, modern Americans spent only hours to at most days, right in their death agony, trying to come to terms with what was once called the King of Terrors… Since twentieth-century Americans had not generally spent their lives in the shadow of death, when they came to approach Death, as every human being inevitably does, they discovered just how culturally defenseless they were before its terrible power.[ref]Through the Valley of Shadows, page 27[/ref]

These changes occurred during the late 19th and early 20th centuries, when basic understanding of the germ theory of disease led to incredible advances in public health, but at that time there was still effectively nothing doctors could do to combat most diseases once they took hold. I was surprised at how recently this transition occurred, but according to Brown, “physicians were mostly bad for your health until the recent past. The Baby Boomers are really the first generation born under the aegis of modern medicine.”[ref]Through the Valley of Shadows, page 32[/ref]

So, prior to the 1960s, doctors really couldn’t do anything at all to actively intervene in a wide variety of life-threatening medical emergencies. Since that time, however, our ability to postpone death has grown tremendously, to the point where ICUs frequently perform medical miracles. So, what has this newfound power achieved for us?

Well, it hasn’t all been good. Brown observes that “A major problem in contemporary society is that we combine our distaste for struggle or pain or disability with an unspeakable fear of death.”[ref]Through the Valley of Shadows, page 29[/ref] We have, in effect, stigmatized dying. As a result, “The dying—once celebrated as people with special wisdom who deserved the rapt attention of family and even strangers—[have] become America’s dirty secret.”[ref]Through the Valley of Shadows, page 29[/ref]

Additionally, ICUs—the frontlines in modern America’s war on death—have become places of trauma: “Many people leave the ICU with emotional scars as severe as those carried by combat veterans. Only a minority skate by without anxiety, depression, or PTSD, or some combination of the three.”[ref]Through the Valley of Shadows, page 167[/ref] This trauma is often the result of delusions, and rape delusions in particular: “it’s common for female patients to have memories of rape from urinary bladder catheters”[ref]Through the Valley of Shadows, page 140[/ref]. There are others, however:

The rape delusions associated with bladder catheters are haunting enough, but they don’t exhaust the list of terrible memories people often acquire in the ICU. Most of these frightening delusions relate to imprisonment, capture, or torture. Some feature aliens or homicidal doctors and nurses. More than a few incorporate the famous Capgras delusion, in which the important people in a person’s life are replaced by evil duplicates. These interpretations likely derive from the intense, paranoid attention that comes with high stress coupled with acute pain. The distressed brain tries to weave a meaningful narrative to explain why familiar faces (or people in professional gear and lab coats) are poking and prodding you as you are tied to a bed… some are frankly horrifying.

Let me explain why I’ve taken us on this long, long tangent. What I’m trying to explain is that as we’ve grown in our power to confront death, we have rediscovered an ancient truth: that power brings responsibility. This isn’t just about superheroes. It’s about ordinary men and women with no medical training and no preparation suddenly being told by doctors that it’s up to them to determine if their loved parent, or spouse, or child should live or die. But—precisely because death is so remote and even taboo—we’re completely and totally unprepared to shoulder this burden. As a result: many are crushed underneath it.

The majority of patients and families [emphasis added] come out of the ICU with post-traumatic stress, anxiety, or depression. They are more shell-shocked than combat veterans, according to an array of recent studies.[ref]Through the Valley of Shadows, page 5[/ref]

When it’s not about individuals staggering under the weight of responsibility they have no preparation for, it’s monstrous institutional inertia instead:

A friend’s elderly father, a devout Catholic, received his last rites in a hospital. He struggled against the wrist restraints to create the sign of the cross in response to the priest’s gentle ministrations. The restraints intended to keep him from dislodging any medical equipment obstructed his desperate hunger to participate in the healthful rituals of the deathbed. He died later that day. It never occurred to the nurses and doctors to release the restraints for this final interaction with his priest. My friend and his family still remember that angry straining for divine connection, stymied by medical handcuffs.[ref]Through the Valley of Shadows, pages 137-138[/ref]

I share all this because if I just said, “Gee, now our children don’t die, and that’s weakened our appreciation for family,” it would sound banal (at best) or monstrously cruel (at worst). That’s not what I want to say. But I do want to illustrate how our medical prowess—despite absolutely being a blessing we should never surrender[ref]I don’t want any confusion on that point[/ref]—has nonetheless presented us with fresh sets of problems we did not have to confront before.

When we stood powerless before death we had a kind of innocence. Now death seems to be far more contained, striking not children and spouses in their homes but the elderly in hospitals and hospices, and so we are all the less prepared to deal with it when it comes, as it surely must. That innocence is gone. Before we didn’t have to choose. Now—collectively and often individually—we do.

I feel like I need to say it again, and so I will one more time: I do not want to turn back the clock. I do not want to live in a world where having four children means probably having to watch at least one of them die in my arms. I want to live in a world where we can cure diseases and heal the sick. I thank God daily that my children are healthy and safe.

But this is a world that presents new and strange challenges. Elder Richards knew the pain of burying his own children, and this cemented in him a conviction of the importance of family relationships and the reality of life after death. He paid a high, high price for these blessings, one no parent would willingly pay.

The questions we have to ask are these: How are we going to acquire the wisdom and understanding to shoulder the responsibilities of technologically sophisticated modern medicine? How do we hold onto a fundamental understanding of the vital importance of family relationships in a world where—because death is so rare—we so seldom have to learn through the painfully direct method of heartbreaking loss? How do we find the kind of life-sustaining, bedrock faith of Elder Richards without paying that staggeringly high cost?

I don’t know.

But I do believe that the best place to start is by understanding and cherishing the words and experiences of those who have paid that price before us, and then bequeathed their words and testimonies to us who follow.

Check out the other posts from the General Conference Odyssey this week and join our Facebook group to follow along!

Illiberal Reformers: An Interview with Thomas Leonard

This is part of the DR Book Collection.

A few years ago, I took an interest in the history of the Progressive Era. This interest was piqued by conservative author Jonah Goldberg’s polemic Liberal Fascism and moved to more academic research during my undergrad. I studied the history of the labor unions and the words and ideas of major progressive icons. One scholar whose work I came into contact with and continued to follow over the years was Princeton economist Thomas Leonard. I’ve known for the last few years that Leonard was working on a book that explored the relationship between progressive reformers’ economic agendas and their enthusiastic support of eugenics. Finally, his Illiberal Reformers: Race, Eugenics, and American Economics in the Progressive Era was published this year through Princeton University Press.

The book meticulously demonstrates that the progressive impulse toward inflating the administrative state was driven largely by self-promotion (i.e., the professionalization of economists), racist ideologies (i.e., the fear of race suicide),[ref]Even seemingly good things like national parks had racist overtones.[/ref] and an unwavering faith in science. Not only should the “undesirables” of the gene pool be sterilized, but they should be crowded out of the labor force as well. Those considered “unfit” for the labor market included blacks, immigrants, and women. In order to artificially raise the cost of employing the “unfit,” progressives sought to implement minimum wage (often argued to be a “tariff” on immigrant labor), maximum hours, and working standard legislation.

There is far more in Leonard’s book, which not only provides keen insights into progressive economics, but also an excellent historical overview of race and eugenics in the Progressive Era. Check out his interview on the podcast Free Thoughts below.

When Trigger Warnings Don’t Work

“Trigger warnings” have been all the rage lately. They’ve sparked a national discussion, but what have they really accomplished? “What is a trigger warning?” asks Mariah Flynn, the Education Program Coordinator for the Greater Good Science Center.

The term, often used interchangeably with “content warning,” is a heads up that readers may encounter distressing content—and in recent years, trigger or content warnings have become controversial. To some, like University of Chicago administrators, such warnings keep students from being challenged or engaging with provocative course materials. Others feel that such warnings are useful tools that keep learners from having a strong emotional response to certain kinds of content, usually depicting physical or emotional violence.

For all of the excitement around trigger warnings, they’re actually quite rare. In an effort to gather more information about their use on college campuses, the National Coalition Against Censorship conducted a survey of over 800 educators from the Modern Language Association and the College Art Association—and found that only one percent reported that their institutions had adopted a policy on trigger warnings. Moreover, only fifteen percent of respondents said that students had asked for warnings.

In many respects, framing content warnings as a “censorship” or “free speech” issue is not helpful to professors or students. There is no evidence that they lead to the widespread suppression of troubling material or class discussion. At worst, warnings are merely gratuitous for a majority of students. At their best, however, content warnings can actually help students engage with course material and develop a caring relationship with their teachers.

So while some students may claim they are too “triggered” to read classical mythology, actual policies regarding trigger warnings are rare (even if campus politics are not). Yet, Flynn points out that

[a]bout three-fourths of us will experience trauma over the course of our lifetime. About ten percent of those people will develop post-traumatic stress disorder (PTSD), experiencing symptoms like flashbacks, memory gaps, depression, or hyper-vigilance.

Avoiding triggering topics—a very common strategy for people with PTSD—isn’t the best way to process traumatic events. Avoidance of triggers is a symptom of PTSD, not a cure. In fact, exposure therapy (a specific type of cognitive behavioral therapy where patients are exposed to physical or mental reminders of their trauma) is not only the most common method for treating PTSD; it’s also one of the most effective.

This research might lead some to suggest that perhaps we don’t need to be so concerned about students’ exposure to triggering content, if exposure is the best way for them to process past traumatic events. However, exposure therapy works best under the care of a trained therapist. Even though exposure is an effective way to deal with PTSD, instructors aren’t therapists and the classroom is not an appropriate place for such a therapy.

Trigger warnings are also challenging to implement, because identifying potential triggers isn’t easy. Individuals with past trauma are often triggered by seemingly neutral things that have nothing to do with the content an instructor might present in class—the scent of a certain type of cologne or hearing a song associated with the traumatic event they experienced. Educators won’t always know what might trigger a student who is a victim of trauma and can’t possibly provide a warning for everything that might be a trigger.

Flynn suggests three ways of tackling the issue:

  1. Be upfront about what students can expect from your course.
  2. Consider alternative readings or activities.
  3. Offer information on other coping strategies and self-care.

There are ways to be sensitive to the experiences and mental health of students. Implementing trigger warnings doesn’t seem to be the most effective means of doing so, especially when the policy is hijacked by political agendas.

Historians vs. Economists: The History of Slavery

Chiwetel Ejiofor as Solomon Northup in 2013’s ’12 Years a Slave’

A new article over at The Chronicle of Higher Education provides an excellent review of a controversy that has been brewing over the last couple years that should be of interest to those who care about history and economics. The controversy surrounds the new history of slavery and capitalism, marked by books like Johnson’s River of Dark Dreams, Beckert’s Empire of Cotton, and especially Baptist’s The Half Has Never Been Told. The main claim among these historians is that slavery was essential to American capitalism and the emergence of the Industrial Revolution. Economists and other social scientists are not convinced. “Most economic historians,” the article states,

have argued that “cotton textiles were not essential to the Industrial Revolution,” and that cotton production did not necessarily depend on slavery, according to [Dartmouth economist] Douglas A. Irwin…Summarizing economists’ thinking…Irwin points out that cotton was grown elsewhere in the world without slaves. Cotton production continued to rise in the United States even after slavery was abolished. “In this view, the economic rise of the West was not dependent on slavery,” Irwin says, “but came about as a result of an economic process described by Adam Smith in his book The Wealth of Nations — a process that depended on free enterprise, exchange, and the division of labor.”

Economists see the problem with the new histories on slavery as

stem[ming] in part from how the discipline of history has developed. In the ’60s and ’70s, historians and economists battled over economic history. But as historians turned toward culture, and economists became more quantitative, economic history increasingly became just a subfield of economics. For a variety of reasons, including the 2008 crisis, historians are turning their attention back to financial matters. But they “did not build up their tools in order to understand the material world,” says Rhode. “And they carry along certain ideological positions which they hold fervently and are not willing to test.” Historians, he says, “can’t be making stuff up.”

Historians, however, see economic history as too reductive:

“The problem is the economists left history for statistical model building,” says Eric Foner, a historian of 19th-century America at Columbia University. “History for them is just a source of numbers, a source of data to throw into their equations.” Foner considers counterfactuals absurd. A historian’s job is not to speculate about alternative universes, he says. It’s to figure out what happened and why. And, in the history that actually took place, cotton was extremely important in the Industrial Revolution.

Some economists who attack the new slavery studies are “champion nitpickers,” adds Foner…”They’re barking up the wrong tree. They’re so obsessed with detail that they don’t really confront the broader dynamics of the interpretations. Yes, I’m sure there are good, legitimate criticisms of the handling of economic data. But in some ways I think it’s almost irrelevant to the fundamental thrust of these works.”

The article is an excellent introduction to an important controversy in historical scholarship. Check it out.

Intact Immigrant Families

 

[Figure 1 (Zill): family living arrangements of children of immigrants vs. children of U.S.-born parents]

Take a hard look at the graph above. I’ve discussed marriage and family structure a lot here at Difficult Run. The social science is pretty clear: marriage matters for children’s well-being. Concerns over immigration often revolve around culture: do immigrants assimilate well? What kind of foreign cultural elements are they bringing with them? I’ve addressed cultural diversity before. And according to the data above, intact families are yet another positive contribution made by immigrants. According to researcher Nicholas Zill,

Indeed, the latest data from the Census Bureau on the family living arrangements of U.S. children show that 75 percent of immigrant children live in married-couple families, compared to 61 percent of children of U.S.-born parents. The figure is the same for immigrant children who were born in this country as for those who were foreign-born. Children of immigrants are less likely than native children to be living with divorced, separated, or never-married mothers: 14 percent lived with their mothers only, compared to 26 percent of children of U.S.-born parents.

Furthermore, immigrant parents stay together despite the fact that many are living below or close to the poverty line. Half of U.S.-born children of recent immigrants are in families that are poor or near poor, with nearly a quarter living in families below the poverty line. The circumstances of foreign-born immigrant children are worse: 57 percent are in families that are poor or near poor, with 29 percent living in families below poverty. The comparable figures for native children are 38 percent in poor or near-poor families, with 18 percent below the poverty line.

Zill concludes that “we should…recognize the strong work ethic and robust family values that many immigrant families exemplify. Far from undermining our traditions, they may be showing us the way to ‘make America great again.’”[ref]Especially since family structure matters more in rich countries.[/ref]

Minimum Wage and Employment: Is the Evidence “Well-Established”?

GMU economist Don Boudreaux wrote an open letter to Bloomberg‘s Barry Ritholtz on his blog Cafe Hayek. It was in response to Ritholtz’s recent article on the minimum wage, which claims that “modest increases in minimum wages don’t lead to job losses.” This, in Ritholtz’s view, is “well-established” in the literature. Ritholtz certainly has studies that can back up his position. For example, a brand new study by the Council of Economic Advisers found “that employment in the [generally low-wage] leisure and hospitality industry follows virtually identical trends in states that did and did not raise their minimum wage.” It goes on to note that “[t]his finding is consistent with a well-established empirical literature in which minimum wage increases are often found to have no discernible impact on employment (Card and Krueger 2016, Belman and Wolfson 2014).”[ref]The report has been criticized by some economists, being described as “substantially to the left of where the economics mainstream has been for at least six decades.”[/ref] But Boudreaux points out that the empirical literature does find modest negative impacts on low-wage employment. He writes,

Here’s a list only of some of the more prominent, recent scholarly empirical studies whose authors find that even modest hikes in minimum wages destroy some jobs:

– Jeffrey Clemens and Michael Wither, “The Minimum Wage and the Great Recession: Evidence of Effects on the Employment and Income Trajectories of Low-Skilled Workers” (2014) (finding that “minimum wage increases reduced the national employment-to-population ratio by 0.7 percentage point”);[ref]The 2016 version can be found here.[/ref]

– Jeffrey Clemens, “The Minimum Wage and the Great Recession: Evidence from the Current Population Survey” (2015) (finding that minimum-wage increases during the Great Recession “reduced employment among individuals ages 16 to 30 with less than a high school education by 5.6 percentage points”);

– Jonathan Meer and Jeremy West, “Effects of the Minimum Wage on Employment Dynamics” (2013) (finding that “the minimum wage reduces job growth over a period of several years.  These effects are most pronounced for younger workers and in industries with a higher proportion of low-wage workers”);

– David Neumark, J.M. Ian Salas, and William Wascher, “More on recent evidence on the effects of minimum wages in the United States” (2014) (finding that “the best evidence still points to job loss from minimum wages for very low-skilled workers – in particular, for teens”);

– Yusuf Soner Baskaya and Yona Rubinstein, “Using Federal Minimum Wages to Identify the Impact of Minimum Wages on Employment and Earnings across the U.S. States” (2012) (finding that “[m]inimum wage increases boost teenage wage rates and reduce teenage employment”).

Indeed, you can read a whole book on the matter by David Neumark and William Wascher, Minimum Wages (2008), published by the MIT Press, that concludes that minimum wages do indeed destroy some jobs.

You can dispute the accuracy of all of the above findings, but you cannot dispute that these findings, along with many others that reach similar conclusions, are part of the scholarly record – a record that belies your assertion that it is “well-established” that modest minimum-wage hikes destroy no jobs.

Interestingly enough, Ritholtz cites a University of Washington study on the Seattle minimum wage law and asserts that it “found little or no evidence of job losses[.]” Yet, the study quite clearly states that the minimum wage law led to “a 1.2 percentage point decrease in the employment rate for these low-wage workers. That is, we conclude that Seattle experienced improving employment for low-wage workers, but the minimum wage law somewhat held employment back from what it would have been in the absence of the law” (pg. 12). It later summarizes,

While the intended effect of the Minimum Wage Ordinance (i.e., raising low-wage workers’ wages) appears to have been successful, there appears to have been some negative impacts on these workers’ rates of employment and hours worked. As noted previously, the rate of employment of these workers increased by 2.6 percentage points. However, the comparison regions all experienced even better employment rate increases (3.8% for Synthetic Seattle, 3.9% for Synthetic Seattle Excluding King County, 3.5% for SKP and 2.9% for King County Excluding Seattle and SeaTac). Thus, it appears that the Minimum Wage Ordinance modestly held back Seattle’s employment of low-wage workers relative to the level we could have expected (pg. 22).


What about hours worked?

Hours worked shows a similar pattern. Among workers earning less than $11 per hour at baseline in Seattle, hours worked increased by 12.2 relative to business as usual. So, again, Seattle’s employment situation for low-wage workers improved after the Minimum Wage Ordinance was passed. Hours worked increased, however, by more in the comparison regions (16.4 for Synthetic Seattle, 13.0 for Synthetic Seattle Excluding King County, 21.5 for SKP and 22.5 for King County Excluding Seattle and SeaTac). Thus, on balance, it appears that the Minimum Wage Ordinance modestly lowered hours worked (e.g., 4.1 hours per quarter relative to Synthetic Seattle, or 19 minutes per week) (pg. 22).
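The study's per-week figure is just a unit conversion of that 4.1-hours-per-quarter shortfall (assuming roughly 13 weeks per quarter):

```python
# Convert the reported shortfall of ~4.1 hours per quarter
# into minutes per week, assuming 13 weeks in a quarter.
hours_per_quarter = 4.1
minutes_per_week = hours_per_quarter * 60 / 13
print(f"{minutes_per_week:.0f} minutes per week")  # → 19 minutes per week
```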

The study concludes that “[t]he effects of disemployment appear to be roughly offsetting the gain in hourly wage rates, leaving the earnings for the average low-wage worker unchanged.” In short, “for those who kept their job, the Ordinance appears to have improved wages and earnings, but decreased their likelihood of being employed in Seattle relative other parts of the state of Washington” (pg. 33).

I’ve written about the current state of minimum wage research before. I think there are good reasons to be skeptical about its ability to truly help reduce poverty. And as previous research has noted, the debate is more “about the trade-off between good jobs with higher wages and more job stability versus easier access to jobs.”

Let’s try to keep the debate on track.

McKinsey & Co.: Five Pillars of Growth


The McKinsey Global Institute has a new briefing paper entitled “The US Economy: An Agenda for Inclusive Growth.” The paper seeks ways to help America “regain its dynamism and restore the sense that everyone is advancing together.” The paper lists “five areas where targeted investment and policy action could create substantial economic impact.” This impact would include a rise in “GDP growth to 3 or even 3.5 percent.” These include:

  1. Digitization: “The US economy is rapidly digitizing, but its progress is highly uneven. Focusing on the gap between lagging sectors and those on the digital frontier is a key part of the productivity puzzle. Government can play a role by promoting digital investment, digitizing public services and procurement, clarifying regulatory standards to encourage digital innovation, and taking a nimble and experimental regulatory approach to keep pace with technological change.”
  2. Globalization and trade: “The current debate around trade misses the point that globalization is becoming more digital—a shift that plays to US strengths. Today, less than 1 percent of US firms sell abroad. There are ways to expand participation by helping small businesses export on global e-commerce platforms and playing a matchmaking role to connect individual cities and smaller companies with foreign investors. But it is also time to confront the needs of communities that have experienced trade shocks. The workers who are caught up in industry transitions need more than retraining; their communities need reinvestment.”
  3. America’s cities: “Eighty percent of the US population lives in cities or the surrounding metro areas. But investment in urban transport infrastructure has not kept up with their needs, creating congestion that harms both productivity and the quality of life. A shortage of affordable housing and commercial space has worsened the squeeze on households and small businesses. Addressing urban issues would improve mobility, create new investment opportunities, and benefit companies. The overall economy would stand to gain, since cities are the engines of productivity.”
  4. Skills: “The United States needs to build a more dynamic and efficient labor market. Colleges and universities have to adapt and address the growing cost burdens. Additionally, we could make occupational licenses more portable, create more short-term training and credential pathways, expand “earn while you learn” apprenticeships, and make better use of online talent platforms to improve matching and design quicker, more effective education pathways.”
  5. A resource revolution: “Competition among fuel sources and efficiency improvements are combining to produce an unheralded energy revolution. Technology innovations are driving increased efficiency both in demand and supply, and renewables are becoming more price competitive. America’s widely diversified energy portfolio has hugely benefited the economy. The most important thing the ongoing resource revolution needs is room to play out. Technology is moving quickly, and a responsive regulatory approach would speed the allocation of capital to the most promising opportunities. The primary policy agenda here involves reducing friction and market distortions.”

Definitely a list worth considering.

World Opinions on Globalization

The Economist reports on a new YouGov poll that surveyed “19 countries to gauge people’s attitudes towards immigration, trade and globalisation. The data reveal a split between emerging markets and the West, which is increasingly turning its back on globalisation. Beset by stagnant wage growth, less than half of respondents in America, Britain and France believe that globalisation is a “force for good” in the world. Westerners also say the world is getting worse. Even Americans, generally an optimistic lot, are feeling blue: just 11% believe the world has improved in the past year.”

The chart above demonstrates that “countries with the fastest-growing economies tend to be more positive about globalisation. The French, Australians, Norwegians and Americans tend to oppose the idea of foreigners buying indigenous companies. But most Asians do not see a problem. Few in Hong Kong and Singapore would argue that their city-states should be self-sufficient, whereas most respondents in Indonesia, Thailand, India, the Philippines and Malaysia reckon that their countries shouldn’t have to rely on imports.” But “nationalism is especially pronounced in France, the cradle of liberty. Some 52% of the French now believe that their economy should not have to rely on imports, and just 13% reckon that immigration has a positive effect on their country. France is divided as to whether or not multiculturalism is something to be embraced. Such findings will be music to the ears of Marine Le Pen, the leader of the National Front, France’s nationalist, Eurosceptic party. Current (and admittedly early) polling has her tied for first place in the 2017 French presidential race.” There may even be some comfort for those concerned about growing anti-democratic sentiments. For one, these sentiments may not be as pronounced as often reported. Furthermore, the YouGov poll finds hope in younger generations:[ref]Though there is likely need for caution regarding this conclusion as well.[/ref]

While millennials tend to hold more left-wing economic views, they are far keener on the idea of globalisation, broadly conceived, thanks to their more positive attitudes towards multiculturalism. In America, 46% of those aged 18-34 think that immigrants had a positive effect on their country, compared with just 35% of those aged 55 and over. In Britain the generational gap is even bigger: 53% and 22%, respectively. And millennials were more optimistic in every country surveyed, save for Indonesia.

Let’s hope their left-wing economic views don’t devolve into anti-trade populism.

Review: Fukuyama’s The Origins of Political Order

Photo by Fronteiras do Pensamento, CC-SA

This is part of the DR Book Collection.

I’m writing this review 6 months after finishing the book for a pretty simple reason: I had precisely 100 notes to transcribe into Evernote before I was ready to write my review. That should tell you how much I got out of the book, by the way. There are only a few books–The Righteous Mind: Why Good People are Divided by Politics and Religion and The Island of Knowledge: The Limits of Science and the Search for Meaning, maybe The Bonobo and the Atheist: In Search of Humanism Among the Primates–that netted me more fascinating notes and quotes than this one did. I loved it.

I guess it’s a work of political theory, but for the most part it reads as history with a dash of evolutionary psychology. In exploring the origins of political order, Fukuyama starts by going way, way back into pre-history to make his first essential point: biology matters. In this regard, he’s echoing Steven Pinker’s The Blank Slate: The Modern Denial of Human Nature, but the relationship here is fairly specific. According to Fukuyama, the primary problem with thinkers like Rousseau or Hobbes isn’t that they got the particulars of pre-social humanity wrong, it’s that the concept of “pre-social humanity” is an oxymoron. Humans, as the expression goes, are social animals. And that means we’re political animals. Politics didn’t come later–after the invention of writing or agriculture–but has been there from the beginning, inextricably intertwined with our development of speech. So, from this “biological foundation of politics”, Fukuyama draws the following propositions:

  • human beings never existed in a presocial state
  • natural human sociability is built around two principles, kin selection and reciprocal altruism
  • human beings have an innate propensity for creating and following norms or rules
  • human beings have a natural propensity for violence
  • human beings by nature desire not just material resources but also recognition

After laying this groundwork, Fukuyama then goes on to describe in broad strokes the evolution of human societies from bands to tribes to states. He invokes principles from biological evolution explicitly here, arguing that societies compete against each other in ways that are sometimes (but not always) analogous to competition between animals. This analogy shouldn’t be taken too far: there are treacherous debates about whether organisms or genes compete, for example, and about the viability of group selection, but Fukuyama’s primary concern is actually with the differences between biological and political evolution, and so those nuances are forgivably overlooked.

As for the bands -> tribes -> states progression, the basic notion is that bands (groups of no more than 100 or so at the most) are held together by actual blood relation. Tribalism is a social innovation that allows bands to come together by claiming (real or fictitious) common descent. Two bands might have the same patriarch or matriarch, and so in the face of a common enemy they can rapidly coalesce into a single unit. This capacity means that it’s fairly easy for tribal societies to defeat band societies, because every time a solitary band and a band that’s part of a tribal society come into conflict, the latter can call upon as many tribal allies as needed to win the fight. As a result almost no band societies are left in existence.[ref]Those that do remain are in remote locations where the benefits of tribalism do not apply.[/ref]

But tribal segments are intrinsically unstable. Fukuyama cites an Arab expression: “Me against my brother, me and my brother against my cousin, me and my cousin against the stranger.” When there is no stranger to confront, the cousins go to war. When there is no cousin on the horizon, the siblings feud. And so states are yet another progression–as superior to tribes as tribes are to bands–because of their ability to support not only temporary, contingent cooperation but permanent, universal cooperation.[ref]Not that states are Utopias, of course, but simply that in a functioning state predation–murder, theft, and rape–are dangers the state opposes instead of relying on individuals to provide their own deterrence and defense.[/ref]

Another argument he makes–and this one seemed just a little tangential but it’s interesting enough to go into–can be summarized as: ideas matter. Fukuyama says, for example, that “It is impossible to develop any meaningful theory of political development without treating ideas as fundamental causes of why societies differ and follow distinct development paths” and that ideas are “independent variables.” He’s reacting to the idea–exemplified in Marx–that to understand history in general and political development in particular, all you need are the physical factors: how much stuff do people have and what do they need to do to get more of it? He’s right to reject this idea. It’s wrong. But I think that–along with a lot of other folks these days–he drastically overstates the extent to which anybody actually believes this.

It’s true that economists talk about Homo economicus (the model of human beings as perfectly rational, self-interested agents), but never without an ironic edge. They know[ref]Maybe I should say, we know, but I’m never sure if an MA in economics makes one an economist or not.[/ref] that this model is broken and doesn’t explain everything. That’s why the leading edge of critiquing human rationality intersects with economics: behavioral economics. Give economists some credit, they’ve already come up with bounded rationality as a fall-back, and you don’t do that unless you know that (unbounded) rationality is broken. Not that they’re satisfied with bounded rationality either, but economists are in the business of making models of human behavior and “all models are wrong.” Most of the folks who seem confused about this aren’t the economists, but the folks outside the discipline who don’t realize that economists already know their models are flawed.

Now, to Fukuyama’s main point: are ideas “independent variables”? I don’t think so. If Newton hadn’t figured out gravity, would some other clever chap have come along and figured it out by now? Probably so. I think that in most cases if you take out one particular genius, some other genius sooner or later comes to the same–or a very similar–realization. There’s no way to test it, but that’s my hunch. In fact, the whole business of a singular genius inventing this or that is often a delusion to begin with. Most of the really big breakthroughs–evolution and calculus come to mind first, but there are plenty of others–were invented more or less simultaneously by different people at similar times.[ref]It turns out there’s a name for this: multiple discovery theory. I love Wikipedia so much.[/ref] This is strong evidence to me that something about the historical context of (for example) Darwin and Wallace or Newton and Leibniz strongly directed people towards those discoveries. Which, if true, means that scientific discoveries are emphatically not independent. I have a hunch that what’s true of science is probably true to some degree of non-scientific ideas as well. If Marx had never been born, would we have Marxism? Probably not, but we’d probably have something pretty darn similar. (After all, we’d still have Engels, wouldn’t we?) It’s not like collective ownership is a new idea, after all. We’ve had the Peasants’ Revolt and the Red Turban Rebellion and many, many more. Take that basic idea, throw in a little Hegel (Marx just retrofitted Hegelianism) and presto: Marxism. If Marx hadn’t done it, and Engels hadn’t either, someone else would probably have done something similar. Maybe even using Hegel.

I don’t want to overstate my rebuttal to Fukuyama’s overstatement, so let’s pull back just a bit. I’m saying it’s probable that–in a world without Marx–someone else invents an ideology pretty close to Marxism. But does it take off? Does it inspire Lenin and Stalin? Does it lead to Mao and Castro? Do we still have the Cold War? I have no idea. And, while we’re at it, I’m not saying that if you didn’t have Shakespeare, someone else would have written Romeo and Juliet. I think that’s pretty absurd. My argument has two points: first, there’s interaction between ideas and physical contexts. Neither one is independent of the other. Second, human society is a complex system, and that means it’s going to have some characteristics that are robust and hard to change (stable equilibria) and others where the tiniest variation could give rise to a totally different course of events (unstable equilibria). Maybe there was something inevitable about the general contours of socialism such that if you subtract Marx, and then subtract Engels too, you still end up with a Cold War around a basically capitalist / socialist axis. Or maybe if even a fairly trivial detail in Marx’s life had changed, then Stalin would have been a die-hard free market capitalist and the whole trajectory of the post-World War II 20th century would have been unrecognizable. I don’t know. I do know that–just as ideas aren’t merely the consequences of physical circumstances–they also aren’t uncaused lightning bolts from the void, either. Ideas and the physical world exist in a state of mutual feedback.

But the primary concern of the book is this question: how does political order arise? For Fukuyama, political order has three components:

  1. State building
  2. Rule of law
  3. Accountable government

His account is contrarian basically from start to finish, but never (to my mind) gratuitously so. He argues, for example, that instead of starting with the rise of liberal democracy in the West, the key starting position is ancient China, the first society to develop a state in the modern sense. On the other hand, China never developed a robust rule of law. It was rather rule by law, a situation in which the emperor was not constrained by the idea of transcendent laws (either religious or, later, constitutional) and therefore China’s precocious, early state became as much a curse as a blessing:

[P]recocious state building in the absence of rule of law and accountability simply means that states can tyrannize their populations more effectively. Every advance in material well-being and technology implies, in the hands of an unchecked state, a greater ability to control society and to use it for the state’s own purposes.

Fukuyama’s historical analysis is far-reaching. He spends quite a lot of time on India and the Middle East as well. At last he turns his analysis on Europe where–quite apart from the conventional East / West dichotomy–he goes country-by-country to show how the basic problems confronted by states in China, India, and the Middle East also sabotaged the development of most European states. France and Spain became weak absolutist governments with state building and rule of law, but no accountability. Russia became a strongly absolutist government. The difference? The central rulers of Spain and France managed to subvert their political rivals (the aristocracy), but only just barely. In Russia, the czars completely dominated their political rivals, ruling with more or less unchecked power.

Fukuyama spends a lot of time on England, specifically, which he holds up as a kind of lottery winner where all sorts of factors that went awry everywhere else managed to line up correctly. And the story he tells is a fascinating one, because he inverts basically everything you’ve been taught in school. Here’s a characteristic passage where he summarizes a few arguments that he makes at length in the book:

[T]he exit out of kinship-based social organization had started already during the Dark Ages with the conversion of Germanic barbarians to Christianity. The right of individuals including women to freely buy and sell property was already well established in England in the 13th century. The modern legal order had its roots in the fight waged by the Catholic church against the emperor in the late 11th century, and the first European bureaucratic organizations were created by the church to manage its own internal affairs. The Catholic church, long vilified as an obstacle to modernization, was in this longer-term perspective at least as important as the Reformation as the driving force behind key aspects of modernity. Thus the European path to modernization was not a spasmodic burst of change across all dimensions of development, but rather a series of piecemeal shifts over a period of nearly 1,500 years. In this peculiar sequence, individualism on the social level could precede capitalism. Rule of law could precede the formation of a modern state. And feudalism, in the form of strong pockets of local resistance to central authority, could be the foundation of modern democracy.

It’s a fascinating argument–original and well-argued–but I also found it convincing. I think Fukuyama is basically correct.

So a couple more notes. First, there are basically two problems that Fukuyama sees consistently eroding political order, and both of them go back to the biological foundations of politics. The first is what he calls repatrimonialization. To keep things simple, let’s just say “nepotism” instead. The idea is that the band-level origins of human nature never go away, and the temptation to use the state’s authority to enrich one’s own kin is omnipresent. His discussion of the Catholic church’s invention of the doctrine of celibacy to successfully stave off this threat (bishops kept trying to pass on their callings to their children before that doctrine was created) and the unsuccessful attempts of the Mamluk Sultanate to use slave soldiers to stave off this threat (eventually the slave soldiers grew so politically powerful that they “reformed” the prohibitions against passing on property) are some of the most historically illuminating passages in the book.

The second problem is human conservatism. Fukuyama doesn’t mean this in the partisan sense. He’s referring to our tendency–a universal aspect of human nature–to invent and then follow norms and laws. The problem here is that once we invent our laws, we stick to them. And when circumstances change, the norms/laws (and institutions) should change too, but humans don’t like to do that. So one of the chief causes of the downfall of political order is a historically successful state proving incapable of reforming its institutions to meet a changing environment, due to sheer inertia. The classic example is pre-revolution France, and here Fukuyama finds a conventional account with which he has no quarrel:

We have seen numerous examples of rent-seeking coalitions that have prevented necessary institutional change and therefore provoked political decay. The classic one from which the very term rent derives was ancien régime France, where the monarchy had grown strong over two centuries by co-opting much of the French elite. This co-option took the form of the actual purchase of small pieces of the state, which could then be handed down to descendants. When reformist ministers like Maupeou and Turgot sought to change the system by abolishing venal office altogether, the existing stakeholders were strong enough to block any action. The problem of venal officeholding was solved only through violence in the course of the revolution.

That was the first note (what are the threats that political order must overcome), and we get into those in a lot more detail in his second volume: Political Order and Political Decay: From the Industrial Revolution to the Globalization of Democracy.[ref]I also read that back in May; it’s also going to get 5 stars, but I’ve got another 100 or so notes to transcribe first![/ref]

The second note I wanted to make was about partisanship. First, it’s important to note that although Fukuyama celebrates the rise of modern liberalism in England, he’s not promoting English exceptionalism. He spends a lot of time talking about what he calls “getting to Denmark.” His point there is that Denmark is also a widely respected, stable, modern, prosperous democracy, and it didn’t follow the trajectory of England. The point is that he’s not saying: everyone, copy the English. Although he traces the origins of liberalism the farthest back in time in England, he specifically notes that if Denmark could find its own way to liberalism without retracing that path, so can other nations.

This is an important point, because Fukuyama is dealing in comparative politics, and he has no problem drawing rather sweeping (albeit justified, in my mind) generalizations when contrasting, for example, India and China. This is the kind of thing that anyone in my generation or younger (young Gen-X / Millennials) has been trained to reflexively reject. If you compare societies, it’s because you’re a racist. Given that Fukuyama is comparing societies–and that he arguably has the most praise for the English in terms of the philosophical origins of modern liberalism–there is no doubt in my mind that he’s going to be (has been) attacked as a kind of apologist for white supremacy, etc.

And that’s not true. First, because as I said he’s adamant about the fact that other nations can (and have) found their way to liberalism without imitating all aspects of English (let alone European) culture, society, or politics. Second, because he has plenty of non-European success stories. (Unfortunately, those are mostly from his second volume, since this one only goes up to the French Revolution and so doesn’t cover the explosion of democracy world-wide since that time.) Third, and finally, because he’s more than willing to look at pros and cons of differing systems. For example, going back to China and their problem with despotism, here’s a comment he makes towards the end of the book:

An authoritarian system can periodically run rings around a liberal democratic one under good leadership, since it is able to make quick decisions unencumbered by legal challenges or legislative second-guessing. On the other hand, such a system depends on a constant supply of good leaders. Under a bad emperor, the unchecked powers vested in the government can lead to disaster. This problem remains key in contemporary China, where accountability flows only upward and not downward.

This is the kind of clear-eyed, open-minded analysis that I think we need more of, not less of. It’s hard to argue, for example, with the success of S. Korea in leap-frogging from despotism to liberal democracy. There’s no reason–in principle–that China could not do something similar. (Other than problems of scale, that is.)

So here are my final thoughts. First: this is a fascinating book and it’s a lot of fun to read. It’s full of interesting history along with interesting theorizing. Second: I am convinced by Fukuyama’s arguments. And lastly, I have a lot of respect for his approach. He’s a centrist, and so he’s going to tick some people off for praising the kinds of things that radicals like to attack. If you think liberal democracy is the devil, Fukuyama is an apologist for Satan. On the other hand, it would be entirely wrong to dismiss him as a partisan hack. He interacts with Hayek a lot, for example, but this includes a mixture of praise on some points and also staunch criticism on others. He’s willing to laud capitalism (as the evidence warrants, I might add) but also to tip some of the right’s sacred cows. “Free markets are necessary to promote long-term growth,” he says, but finishes the sentence with, “but they are not self-regulating.” He also savages the small-government obsession of the right, arguing that if you like small government, maybe you should move to Somalia. He’s not just ridiculing the right in that case, however, but pointing out that:

Political institutions are necessary and cannot be taken for granted. A market economy and high levels of wealth don’t magically appear when you “get government out of the way”; they rest on a hidden institutional foundation of property rights, rule of law, and basic political order. A free market, a vigorous civil society, the spontaneous “wisdom of crowds” are all important components of a working democracy, but none can ultimately replace the functions of a strong, hierarchical government. There has been a broad recognition among economists in recent years that “institutions matter”: poor countries are poor not because they lack resources but because they lack effective political institutions. We need therefore to better understand where those institutions come from.

In other words–and he returns to this point in the second volume–Fukuyama is dismissive of arguments about the quantity of government in favor of arguments about the quality of government.

His ideas are interesting, they are relevant, and they are compelling. I highly, highly recommend this book.

The Myth of the Rational Voter: Lecture by Bryan Caplan

This is part of the DR Book Collection.

Following the results of November’s presidential election, I decided to read up on the social science on voter rationality. The first was Ilya Somin’s Democracy and Political Ignorance. The second was economist Bryan Caplan’s Princeton-published The Myth of the Rational Voter: Why Democracies Choose Bad Policies. Caplan argues that voters are not merely ignorant about economic policy; they are systematically biased in a way that puts them at complete odds with the economics profession. These biases include:

  • Anti-market bias: the public drastically underestimates the benefits of markets.
  • Anti-foreign bias: the public drastically underestimates the benefits of interactions with foreigners.
  • Make-work bias: the public equates prosperity with employment rather than production.
  • Pessimistic bias: the public is overly prone to think economic conditions are worse than they are.

For me, the evidence from surveys regarding the opinions of the public vs. economists was the most illuminating. People overwhelmingly support protectionism. Not only that, “solid majorities of noneconomists think it should be government’s responsibility to ‘keep prices under control’” (pg. 51). Other examples include:

  • Far fewer economists are concerned about “excessive taxation” than the public.[ref]Most academic economists are moderate Democrats: “Controlling for individuals’ party identification and ideology makes the lay-expert belief gap a little larger. Ideologically moderate, politically independent economists are totally at odds with ideologically moderate, politically independent noneconomists. How can this be? Economics only looks conservative compared to other social sciences, like sociology, where leftism reigns supreme. Compared to the general public, the typical economist is left of center. Furthermore, contrary to critics of the economics profession, economists do not reliably hold right-wing positions. They accept a mix of “far right” and “far left” views. Economists are more optimistic than very conservative Republicans about downsizing or excessive profits—and more optimistic about immigration and welfare than very liberal Democrats” (pg. 82).[/ref]
  • Far fewer economists are concerned about the deficit being “too high” than the public.
  • Few economists think foreign aid spending is “too high”, while a large number of the public does (foreign aid actually takes up about 1% of the federal budget).
  • Few economists think there are “too many immigrants”, while this is a concern for the public.

The list goes on. You can see a lecture by Caplan on his book below.