Stuff I Say at School – Part XI: The Extent of Political Ignorance

This is part of the Stuff I Say at School series.

The Assignment

A critical literature review of political ignorance among the public. This section specifically explores the academic literature on the extent of political ignorance, demonstrating that Americans know very little when it comes to politics and policy.

The Stuff I Said

What makes this particular skit humorous is how much it reflects reality. According to political scientists Christopher Achen and Larry Bartels, there is a “folk theory” of democracy that is widespread in American culture. This theory paints average citizens as engaged, well-informed participants in the political process, deliberating policies and selecting leaders who represent their well-reasoned preferences. “Unfortunately,” write Achen and Bartels, “while the folk theory of democracy has flourished as an ideal, its credibility has been severely undercut by a growing body of scientific evidence…That evidence demonstrates that the great majority of citizens pay little attention to politics.”


Michael Delli Carpini and Scott Keeter have defined political knowledge as “the range of factual information about politics that is stored in long-term memory.” Most of the surveys on which claims about political knowledge are based consist of recall questions, which “are designed to measure whether or not a person has selected declarative memory.” Drawing on Delli Carpini and Keeter’s work, Achen and Bartels display the ignorance of the typical American on these kinds of questions. For example, in 1952, “only 44% of Americans could name at least one branch of government. In 1972, only 22% knew something about Watergate. In 1985, only 59% knew whether their own state’s governor was a Democrat or a Republican. In 1986, only 49% knew which one nation in the world had used nuclear weapons.”

Recent survey evidence continues to support these findings. A 2018 poll found that 67% of Americans cannot name all three branches of government. Another poll found that a sizeable minority of Americans (39%) either think low GDP is better for the country than high GDP or are not sure. The Woodrow Wilson Foundation recently found that only 1-in-3 Americans can pass the U.S. Citizenship Test, with less than half the residents of every state but Vermont able to pass it. A 2014 Barna survey found that 84% of Americans are unaware that extreme poverty worldwide has decreased by more than half in the past three decades. Sixty-seven percent said they thought global poverty was actually increasing during that time. Similarly, a 2016 study found that only 8% of Americans believe extreme global poverty has decreased in the last 20 years. (A 2017 study placed the percentage slightly higher at 15%.) The late statistician Hans Rosling often tested his audience’s knowledge of the state of the world. Overall, he found that only 5% of Americans could answer a multiple-choice question about global poverty correctly: worse than chimpanzees picking at random.
This ignorance extends not only to basic facts about government, politics, and the economy, but to party makeup as well. A 2018 study found that “Republicans, Democrats, and independents, all overestimate the share of party-stereotypical groups in both the major parties.” For example, respondents thought 39.3% of Democrats belonged to a labor union (actual: 10.5%), 38.2% of Republicans earned over $250,000 a year (actual: 2.2%), and 31.7% of Democrats were gay, lesbian, or bisexual (actual: 6.3%).
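
To make the scale of these misperceptions concrete, here is a minimal arithmetic sketch. The percentages are the ones from the 2018 study quoted above; the script itself is purely illustrative:

```python
# Perceived vs. actual shares of party-stereotypical groups (percent),
# as reported in the 2018 study quoted above.
misperceptions = {
    "Democrats in a labor union":       (39.3, 10.5),
    "Republicans earning over $250k":   (38.2, 2.2),
    "Democrats who are gay/lesbian/bi": (31.7, 6.3),
}

for group, (perceived, actual) in misperceptions.items():
    gap = perceived - actual          # percentage-point overestimate
    ratio = perceived / actual        # how many times too large
    print(f"{group}: perceived {perceived}% vs. actual {actual}% "
          f"(off by {gap:.1f} points, a ~{ratio:.0f}x overestimate)")
```

Respondents did not miss by a little: the perceived share of high-income Republicans, for instance, is more than fifteen times the actual share.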


Georgetown political philosopher Jason Brennan divides the spread of political knowledge into four quartiles: “the top 25 percent of voters are well informed, the next 25 percent are badly informed, the next 25 percent are know-nothings, and bottom 25 percent are systematically misinformed.” According to data from the 1992 American National Election Studies, “93.4 percent of people in the top quartile, but only 13.1 percent of people in the bottom quartile, know that Republicans tend to be more conservative than Democrats. Among people in the lowest knowledge quartile, only 12.2 percent and 9.7 percent knew which party controlled the House of Representatives and Senate, respectively. The bottom 25 percent of citizens does worse than a coin flip when it comes to political knowledge—they are systematically in error.” When it comes to the demographics of these quartiles, political knowledge within the U.S. is strongly positively correlated with having a college degree, but negatively correlated with having a high school diploma or less. It is positively correlated with being in the top half of income earners, but negatively correlated with being in the bottom half. It is strongly positively correlated with being in the top quarter of income earners, and strongly negatively correlated with being in the bottom quarter. It is positively correlated with living in the western United States, and negatively correlated with living in the South. Political knowledge is positively correlated with being or leaning Republican, but negatively correlated with being a Democrat or leaning independent. It is positively correlated with being between the ages of thirty-five and fifty-four, but negatively correlated with other ages. It is negatively correlated with being black, and strongly negatively correlated with being female.


Legal scholar Ilya Somin’s work scours both the academic literature and a sweeping array of public surveys, including (but not limited to) the Annenberg Public Policy Center, Kaiser Health Tracking Poll, Pew Research Center, Bloomberg, Public Policy Research Institute, Reason-Rupe, and American National Election Studies. Voter ignorance is not merely about “specific policy issues but about the basic structure of government and how it operates.” He concludes, “Extensive evidence suggests that most Americans have little political knowledge. That ignorance covers knowledge of specific issues, knowledge of political leaders and parties, and knowledge of political institutions. The evidence extends to many of the crucial issues at stake in recent elections from 2000 to 2014. Moreover, much of the widespread ignorance relate[s] to fairly basic issues about the politicians, parties, issues, and the structure of politics.”


Relying on the 1996 Survey of Americans and Economists on the Economy (SAEE), GMU economist Bryan Caplan compares (1) the average belief of the general public on economic issues, (2) the average belief of Ph.D. economists, and (3) the estimated belief of a category Caplan labels the Enlightened Public. This latter category is the result of Caplan testing for both “self-serving” and “ideological” bias among economists by controlling for family income, job security, race, gender, age, and income growth. The Enlightened Public essentially answers the question “What would the average person believe if he had a Ph.D. in economics?” or, equivalently, “What would Ph.D. economists believe if their finances and political ideology matched those of the average person?” Caplan discovers that the answers of the economists and the Enlightened Public differ greatly from those of the general public on most economic issues. For example, the general public is far more concerned about the supposed negative economic effects of taxes, foreign aid, immigration, business tax breaks, the number of people on welfare, affirmative action, business profits, executive compensation, technology in the workplace, job outsourcing, and corporate downsizing. Caplan’s controls and comparisons indicate that (1) economic information and education change one’s views about economic issues and (2) the general public is lacking in these qualifications. This gap between economists and the general public is further confirmed by a 2013 study by Paola Sapienza and Luigi Zingales. Drawing on the Economic Expert Panel (EEP) and Financial Trust Index (FTI)—both from the University of Chicago—the researchers find that, “[o]n average, the percentage of agreement with a statement differs 35 percentage points between the two groups.”


Despite the strong consensus on the typical American’s political ignorance, Arthur Lupia of the University of Michigan is skeptical of the explanatory power of these survey data. He argues that in many cases, it is “not demonstrate[d] that recalling the items on [the] survey is a necessary condition for achieving high-value social outcomes” and, therefore, not a good standard for measuring relevant political knowledge. He also questions the legitimacy of the American National Election Studies, showing that obviously correct answers were sometimes marked as incorrect due to an overly rigid grading system. Finally, he notes that “decades of surveys and experiments provide evidence that ‘don’t know’ responses are mixtures of several factors. Ignorance is one such factor. Low motivation, personality, and gender also affect responses.” However, Achen and Bartels point out that “insufficient motivation is endemic to mass politics, not an artifact of opinion surveys[.]” Furthermore, they hold Lupia’s feet to the fire for the vagueness of statements like “high-quality decisions” or “high-value social outcomes.” Uninformed voters are supposedly capable of these things, yet Lupia provides no concrete examples. Brennan also argues that public polls actually overstate how much Americans really know about politics and policy. The first reason is that these polls “usually take the form of a multiple-choice test. When many citizens do not know the answer to a question, they guess.
Some of them get lucky, and the surveys mark them as knowledgeable.” These polls “count a citizen as knowledgeable if they know that we spend more on social security than defense, but they typically don’t check if they know how much more we spend.” Finally, these questions are about “easily verifiable facts…While most voting Americans cannot answer such questions, these questions do not require specialized social scientific knowledge.” Unfortunately, greater question complexity is associated with greater ignorance. According to Delli Carpini and Keeter, “as the amount of detail requested increases and as less visible institutions or processes are asked about, the percentage of the public able to correctly answer questions declines.”

In sum, the scholarly consensus is that the average American citizen knows very little about the major players, institutions, and processes of their government. What’s more, there is a significant gap between expert views on policy-related issues and those of the average American.

What Were the Results of the Washing Machine Tariffs?

As reported by The Washington Post,

When economists at the University of Chicago and the Federal Reserve studied the 2018 duty on washing machines, they found the expected rise in retail prices from foreign manufacturers such as Samsung and LG. Surprisingly, though, these brands also increased dryer prices. Then domestic manufacturers followed suit, simply because they could.

All told, the research shows, U.S. consumers are spending an additional $1.5 billion a year on washers and dryers as a result of the tariffs. That’s an extra $86 for each washing machine and $92 for each dryer, the authors estimate. And less than 10 percent of that goes to the U.S. treasury — about $82.2 million — the study showed…Foreign manufacturers are passing some costs on to consumers, while domestic ones are simply pocketing extra profits, according to the study.

…Manufacturers also capitalized on buyer habits when they bumped up the price of dryers, which were not subject to the tariffs. “Many consumers buy these goods in a bundle,” Tintelnot said. “Part of the price increase for washers was hidden by increasing the price of dryers.”

In sum, “U.S. consumers shouldered 125 to 225 percent of the costs of the washing-machine tariffs. And the duty was mostly a dud on the job-creation front,” costing consumers about $815,000 for every one of the 1,800 jobs created.
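
The headline numbers are internally consistent, which is easy to verify with a few lines of arithmetic. This sketch uses only the figures reported above:

```python
# Figures from the washing-machine tariff study quoted above.
annual_consumer_cost = 1.5e9   # extra consumer spending on washers/dryers, $/yr
tariff_revenue = 82.2e6        # amount reaching the U.S. Treasury, $/yr
jobs_created = 1_800
cost_per_job = 815_000         # consumer cost per job created, $

# The per-job figure roughly reproduces the $1.5B total consumer cost...
implied_total = cost_per_job * jobs_created
print(f"implied consumer cost: ${implied_total / 1e9:.2f}B")  # ~$1.47B

# ...while the Treasury captures well under 10% of what consumers pay.
treasury_share = tariff_revenue / annual_consumer_cost
print(f"Treasury's share of consumer cost: {treasury_share:.1%}")  # ~5.5%
```

In other words, consumers pay roughly $1.5 billion a year for 1,800 jobs, and only about a twentieth of that cost shows up as government revenue.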

That’s exciting. Looks like tariffs are exactly what they are cracked up to be.

Do Minimum Wage Hikes Drive Some Restaurants Out of Business?


From a recent NBER paper (quoting from an earlier draft): 

As theory would suggest, we find robust evidence that the impact of the minimum wage depends on how close a restaurant is to the margin of exit, proxied by its rating. Looking at city-level minimum wage changes in the San Francisco Bay Area (the “Bay Area”), we present two main findings. First, at all observed minimum wage levels, restaurants with lower ratings are more likely to exit, suggesting that they are less efficient in the economic sense. Moreover, lower rated restaurants are disproportionately affected by minimum wage increases. In other words, the impact of the minimum wage on exit is most pronounced among restaurants that are closer to the margin of exit. 

…Our results suggest that a $1 increase in the minimum wage leads to a 14 percent increase in the likelihood of exit for the median 3.5-star restaurant, but no impact for five-star restaurants (the point estimate is in fact negative, suggesting that the likelihood of exit might even decrease for five-star restaurants, but the estimate is not statistically different from zero). These effects are robust to a number of different specifications, including controlling for time-varying county characteristics that may influence both minimum wage policies and restaurant demand, city-specific time trends to account for preexisting trends, as well as county-year fixed effects to control for spatial heterogeneity in exit trends.

…Overall, our findings shed light on the economic impact of the minimum wage. Basic theory predicts that the minimum wage will cause firms that cannot adjust in other ways to cover their increased costs to exit the market. We find that lower rated firms (which are already closer to the margin of exit) are disproportionately impacted by the minimum wage. After a minimum wage increase, they are more likely to exit the market altogether and more likely to raise their prices (pg. 2-5).

This matches previous research, which finds that labor-intensive restaurants tend to exit and make room for capital-intensive restaurants. 

Who Bears the Cost of the Minimum Wage?


From a forthcoming article in the American Economic Review (quoting from the draft version) on Hungarian minimum wage hikes:

Most firms responded to the minimum wage by raising wages instead of destroying jobs. Our estimates imply that out of 290 thousand minimum wage workers in Hungary, around 30 thousand (0.076% of aggregate employment) lost their job, while the remaining 260 thousand workers experienced a 60% increase in their wages. As a result, firms employing minimum wage workers experienced a large increase in their total labor cost that was mainly absorbed by higher output prices and higher total revenue. We also estimated that firms substituted labor with capital and their profits fell slightly. These results suggest that the incidence of the minimum wage fell mainly on consumers. Given the relatively small effect on employment, our results also suggest that minimum wages can redistribute income from consumers to low-wage workers without large efficiency losses. Our findings also indicate that the optimal level of the minimum wage is likely to vary across industries, cities, and countries. In countries where low-wage jobs are concentrated in the local service sector (such as Germany or the U.S.) raising the minimum wage is likely to cause limited disemployment effects or efficiency losses. Moreover, in cities where mainly rich consumers enjoy the services provided by low-wage workers this redistribution will be from rich to poor. The heterogeneous responses across industries also underline the advantages of sector-specific minimum wage policies used in some European countries such as Italy or Austria. For instance, setting a higher minimum wage in the non-tradable sector than in the tradable sector can push up wages relatively more where it will generate more modest disemployment effects (pg. 23-24).

Passing the costs on to consumers fits with previous evidence. It also makes evident that the kind of industry (e.g., tradable vs. non-tradable) matters when it comes to the positive or negative effects of the minimum wage.

Stuff I Say at School – Part VIII: The Impact of Openness

This is part of the Stuff I Say at School series.

The Assignment

1. Do you feel that a country can thrive in an insular or isolated capacity? Are exchanges needed for a country to be successful? Do you see any examples of countries that have been reluctant to adopt new ideologies or integration?

2. What did we learn from the Columbian exchange that would be applicable to modern day society?

The Stuff I Said

1. While I think a country can thrive to some extent in isolation depending on a number of factors, it will not thrive as much as it could have had it been integrated into a larger exchange network. An extreme historical case is Tasmania: when the island was cut off from the mainland by rising sea levels, the population not only failed to progress, but actually regressed. Anthropologist Joseph Henrich surveyed the archaeological evidence and found that the isolation caused Tasmanians to lose a number of skills and technologies they had once possessed, including bone tools, cold-weather clothing, nets, fishing spears, barbed spears, etc. Even their canoeing skills and technologies worsened. Beyond comparative advantage, trade leads to innovation (what author Matt Ridley calls “ideas having sex”). And it is innovation–technological innovation in particular–that truly transforms standards of living. 

Protectionism and isolationism have had a bit of a global resurgence lately, but these positions fly in the face of the expert consensus as far as economic welfare is concerned (check out the survey data on tariffs at the bottom of the post). This populist backlash against globalization has led to a string of recent academic books empirically and philosophically defending economic openness.

2. I’ll rely on Nobel laureate Angus Deaton for the next question:

The historian Ian Morris has described how increased trade around the second century CE merged previously separate disease pools that, since the beginning of agriculture, had evolved in the West, South Asia, and East Asia, “as if they were on different planets.” Catastrophic plagues broke out in China and in the eastern outposts of the Roman Empire. The Columbian exchange after 1492 is an even better-known example. Many historical epidemics started from new trade routes or new conquests.

…Yet globalization also opens its routes to the enemies of disease. We have already seen how the germ theory of disease–a set of ideas and practices developed in the North–spread rapidly to the rest of the world after 1945. Knowledge about drugs to control high blood pressure spread rapidly across the world after 1970, producing…synchronized declines in mortality…That cigarette smoking caused cancer did not have to be rediscovered country by country. While the origins of HIV/AIDS are in dispute, there is no dispute about its rapid spread from one continent to another. The scientific response–the discovery of the virus, the deduction of its means of transmission, and the development of chemotherapy that is transforming the disease from a fatal to a chronic condition–was extraordinarily rapid by historical standards, although hardly rapid enough for the millions who died as they waited. Today’s understanding of the disease, although still incomplete, has underpinned the response–not just in the rich world–and in the worst affected African countries rates of new infection have fallen in the past few years, and life expectancy is beginning to rise again (The Great Escape, pg. 150-151).


From Gregory Mankiw’s Principles of Economics, 7th ed. (pg. 32).

From the IGM Economic Experts Panel, University of Chicago

Stuff I Say at School – Part VII: The Importance of Institutions

This is part of the Stuff I Say at School series.

Summary & Commentary on Week’s Readings

Acemoglu et al. argue that inefficient institutions persist for a number of major reasons. First, the lack of third-party enforcement of commitments prevents elites from relinquishing their monopoly on political power. Furthermore, the beneficiaries of the economic status quo are usually unwilling to risk their economic welfare through competition. This leads them to promote protectionism and engage further in rent-seeking activities. Economies whose institutions encourage these kinds of activities fail to grow. We see this kind of conflict manifest in various areas of the economy, from labor and financial markets to price regulation. The more institutions concentrate political power in the hands of the few, the more incentives are warped, distorting the paths to economic growth.


In their book Why Nations Fail: The Origins of Power, Prosperity, and Poverty, Daron Acemoglu and James Robinson distinguish between inclusive and extractive institutions, with the former creating the conditions for prosperity. “Inclusive economic institutions,” they write,

…are those that allow and encourage participation by the great mass of people in economic activities that make best use of their talents and skills and that enable individuals to make the choices they wish. To be inclusive, economic institutions must feature secure private property, an unbiased system of law, and a provision of public services that provides a level playing field in which people can exchange and contract; it also must permit the entry of new businesses and allow people to choose their careers…Inclusive economic institutions foster economic activity, productivity growth, and economic prosperity (pg. 74-75).

On the other hand, extractive economic institutions lack these properties and instead “extract incomes and wealth from one subset of society to benefit a different subset,” empowering the few at the expense of the many (pg. 76).

The importance of getting institutions right is highlighted by Rodrik and Subramanian’s study. Three theoretical culprits have been blamed for the vast income inequality between countries: (1) geography, (2) integration (globalization, international trade), and (3) institutions. Regression analyses indicate that institutions trump all other explanations. This is also shown from the outset of Acemoglu and Robinson’s Why Nations Fail in their story of Nogales, Arizona (United States) and Nogales, Sonora (Mexico): two towns with essentially the same culture, geography, and relatively free trade (NAFTA). In most ways they are the same place; the only reason they are two towns is the institutional barrier between two separate countries. Yet one is rich and one is poor because of institutions. The direct effects of geography are weak at best, while integration has no direct effects. However, there are indirect effects of integration: institutions have significant, positive effects on integration, while integration has a positive impact on institutions. This, in some sense, creates a virtuous, growth-enhancing cycle. Rodrik and Subramanian point out that the institutional factors emphasized the most have largely been market-oriented (e.g., property rights, enforceable contracts). Yet factors such as regulation, financial stabilization, and social insurance also matter in getting institutions right.

The interaction between political and economic institutions is an important insight. For example, even though most research finds that seemingly liberal political institutions like democracy have no direct impact on economic growth, more recent evidence from Acemoglu and colleagues suggests that they may in fact contribute to growth. What’s more, the evidence strongly suggests that economic openness—particularly international trade—contributes to growth. A 2010 study used data from 131 developed and developing countries and found that reductions in trade protections led to higher levels of income per capita. A World Bank study found that between 1950 and 1998, “countries that liberalized their trade regimes experienced average annual growth rates that were about 1.5 percentage points higher than before liberalization. Postliberalization investment rates rose 1.5-2.0 percentage points, confirming past findings that liberalization fosters growth in part through its effect on physical capital accumulation…Trade-centered reforms thus have significant effects on economic growth within countries” (pg. 212). A 2016 IMF paper found that trade liberalization boosts productivity through increased competition and greater variety and quality of inputs. All this suggests that Sachs and Warner were correct when they found “that open policies together with other correlated policies were sufficient for growth in excess of 2 percent during 1970-89” (pg. 45; fn. 61). Their findings also suggest “that property rights, freedom, and safety from violence are additional determinants of growth” (pg. 50). Acemoglu and Johnson in a 2005 paper found “robust evidence that property rights institutions have a major influence on long-run economic growth, investment, and financial development, while contracting institutions appear to affect the form of financial intermediation but have a more limited impact on growth, investment, and the total amount of credit in the economy” (pg. 988).

In short, inclusive institutions are necessary to fully reap the benefits of an open economy.

Is Student Loan Forgiveness for the Marginalized?


I saw this floating around Facebook recently with the news of Elizabeth Warren’s student loan plan. For those unfamiliar with what Mayfield is referencing, here’s the entry from the HarperCollins Bible Dictionary:

As another Bible dictionary clarifies, “Though Leviticus 25 does not explicitly discuss debt cancellation, the return of an Israelite to his land plus the release of slaves implies the cancellation of debts that led to slavery or the loss of land.”

So does Warren’s plan benefit “the marginalized”?

According to Adam Looney at the Brookings Institution, Warren’s proposal is “regressive, expensive, and full of uncertainties…[T]he top 20 percent of households receive about 27 percent of all annual savings, and the top 40 percent about 66 percent. The bottom 20 percent of borrowers by income get only 4 percent of the savings. Borrowers with advanced degrees represent 27 percent of borrowers, but would claim 37 percent of the annual benefit.”


He continues,

Debt relief for student loan borrowers, of course, only benefits those who have gone to college, and those who have gone to college generally fare much better in our economy than those who don’t. So any student-loan debt relief proposal needs first to confront a simple question: Why are those who went to college more deserving of aid than those who didn’t? More than 90 percent of children from the highest-income families have attended college by age 22 versus 35 percent from the lowest-income families. Workers with bachelor’s degrees earn about $500,000 more over the course of their careers than individuals with high school diplomas. That’s why about 50 percent of all student debt is owed by borrowers in the top quartile of the income distribution and only 10 percent owed by the bottom 25 percent. Indeed, the majority of all student debt is owed by borrowers with graduate degrees.

Drawing on 2016 data from the Federal Reserve’s Survey of Consumer Finances, Looney’s final analysis

shows that low-income borrowers save about $569 in annual payments under the proposal, compared to $900 in the top 10 percent and $2,653 in the 80th to 90th percentiles. Examining the distribution of benefits, top-quintile households receive about 27 percent of all annual savings, and the top 40 percent about 66 percent. The bottom 20 percent of borrowers by income get 4 percent of the savings…[W]hile households headed by individuals with advanced degrees represent only 27 percent of student borrowers, they would claim 37 percent of the annual savings. White-collar workers claim roughly half of all savings from the proposal. While the Survey of Consumer Finances does not publish detailed occupational classification data, the occupational group receiving the largest average (and total) amount of loan forgiveness is the category that includes lawyers, doctors, engineers, architects, managers, and executives.  Non-working borrowers are, by and large, already insured against having to make payments through income-based repayment or forbearances; most have already suspended their loan payments. While debt relief may improve their future finances or provide peace of mind, it doesn’t offer these borrowers much more relief than that available today.  
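
Put side by side, the distributional tilt is stark. A quick sketch, using only the dollar amounts and shares from Looney's analysis quoted above:

```python
# Annual savings under the proposal by income group (USD/year), from
# Looney's analysis quoted above.
annual_savings = {
    "low-income borrowers": 569,
    "top 10 percent": 900,
    "80th-90th percentiles": 2_653,
}
baseline = annual_savings["low-income borrowers"]
for group, amount in annual_savings.items():
    print(f"{group}: ${amount}/yr ({amount / baseline:.1f}x low-income savings)")

# Savings shares vs. borrower shares (percent): a value above 1.00x means a
# group captures more of the benefit than its share of borrowers.
shares = {"bottom 20%": (20, 4), "top 20%": (20, 27), "top 40%": (40, 66)}
for group, (borrowers_pct, savings_pct) in shares.items():
    print(f"{group} of borrowers: {savings_pct}% of savings "
          f"({savings_pct / borrowers_pct:.2f}x proportional share)")
```

Borrowers in the 80th-90th percentiles save nearly five times as much per year as low-income borrowers, and the bottom fifth captures only a fifth of its proportional share of the benefit.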

The Urban Institute’s analysis has similar findings (though their tone is more optimistic):


I’m not sure whether or not Warren’s plan is a good one (I’m skeptical, especially given some of the results abroad). But I’m not big on acting like college graduates in a rich country are the marginalized of society.

Does Good Management Produce More Equal Pay?

Nicholas Bloom–whose research on the economics of management I’ve relied on in my own work–and colleagues have an interesting article in Harvard Business Review:

For 2010 and 2015, the U.S. Census Bureau fielded the Management and Organizational Practices Survey (MOPS) in partnership with a research team of subject matter experts, including one of us (Nick), as well as Erik Brynjolfsson and John Van Reenen. The MOPS collects information on the use of management practices related to monitoring (collecting and analyzing data on how the business is performing), targets (setting tough, but achievable, short- and long-term goals), and incentives (rewarding high performers while training, reassigning, or dismissing low performers) at a representative sample of approximately 50,000 U.S. manufacturing plants per survey wave. We refer to practices that are more explicit, formal, frequent, or specific as “more structured practices.” From the MOPS and related data, researchers have demonstrated just how important the use of these structured management practices is for companies and even entire economies, since firms that implement more of these practices tend to perform better.  We wanted to know what effect these management practices have on workers.

We found that companies that reported more structured management practices according to the MOPS paid their employees more equally, as measured by the difference between pay for workers at the 90th (top) and 10th (bottom) percentiles within each firm.

The authors fully admit, “To be honest, it surprised us…If anything, we expected the opposite…We hypothesized that more structured management would lead to rewarding high-performers over others, therefore leading to a rise in inequality inside of the firm. As the chart above shows, the reality is exactly the reverse – and that remains true even after controlling for employment, capital usage, firm age, industry, state, and how educated the employees are.” They continue,

Our research finds that the negative correlation between structured management and inequality is driven by a strong negative correlation between the use of structured monitoring practices and inequality. By contrast, higher usage of structured incentives practices was positively correlated with inequality, albeit weakly. In other words, our finding seems to suggest that companies that collect and analyze specific and high-frequency data about their businesses tend to have a smaller gap between the earnings of workers at the top of the income distribution and the earnings of workers at the bottom of the distribution.

The authors offer several possible explanations:

Previous research shows that firms with more structured management practices are more profitable on average, and there’s long been evidence that when companies make extra profits they share some of them with workers. Perhaps companies with more structured practices allocate these profits such that less well-paid workers get more of the pie.

The relationship could also result from increased efficiency. Maybe firms with more structured practices have more efficient low-paid workers, as a response to training or monitoring practices, and their pay reflects that extra efficiency.

Finally, it could be that firms with more structured practices are more focused on specific tasks and rely more on outsourcing. More and more companies are outsourcing tasks like cleaning, catering, security, and transport. If outsourcing is more common for firms that use more structured practices, workers performing tasks outside of the companies’ core tasks would no longer be on those companies’ direct payrolls. If the jobs that are outsourced are lower-paying than the jobs that are held by employees, the companies’ pay data will become more equal.

Other research finds that paying employees higher wages

  • Motivates employees to work harder.
  • Attracts more capable and productive workers.
  • Leads to lower turnover.
  • Enhances quality and customer service.
  • Reduces disciplinary problems and absenteeism.
  • Requires fewer resources for monitoring.
  • Reduces poor performance caused by financial anxiety.

Looking forward to Bloom et al.’s published work.

How and Why to Rate Books and Things

Here’s the image that inspired this post:


Now, there’s an awful lot of political catnip in that post, but I’m actually going to ignore it. So, if you want to hate on Captain Marvel or defend Captain Marvel: this is not the post for you. I want to talk about an apolitical disagreement I have with this perspective.

The underlying idea of this argument is that you should rate a movie based on how good or bad it is in some objective, cosmic sense. Or at least based on something other than how you felt about the movie. In this particular case, you should rate the movie based on some political ideal or in such a way as to promote the common good. Or something. No, you shouldn't. All of these approaches are bad ideas.

That's not how this works

The correct way to rate a movie–or a book, or a restaurant, etc.–is to just give the rating that best reflects how much joy it brought you. That’s it!

Let’s see if I can convince you.

To begin with, I’m not saying that such a thing as objective quality doesn’t exist. I think it probably does. No one can really tell where subjective taste ends and objective quality begins, but I’m pretty sure that “chocolate or vanilla” is a matter of purely personal preference but “gives you food poisoning or does not” is a matter of objective quality.

So I’m not trying to tell you that you should use your subjective reactions because that’s all there is to go on. I think it’s quite possible to watch a movie and think to yourself, “This wasn’t for me because I don’t like period romances (personal taste), but I can recognize that the script, directing, and acting were all excellent (objective quality) so I’m going to give it 5-stars.”

It’s possible. A lot of people even think there’s some ethical obligation to do just that. As though personal preferences and biases were always something to hide and be ashamed of. None of that is true.

The superficial reason I think it’s a bad idea has to do with what I think ratings are for. The purpose of a rating–and by a rating I mean a single, numeric score that you give to a movie or a book, like 8 out of 10 or 5 stars–is to help other people find works that they will enjoy and avoid works that they won’t enjoy. Or, because you can do this, to help people specifically look for works that will challenge them and that they might not like, and maybe pass up a book that will be too familiar. You can do all kinds of things with ratings. But only if the ratings are simple and honest. Only if the ratings encode good data.

The ideal scenario is a bunch of people leaving simple, numeric ratings for a bunch of works. This isn’t Utopia, it’s Goodreads. (Or any of a number of similar sites.) What you can then do is load up your list of works that you’ve liked / disliked / not cared about and find other people out there who have similar tastes. They’ve liked a lot of the books you’ve liked, they’ve disliked a lot of the books you’ve disliked, and they’ve felt meh about a lot of the books you’ve felt meh about. Now, if one of those people has read a book you haven’t read and they gave it 5-stars: BAM! You’ve quite possibly found your next great read.

You can do this manually yourself. In fact, it’s what all of us instinctively do when we start talking to people about movies. We compare notes. If we have a lot in common, we ask that person for recommendations. It’s what we do in face-to-face interactions. When we use big data sets and machine learning algorithms to automate the process, we call them recommender systems. (What I’m describing is the collaborative filtering approach, as opposed to content-based filtering, which also has its place.)
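The collaborative-filtering idea can be sketched in a few lines: find the rater whose shared ratings look most like yours, then borrow their top-rated book that you haven't read. All names and ratings below are invented for illustration:

```python
from math import sqrt

# Toy user-based collaborative filtering over honest 1-5 star ratings.
ratings = {
    "you": {"Dune": 5, "Gone Girl": 2, "Hyperion": 5},
    "ana": {"Dune": 5, "Gone Girl": 1, "Hyperion": 4, "Blindsight": 5},
    "bo":  {"Dune": 2, "Gone Girl": 5, "Hyperion": 1, "Big Sleep": 5},
}

def similarity(a, b):
    """Cosine similarity over the books both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[k] * b[k] for k in shared)
    na = sqrt(sum(a[k] ** 2 for k in shared))
    nb = sqrt(sum(b[k] ** 2 for k in shared))
    return dot / (na * nb)

def recommend(user, ratings):
    """Suggest the unread book best-rated by the most similar other user."""
    me = ratings[user]
    others = [(similarity(me, ratings[u]), u) for u in ratings if u != user]
    _, twin = max(others)  # the rater whose tastes look most like yours
    unread = {book: r for book, r in ratings[twin].items() if book not in me}
    return max(unread, key=unread.get) if unread else None
```

Real recommender systems are far more sophisticated, but the core move is the same: honest ratings let the algorithm find your taste-twins.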

This matters a lot to me for the simple reason that I don’t like much of what I read. So, it’s kind of a topic that’s near and dear to my heart. 5-star books are rare for me. Most of what I read is probably 3-stars. A lot of it is 1-star or 2-star. In a sea of entertainment, I’m thirsty. I don’t have any show that I enjoy watching right now. I’m reading a few really solid series, but they come out at a rate of 1 or 2 books a year, and I read more like 120 books a year. The promise of really deep collaborative filtering is really appealing if it means I can find more of the books I’d actually love.

But if you try to be a good citizen and rate books based on what you think their objective quality is, the whole system breaks down.

Imagine a bunch of sci-fi fans and a bunch of mystery fans that each read a mix of both genres. The sci-fi fans enjoy the sci-fi books more (and the mystery fans enjoy the mystery books more), but they try to be objective in their ratings. The result is that the two groups disappear from the data. You can no longer go in and find the group that aligns with your interests and then weight their recommendations more heavily. Instead of having one clear population that gives high marks to the sci-fi stuff and another that gives high marks to the mystery stuff, you just have one amorphous group that gives high (or maybe medium) marks to everything.

How is this helpful? It is not. Not as much as it could be, anyway.
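You can see the data loss in a toy example (the numbers are made up): with honest ratings the two camps are easy to tell apart, but with dutiful "objective" ratings their rating vectors converge and the signal vanishes.

```python
# Ratings for four books: two sci-fi titles, then two mystery titles.
books = ["SF1", "SF2", "MY1", "MY2"]

# Honest ratings: each camp scores its own genre higher.
honest = {
    "sf_fan": [5, 4, 2, 2],
    "my_fan": [2, 2, 5, 4],
}
# "Dutiful" ratings: both camps try to score objective quality instead,
# so their vectors end up identical.
dutiful = {
    "sf_fan": [4, 3, 4, 3],
    "my_fan": [4, 3, 4, 3],
}

def disagreement(a, b):
    """Mean absolute gap between two raters; 0 means indistinguishable."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```

With the honest ratings the two fans disagree by 2.5 stars on average, which is exactly the structure a recommender needs; with the dutiful ratings they disagree by 0.0, and the two populations are gone.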

In theoretical terms, you have to understand that your subjective reaction to a work is complex. It incorporates the objective quality of the work, your subjective taste, and then an entire universe of random chance. Maybe you were angry going into the theater, and so the comedy didn’t work for you the way it normally would have. Maybe you just found out you got a raise, and everything was ten times funnier than it might otherwise have been. This is statistical noise, but it’s unbiased noise, which means it basically averages away once you have a large enough sample.
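That averaging-out claim is easy to check with a quick simulation. Assume (hypothetically) a book with a "true" enjoyment score of 4.0 and raters whose moods add unbiased noise:

```python
import random

random.seed(42)  # reproducible

TRUE_ENJOYMENT = 4.0  # hypothetical noise-free score for some book

def one_rating():
    """A single rating: the true score plus unbiased mood noise."""
    return TRUE_ENJOYMENT + random.uniform(-1.5, 1.5)

# With only a handful of ratings, the average can be well off the mark...
few = sum(one_rating() for _ in range(5)) / 5
# ...but with many raters, the noise cancels and the average homes in on 4.0.
many = sum(one_rating() for _ in range(10_000)) / 10_000
```

The same logic is why biased error (like rating to advance an agenda) is worse than mood noise: bias points in one direction, so no amount of averaging makes it go away.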

On the other hand, if you try to fish out the objective components of a work from the stew of subjective and circumstantial components, you’re almost guaranteed to get it wrong. You don’t know yourself very well. You don’t know for yourself where your objective assessment ends and your subjective taste begins. You don’t know for yourself what unconscious factors were at play when you read that book at that time of your life. You can’t disentangle the objective from the subjective, and if you try you’re just going to end up introducing biased error into the equation. (In the Captain Marvel example above, you’re explicitly introducing political assessments into your judgment of the movie. That’s silly, regardless of whether your politics make you inclined to like it or hate it.)

What does this all mean? It means that it’s not important to rate things objectively (you can’t, and you’ll just mess it up), but it is helpful to rate things frequently. The more people we have rating things in a way that can be sorted and organized, the more use everyone can get from those ratings. In this sense, ratings have positive externalities.

Now, some caveats:

Ratings vs. Reviews

A rating (in my terminology, I don’t claim this is the Absolute True Definition) is a single, numeric score. A review is a mini-essay where you get to explain your rating. The review is the place where you should try to disentangle the objective from the subjective. You’ll still fail, of course, but (1) it won’t dirty the data and (2) your failure to be objective can still be interesting and even illuminating. Reviews–the poor man’s version of criticism–are a different beast, and they play by different rules.

So: don’t think hard about your ratings. Just give a number and move on.

Do think hard about your reviews (if you have time!). Make them thoughtful and introspective and personal.

Misuse of the Data

There is a peril to everyone giving simplistic ratings, which is that publishers (movie studios, book publishers, whatever) will be tempted to try and reverse-engineer guaranteed money makers.

Yeah, that’s a problem, but it’s not like they’re not doing that anyway. The reason that movie studios keep making sequels, reboots, and remakes is that they are already over-relying on ratings. But they don’t rely on Goodreads or Rotten Tomatoes. They rely on money.

This is imperfect, too, given the different timing of digital vs. physical media channels, etc., but the point is that adding your honest ratings to Goodreads isn’t going to make traditional publishing any more likely to try and republish last year’s cult hit. They’re going to do that anyway, and they already have better data (for their purposes) than you can give them.

Ratings vs. Journalism

My advice applies to entertainment. I’m not saying that you should just rate everything without worrying about objectivity. This should go without saying but, just in case, I said it.

You shouldn’t apply this reasoning to journalism because one vital function of journalism for society is to provide a common pool of facts that everyone can then debate about. One reason our society is so sadly warped and full of hatred is that we’ve lost that kind of journalism.

Of course, it’s probably impossible to be perfectly objective. The term is meaningless. Human beings do not passively receive input from our senses. Every aspect of learning–from decoding sounds into speech to the way vision works–is an active endeavor that depends on biases and assumptions.

When we say we want journalists to be objective, what we really mean is that (1) we want them to stick to objectively verifiable facts (or at least not do violence to them) and (2) we would like them to embody, insofar as possible, the common biases of the society they’re reporting to. There was a time when we, as Americans, knew that we had certain values in common. I believe that for the most part we still do. We’re suckers for underdogs, we value individualism, we revere hard work, and we are optimistic and energetic. A journalistic establishment that embraces those values is probably one that will serve us well (although I haven’t thought about it that hard, and it still has to follow rule #1 about getting the facts right). That’s bias, but it’s a bias that is positive: a bias towards truth, justice, and the American way.

What we can’t afford, but we unfortunately have to live with, is journalism that takes sides within the boundaries of our society.

Strategic Voting

There are some places other than entertainment where this logic does hold, however, and one of them is voting. One of the problems of American voting is that we go with winner-take-all plurality voting, which is like the horse-and-buggy era of voting technology. Plurality voting is probably a big part of why we’re stuck with a 2-party system, because it encourages strategic voting.

Just like rating Captain Marvel higher or lower because your politics make you want it to succeed or fail, strategic voting is where you vote for the candidate that you think can win rather than the candidate that you actually like the most.

There are alternatives that (mostly) eliminate this problem, the most well-known of which is instant-runoff voting. Instead of voting for just one candidate, you rank the candidates in the order that you prefer them. This means that you can vote for your favorite candidate first even if he or she is a longshot. If they don’t win, no problem. Your vote isn’t thrown away. In essence, it’s automatically moved to your second-favorite candidate. You don’t actually need to have multiple run-off elections. You just vote once with your full list of preferences and then it’s as if you were having a bunch of runoffs.
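The counting procedure described above can be sketched in a few lines of Python. This is a minimal version that assumes each ballot is a ranked list of candidate names and ignores tie-breaking subtleties:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff voting: repeatedly drop the candidate with the fewest
    first-place votes and transfer those ballots to each voter's next
    surviving choice, until someone holds a majority.
    Each ballot is a list of candidates in preference order."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in ballot if c in remaining)
                        for ballot in ballots
                        if any(c in remaining for c in ballot))
        top, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):  # strict majority of live ballots
            return top
        remaining.discard(min(tally, key=tally.get))  # eliminate the loser

# Example: "center" is eliminated first, and its ballots flow to "right".
ballots = ([["left", "center"]] * 4 +
           [["right", "center"]] * 3 +
           [["center", "right"]] * 2)
winner = instant_runoff(ballots)
```

In the example, "left" leads the first round 4-3-2 but has no majority; once "center" is eliminated, its two ballots transfer and "right" wins 5-4. That transfer is exactly what lets you rank a longshot first without throwing your vote away.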

There are other important reasons why I think it’s better to vote for simple, subjective evaluations of the state of the country instead of trying to figure out who has the best policy choices, but I’ll leave that discussion for another day.

Limitations

The idea of simple, subjective ratings is not a cure-all. As I noted above, it’s not appropriate for all scenarios (like journalism). It’s also not infinitely powerful. The more people you have and the more things they rate (especially when lots of diverse people are rating the same thing), the better. If you have 1,000 people, maybe you can detect who likes what genre. If you have 10,000 people, maybe you can also detect sub-genres. If you have 100,000 people, maybe you can detect sub-genres and other characteristics, like literary style.

But no matter how many people you have, you’re never going to be able to pick up every possible relevant factor in the data because there are too many and we don’t even know what they are. And, even if you could, that still wouldn’t make predictions perfect because people are weird. Our tastes aren’t just a list of items (spaceships: yes, dragons: no). They are interactive. You might really like spaceships in the context of gritty action movies and hate spaceships in your romance movies. And you might be the only person with that tic. (OK, that tic would probably be pretty common, but you can think of others that are less so.)

This is a feature, not a bug. If it were possible to build a perfect recommendation algorithm, it would also be possible (at least in theory) to build an algorithm to generate optimal content. I can’t think of anything more hideous or dystopian. At least, not as far as artistic content goes.

I’d like a better set of data because I know that there are an awful lot of books out there right now that I would love to read. And I can’t find them. I’d like better guidance.

But I wouldn’t ever want to turn over my reading entirely to a prediction algorithm, no matter how good it is. Or at least, not a deterministic one. I prefer my search algorithms to have some randomness built in, like simulated annealing.

I’d say about 1/3rd of what I read is fiction I expect to like, about 1/3rd is non-fiction I expect to like, and 1/3rd is random stuff. That random stuff is so important. It helps me find stuff that no prediction algorithm could ever help me find.
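For what it’s worth, that rough split is easy to mechanize. Here’s a toy Python sketch of a picker that spends two thirds of its draws on books I expect to like and one third on pure exploration (the pool names are hypothetical):

```python
import random

def pick_next_read(liked_fiction, liked_nonfiction, everything, rng=random):
    """Mimic a 1/3 fiction, 1/3 non-fiction, 1/3 random reading split:
    roll once, then draw from the matching pool of candidate books."""
    roll = rng.random()
    if roll < 1 / 3:
        return rng.choice(liked_fiction)      # fiction I expect to like
    if roll < 2 / 3:
        return rng.choice(liked_nonfiction)   # non-fiction I expect to like
    return rng.choice(everything)             # deliberate noise: random stuff
```

The last branch is the simulated-annealing spirit: injecting randomness keeps the search from settling into a local optimum of comfortable, familiar reads.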

It also helps the system overall, because it means I’m not trapped in a little clique with other people who are all reading the same books. Reading outside your comfort zone–and rating those books–is a way to build bridges between fandoms.

So, yeah. This approach is limited. And that’s OK. The solution is to periodically shake things up a bit. So those are my rules: read a lot, rate everything you read as simply and subjectively as you can, and make sure that you’re reading some random stuff every now and then to keep yourself out of a rut and to build bridges to people with different tastes than your own.

Is Contract Enforcement Important for Firm Productivity?

Contract enforcement is a major player in measuring the ease of doing business in a country. A new working paper demonstrates the importance of enforceable contracts to firm productivity:

In Boehm and Oberfield (2018) we study the use of intermediate inputs (materials) by manufacturing plants in India and link the patterns we find to a major institutional failure: the long delays that petitioners face when trying to enforce contracts in a court of justice. India has long struggled with the sluggishness of its judicial system. Since the 1950’s, the Law Commission of India has repeatedly highlighted the enormous backlogs and suggested policies to alleviate the problem, but with little success. Some of these delays make international headlines, such as in 2010, when eight executives were convicted in the first instance for culpability in the 1984 Bhopal gas leak disaster. One of them had already passed away, and the other seven appealed the conviction (Financial Times 2010)

These delays are not only a social problem, but also an economic problem. When enforcement is weak, firms may choose to purchase from suppliers that they trust (relatives, or long-standing business partners), or avoid purchasing the inputs altogether such as by vertically integrating and making the components themselves, or by switching to a different production process. These decisions can be costly. Components that are tailored specifically to the buyer (‘relationship-specific’ intermediate inputs) are more prone to hold-up problems, and are therefore more dependent on formal court enforcement.

…Our results suggest that courts may be important in shaping aggregate productivity. For each state we ask how much aggregate productivity of the manufacturing sector would rise if court congestion were reduced to be in line with the least congested state. On average across states, the boost to productivity is roughly 5%, and the gains for the states with the most congested courts are roughly 10% (Figure 3).