The Effects of Indian Child Labor Laws

A recent working paper looks at the effects of India’s 1986 anti-child labor law. Once again, good intentions and actual outcomes are at odds with one another:

The estimated effect of the ban is to increase relative employment among children under the age of 14. Having an underage sibling leads to a 0.3 percentage point increase in the likelihood of engaging in work after the ban for the very young. While this point estimate is small, it is both statistically and economically significant; the pre-ban proportion of children employed in that age range is only 2 percent so the effect of the ban is to increase employment by 15% over the mean for this group. The ban increases the probability of employment by 0.8 percentage points (5.6% over the mean) for young children ages 10-13. However, older children ages 14-17 overall are unaffected by the ban. The effect for this group is both small relative to the mean and statistically insignificant. Again, the largest increase in child labor is in agriculture…which is consistent with the partial mobility case of the two-sector model where there is restricted entry into manufacturing (pg. 22).
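The quoted passage expresses percentage-point effects relative to pre-ban means. A quick back-of-the-envelope sketch of that arithmetic (the figures are from the quote above; the implied baseline for ages 10-13 is my own inference, not a number the paper reports directly):

```python
def effect_over_mean(effect_pp, baseline_pp):
    """Express a percentage-point effect as a share of the pre-ban mean."""
    return 100 * effect_pp / baseline_pp

# Very young children: +0.3 pp on a 2 pp pre-ban employment rate.
print(effect_over_mean(0.3, 2.0))  # 15.0, matching the paper's "15% over the mean"

# Ages 10-13: +0.8 pp is reported as 5.6% over the mean,
# implying a pre-ban employment rate of roughly 0.8 / 0.056 ≈ 14.3 pp.
print(round(0.8 / 0.056, 1))
```

This is why a "small" point estimate of 0.3 percentage points is still economically meaningful: the baseline employment rate for that group is itself tiny.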

The authors then look at five measures of household welfare:

  • Per capita expenditure.
  • Per capita food expenditure.
  • Caloric intake per capita.
  • Staple share of calories; i.e., “a measure of household nutritional adequacy in the presence of caloric needs that are unknown or variable across households. [The] logic is that if households attach a high disutility to having caloric intake below caloric needs, they will substitute towards the cheapest sources of calories (staples)” (pg. 25).
  • Household asset index; i.e., “a set of variables that capture the quality and quantity of housing, the type of energy used for cooking and lighting, and the quantity of electricity used (which is likely to be correlated with the number of appliances and durables used by the household)” (pg. 25).

Their findings?

We find a negative and statistically significant point estimate of the ban’s effect on four out of five welfare measures. The one exception is caloric intake per capita which has a positive but not statistically significant coefficient. This is consistent with households near-subsistence – the ones likely to be most affected by the ban – being unable to cut back on calories and instead reducing other aspects of household welfare (consuming more less tasty staples or selling assets) as well as the idea that increased child labor for these households may increase household caloric requirements and thereby constrain households from adjustment on this margin. However the changes for all of the welfare measures are quantitatively small – about 0.01 standard deviations of the pre-ban cross-section – and the standard errors are small enough to rule out large positive or negative effects of the ban (pg. 26).

Nonetheless, “we take this as evidence that the ban makes these households unambiguously worse off” (pg. 5). They conclude,

This paper is the first empirical investigation of the impact of India’s most important legal action against child labor. While the Child Labor (Prohibition and Regulation) Act of 1986 prevented employers from employing children in certain sectors and increased regulation of child labor in non-family run businesses, the net result of this ban appears to be an increase in child labor in some families. We find that child wages decrease in response to such laws and poor families send out more children into the workforce. Due to increased employment, affected children are less likely to be in school. These results are consistent with a two sector model with some frictions on mobility across sectors where the ban is more stringently enforced in one sector than the other. Importantly, we also examine the overall welfare effects of the ban on households. Along various measures of household consumption and expenditure, we find that the ban leads to small decreases in household welfare.

This paper does not intend to suggest that all child labor bans are useless. In fact, well formulated and implemented bans could absolutely help in eliminating child labor; but as we do in this case, research would have to examine how a decrease in child labor affects child and household welfare (Baland and Robinson (2000); Beegle, Dehejia and Gatti (2009)). To echo the reasoning in Basu (2004): “Legal interventions, on the other hand, even when they are properly enforced so that they do diminish child labor, may or may not increase child welfare. This is one of the most important lessons that modern economics has taught us and is something that often eludes the policy maker” (pg. 30).

This isn’t all that surprising. Consider Paul Krugman:

In 1993, child workers in Bangladesh were found to be producing clothing for Wal-Mart, and Senator Tom Harkin proposed legislation banning imports from countries employing underage workers. The direct result was that Bangladeshi textile factories stopped employing children. But did the children go back to school? Did they return to happy homes? Not according to Oxfam, which found that the displaced child workers ended up in even worse jobs, or on the streets–and that a significant number were forced into prostitution.

The 1997 UNICEF State of the World’s Children report had similar findings:

The consequences for the dismissed children and their parents were not anticipated. The children may have been freed, but at the same time they were trapped in a harsh environment with no skills, little or no education, and precious few alternatives. Schools were either inaccessible, useless or costly. A series of follow-up visits by UNICEF, local non-governmental organizations (NGOs) and the International Labour Organization (ILO) discovered that children went looking for new sources of income, and found them in work such as stone-crushing, street hustling and prostitution — all of them more hazardous and exploitative than garment production. In several cases, the mothers of dismissed children had to leave their jobs in order to look after their children.

In cases like this, legislation is rarely the answer. In fact, according to economist Robert Whaples,

Most economic historians conclude that…legislation was not the primary reason for the reduction and virtual elimination of child labor between 1880 and 1940 [in the United States]. Instead they point out that industrialization and economic growth brought rising incomes, which allowed parents the luxury of keeping their children out of the work force. In addition, child labor rates have been linked to the expansion of schooling, high rates of return from education, and a decrease in the demand for child labor due to technological changes which increased the skills required in some jobs and allowed machines to take jobs previously filled by children. Moehling (1999) finds that the employment rate of 13-year olds around the beginning of the twentieth century did decline in states that enacted age minimums of 14, but so did the rates for 13-year olds not covered by the restrictions. Overall she finds that state laws are linked to only a small fraction – if any – of the decline in child labor. It may be that states experiencing declines were therefore more likely to pass legislation, which was largely symbolic.

The road to hell and all that.


Nothing Is The Way You Think It Is

I read an interesting book called The Swerve: How the World Became Modern last week. It won a Pulitzer and National Book Award, but I wasn’t that impressed. There were some really interesting points, however, and a couple of them reinforced this lesson that I feel like I keep learning again and again and again but never fully internalize: the world isn’t the way you think it is. Let me give you two examples.

Thomas Harriot. (Public Domain)

First, the book introduced me to Thomas Harriot. Who’s he? Well, you’ve never heard of him, but in a nutshell he came up with many of the ideas that Galileo and others are credited with before they did, but, since he didn’t want to be vilified, he kept his ideas to himself. Here’s the passage from the book describing him:

Thomas Harriot…constructed the largest telescope in England, observed sunspots, sketched the lunar surface, observed the satellites of planets, proposed that planets moved not in perfect circles but in elliptical orbits, worked on mathematical cartography, discovered the sine law of refraction, and achieved major breakthroughs in algebra. Many of these discoveries anticipated ones for which Galileo, Descartes, and others became famous. But Harriot isn’t credited with any of them. They were found only recently in the mass of unpublished papers he left at his death. Among those papers was a careful list that Harriot, an atomist, kept of the attacks upon him as a purported atheist. He knew that the attacks would only intensify if he published any of his findings, and he preferred life to fame. Who can blame him?

I know this isn’t new, but it just reinforces this notion I have that if we ever got access to a giant library in the sky where we could see who came up with what when, we’d find that the list of famous people credited with major discoveries and the list of people who actually thought them up first would be almost entirely distinct. But it’s not as simple as just lazily saying, “everything’s been thought of before.” As far as I can tell there really are a few singular geniuses–Newton and Einstein come to mind–who made breakthroughs that are unambiguously their own. So there is such a thing as being the first person to discover something. It’s just that the record we have is really, really inaccurate.

Another example was the long, long list of ideas from Epicureanism that show modernity is a hoax. I talked about this in my review, and here’s what I said:

I was also utterly shocked–once again–at how many of the core tenets of modernity from evolution by natural selection to materialism are actually retreads on philosophy that’s thousands of years old. I don’t know if they still teach this way, but when I was in school we learned about progress. In order to make the progress narrative stick, they had to go out of their way to ridicule caricatures of Greek thought that–without the ridicule and the caricature–would be so similar to modern thought that the progress narrative would go out the window. So, while we believe in atoms today, of course that’s much different than the atomism of Democritus, right? Well, yes and then again no.

I transcribed a lot of the list of core principles from Epicureanism (in The Swerve) today, and on top of evolution by natural selection and materialism, we’ve got all the core tenets of New Atheism (e.g., “The universe has no creator or designer,” “The soul dies,” “All organized religions are superstitious delusions,” and “Religions are invariably cruel”) and many more basic scientific tenets, including the idea that there is an underlying set of physical laws that governs the interactions of atoms to generate all material phenomena.

I think some of this is overblown. My biggest complaint about the book is that it’s too partisan in favor of New Atheism, and so it’s easy to suspect that Stephen Greenblatt read his own ideology back on top of the ancient Epicureans (intentionally or not). I completely lack the training to have a strong opinion on that. But it seems abundantly clear that–if not a carbon copy of New Atheism–quite a lot of the raw material for cutting-edge pop philosophy is literally thousands and thousands of years old. Which, again, is not the message that I got in school.

So–like I said–nothing is the way you think it is. The more you read and learn, the more you realize just how fragile and provisional all your beliefs truly are.

The Origins of Formal Segregation Laws


A new NBER paper looks at the decline in collective action promoting segregation and the rise of formal laws enforcing it. From the ungated version:

The goal of the analysis is to identify which of the two channels (i.e., increases in black housing demand and/or reductions in white vigilante activity) actually drove demand for passage of municipal segregation ordinances. Although our data and estimating strategies are limited, the patterns we observe are consistent with the predictions of the model, though the evidence for the vigilante channel is stronger than for the housing demand channel. In particular, whether we use city-level or ward-level data, we find only mixed evidence that demand for segregation ordinances is strongest in areas with the fastest growing black populations.

By contrast, we find relatively strong and robust evidence for the second channel involving white vigilante activity. Across a variety of model specifications and different measures of white vigilante activity, it is clear that in the cities where whites were able to police color lines and punish deviations through private channels, there was relatively little demand for segregation ordinances. For example, the data show that in cities located in counties with high lynching rates (a direct indicator of the ability of whites to organize privately to punish blacks for violating established racial norms) the probability of passing a segregation ordinance is significantly lower than in places with low lynching rates. Similarly, cities that possessed a robust volunteer fire department (an alternative measure of the ability to provide public goods through private channels) are significantly less likely to pass a segregation ordinance. We supplement our city-level analysis with ward-level data from St. Louis. With the ward-level data from St. Louis, we can identify which wards were the strongest supporters of the city’s segregation ordinance. The patterns observed in St. Louis suggest that support for the city’s segregation ordinance was strongest in the wards where it was difficult for white communities to coordinate private vigilante activity (pg. 4-5).

The authors conclude,

The existing literature on the origins of municipal segregation ordinances argues that segregation ordinances were passed largely because of rapidly growing black populations in urban areas and variation in the intensity of anti-black preferences across cities. Our results suggest the existing literature needs to be revised. While there is evidence that growing black populations might have played a role in the propagation of segregation ordinances, the results here suggest that a decline in the ability of whites to provide a local public good (i.e. segregation) through private vigilante activity was especially important. In particular, the negative coefficient on lynching and the positive coefficients on white population growth are consistent with the hypothesis that segregation ordinances were passed in those cities where it was becoming increasingly difficult for whites to organize and punish blacks for violating established color lines in residential housing markets.

More generally, the model developed and tested here has broad implications for our understanding of residential segregation and the processes that give rise to it. Of particular interest is the exploration of how market processes such as tipping interact with institutional change. While prior research has tended to treat market-related processes such as tipping independently from institutions, both formal and informal, the framework here integrates them. In the process, it can help us understand how political institutions and market processes work together to drive segregation and make it persistent (pg. 34-35).

Houston, Hurricanes, and History

Economic and policy historian Phillip Magness has an enlightening post on Houston’s Harvey situation:

Older generations remember earlier storms and hurricanes that produced similar effects going back decades, although you have to return to December 6-9, 1935 to find an example that compares to Harvey’s stats.

Houston was a much smaller city in 1935, both in population and in geographical spread. But by some metrics the 1935 flood was even more severe. Buffalo Bayou – the main waterway through downtown – peaked at over 54 feet. Harvey, in all its devastation, hit “only” 40 feet by comparison. The 1935 storm dropped less rain, the maximum recorded being about 20 inches to the north of town where Houston’s main airport now sits. But it was also complicated by the problem of severe storms upstream that flowed into town and caused almost all of the other creeks and bayous that flooded last weekend to exceed their banks. Reports at the time noted that as much as 2/3rds of what was then rural and unpopulated farmland in surrounding Harris County saw flooding. Those areas are suburbs today.

The effects of the 1935 flood on populated areas are also eerily similar to what we saw on television over the weekend. I recommend watching this film of the aftermath for comparison. All of downtown was underwater, as the film shows. People were stranded on rooftops as rivers of water emerged around them. There are even clips of rescuers navigating the streets of neighborhoods in small boats and canoes as water reached second and third stories on nearby buildings.

In the aftermath of the 1935 flood, the federal government commissioned an extensive study of Houston’s rainfall patterns. They produced the following map of the Houston storm’s effects, showing unsettling similarities to what we just witnessed (note that this map does not include the areas to the north of town, where rainfall in 1935 was significantly higher. These are the suburbs that flooded along Cypress and Spring Creeks last weekend and the farmland that similarly flooded in 1935).

And therein lies the importance of history to understanding what we just witnessed in catastrophic form this weekend. Houston floods fairly regularly. In fact, downtown Houston has suffered a major flood on average about once a decade as far back as records extend in the 1830s.

He continues:

tropical storms and hurricanes throughout the 20th century revealed Houston’s continued vulnerability to storms.

The reasons have to do almost entirely with topography and geography. Houston sits on the Gulf of Mexico in an active hurricane zone that attracts large storms. But more significantly, Houston’s topography is extraordinarily flat. The elevation drop across the entire city and region is extremely modest. Most local waterways are slow-moving creeks and bayous that wind their way through town and eventually trickle into the shallow, marshy Trinity bay. Drainage is slow on a normal day. During a deluge, these systems fill rapidly with water that effectively has nowhere to go.

We’ve seen a flurry of commentators in the past few days attributing Houston’s flooding to a litany of pet political causes. Aside from the normal carping about “climate change” (which always makes for a convenient point of blame for bad warm weather events, even as environmentalists simultaneously decry the old conservative canard about blizzards contradicting Al Gore), several pundits and journalists have opportunistically seized upon Houston’s famously lax zoning and land use regulations to blame Harvey’s destruction on “sprawl” and call for “SmartGrowth” policies that restrict and heavily regulate future construction in the city.

According to this argument, Harvey’s floods are a byproduct of unrestricted suburban development in the north and west of the city at the expense of prairies that would supposedly absorb rainwater at sufficient rates to prevent natural disasters and that supposedly served this purpose “naturally” in the past.

There are multiple problems with this line of argument that suggest it is rooted in naked political opportunism rather than actual concern for Houston’s flooding problems.

And here they are:

  1. “flooding has been a regular feature of Houston’s landscape since the beginning of recorded history in the region. And catastrophic flooding – including multiple storms in the 19th century and the well-documented flood of December 1935 – predates any of the “sprawl” that has provoked these armchair urban designers’ ire.”
  2. “the flooding we saw in Harvey is largely a result of creeks and bayous backlogging and spilling over their banks as more water rushes in from upstream. While parking lot and roadway runoff from “sprawl” certainly makes its way into these streams, it is hardly the source of the problem. The slow-moving and windy Brazos river reached record levels as a result of Harvey and spilled over its banks, despite being nowhere near the city’s “sprawl.” The mostly-rural prairie along Interstate 10 to the extreme west of the city recorded some of the worst flooding in terms of water volume due to the Brazos overflow, although fortunately property damage here will be much lower due to being rural.”
  3. “the very notion that Houston is a giant concrete-laden water retention pond is itself a pernicious myth peddled by unscrupulous urban planning activists and media outlets. In total acres, Houston has more parkland and green space than any other large city in America and ranks third overall to San Diego and Dallas in park acreage per capita.”
  4. “a 2011 study by the Houston-Galveston Area Council…actually measured the ratio of impervious-to-pervious land cover within the city limits (basically the amount of water-blocking concrete vs. water-absorbing green land). The study used an index scale to measure water-absorption land uses. A low score (defined as less than 2.0 on the scale) indicates a high presence of green relative to concrete. A high score (defined as greater than 5.0) indicates high concrete and low levels of greenery and other water-absorbing cover. The results are in the map below, showing the city limits. Gray corresponds to high levels of pervious surfaces (or greenery). Black corresponds to high impervious surface use (basically either concrete or lakes that collect runoff). As the map shows, over 90% of the land in the city limits is gray, indicating more greenery and higher water absorption. Although they did not measure unincorporated Harris County, it also tends to be substantially less dense than the city itself.”

In short,

Houston’s flood problems are a distinctive feature of its topography and geography, and they long predate any “sprawl.” While steps have been taken over the years to mitigate them and reduce the severity of flooding, a rare but catastrophic event will unavoidably overwhelm even the most sophisticated flood control systems. Harvey was one such event – certainly the highest floodwater event to hit Houston in over 80 years, and possibly the worst deluge in its recorded history. But it is entirely consistent with almost 2 centuries of recorded historical patterns. In the grander scheme of causes for Harvey’s flooding, “sprawl” does not even meaningfully register.

Humane Liberalism

As mentioned before, the newest issue of Dialogue was just released. The first article of the new issue is Robert Rees’ “Reimagining the Restoration: Why Liberalism is the Ultimate Flowering of Mormonism.” Rees attempts to redeem the word from its current negative connotations in American society, reviewing its meaning from the Middle Ages to the Enlightenment. He further connects it to Joseph Smith’s statement that God “is more liberal in His views, and boundless in His mercies and blessings, than we are ready to believe or receive” (pg. 4). Rees goes on to emphasize liberal commitments to earth stewardship, gender equality, the poor, peace, education, etc.

The article reminded me of a recent essay by economic historian Deirdre McCloskey titled “Manifesto for a New American Liberalism, or How to Be a Humane Libertarian.” As McCloskey notes, “Outside the United States libertarianism is still called plain “liberalism,” as in the usage of the president of France, Emmanuel Macron, with no “neo-” about it” (pg. 1). “Liberals 1.0 don’t like violence,” she continues. “They are friends of the voluntary market order, as against the policy-heavy feudal order or bureaucratic order or military-industrial order. They are, as Hayek declared, “the party of life, the party that favors free growth and spontaneous evolution,” against the various parties of left and right which wish “to impose [by violence] upon the world a preconceived rational pattern.” In McCloskey’s view, “humane liberals are very far from being against poor people. Nor are they ungenerous, or lacking in pity. Nor are they strictly pacifist, willing to surrender in the face of an invasion. But they believe that in achieving such goods as charity and security the polity should not turn carelessly to violence, at home or abroad, whether for leftish or rightish purposes, whether to help the poor or to police the world. We should depend chiefly on voluntary agreements, such as exchange-tested betterment, or treaties, or civil conversation, or the gift of grace, or a majority voting constrained by civil rights for the minority” (pg. 2). She explains,

Such a humane liberalism has for two centuries worked on the whole astonishingly well. For one thing it produced increasingly free people, which (we moderns think) is a great good in itself. Slaves, women, colonial people, gays, handicapped, and above all the poor, from which almost all of us come, have been increasingly allowed since 1776 to pursue their own projects consistent with not using physical violence to interfere with other people’s projects. As someone put it: In the eighteenth century kings had rights and women had none. Now it’s the other way around. And—quite surprisingly—the new liberalism, by inspiriting for the first time in history a great mass of ordinary people, produced a massive explosion of betterments. 

…The Enrichment was, I say again in case you missed it, three thousand percent per person, near enough, utterly unprecedented. The goods and services available to even the poorest rose by that astounding figure, in a world in which mere doublings, increases of merely 100 percent, had been rare and temporary, as in the glory of fifth-century Greece or the vigor of the Song Dynasty. In every earlier case, the little industrial revolutions had reverted eventually to a real income per head in today’s prices of about $3 a day, which was the human condition since the caves. Consider trying to live on $3 a day, as many people worldwide still do (though during the past forty years their number has fallen like a stone). After 1800 there was no reversion. On the contrary, in every one of the forty or so recessions since 1800 the real income per head after a recession exceeded what it had been at the previous peak. Up, up, up. Even including the $3-a-day people in Chad and Zimbabwe, world real income per head has increased during the past two centuries by a factor of ten, and by a factor of thirty as I said, in the countries that were lucky, and liberally wise. Hong Kong. South Korea. Botswana. The material and cultural enrichment bids fair to spread now to the world.

And the enrichment has been equalizing. Nowadays in places like Japan and the United States the poorest make more, corrected for inflation, than did the top quarter or so two centuries ago (pgs. 4-5).

The whole thing is worth reading. Check it out.

The Long-Term Effects of the African Slave Trade

According to economist Nathan Nunn, the African slave trade (unsurprisingly) had numerous negative long-term effects, economically, socially and culturally. He writes,

An empirical literature has emerged that aims to supplement these historical accounts with quantitative estimates of the long-run impact of Africa’s slave trades. The first paper that attempted to provide such estimates was Nunn (2008). In the study, I undertook an empirical test, with the following logic. If the slave trades are partly responsible for Africa’s current underdevelopment, then, looking across different parts of Africa, one should observe that the areas that are the poorest today should also be the areas from which the largest number of slaves were taken in the past.

To undertake this study, I had to first construct estimates of the number of slaves taken from each country in Africa during the slave trades (i.e. between 1400 and 1900).

These estimates were constructed by combining data on the number of slaves shipped from each African port or region with data from historical documents that reported the ethnicity of over 106,000 slaves taken from Africa. Figure 1 provides an image showing a typical page from these historical documents. The documents shown are slave manumission records from Zanzibar. Each row reports information for one slave, including his/her name, ethnicity, age, and so on.

After constructing the estimates and connecting these with measures of modern day economic development, I found that, indeed, the countries from which the most slaves had been taken (taking into account differences in country size) were today the poorest in Africa. This can be seen in Figure 2, which is taken from Nunn (2008). It shows the relationship between the number of slaves taken between 1400 and 1900 and average real per capita GDP measured in 2000. As the figure clearly shows, the relationship is extremely strong. Furthermore, the relationship remains robust when many other key determinants of economic development are taken into account…According to the estimates from Nunn (2008), if the slave trades had not occurred, then 72% of the average income gap between Africa and the rest of the world would not exist today, and 99% of the income gap between Africa and other developing countries would not exist. In other words, had the slave trades not occurred, Africa would not be the most underdeveloped region of the world and it would have a similar level of development to Latin America or Asia.

“In a series of studies,” Nunn continues,

Whatley and Gillezeau (2011) and Whatley (2014) combine slave shipping records with ethnographic data and estimate the relationship between slave shipments and institutional quality and ethnic diversity in the locations close to the ports of shipment. Their analysis, consistent with Nunn (2008) and Green (2013), indicates that the slave trades did result in greater ethnic fractionalisation. In addition, their analysis also shows that the slave trades resulted in a deterioration of local ethnic institutions, measured in the late pre-colonial period.

Another subsequent study, undertaken by Nunn and Wantchekon (2011) asks whether the slave trades resulted in a deterioration of trust…In our study, Wantchekon and I extended the data construction efforts in Nunn (2008) and constructed estimates of the number of slaves taken from each ethnic group in Africa (rather than country). The ethnicity level estimates are displayed visually in Figure 3. The analysis combined the ethnicity-level slave export estimates with fine-grained household survey data, which reports individuals’ trust of those around them, whether neighbours, relatives, local governments, co-ethnics, or those from other ethnicities. The study documented a strong negative relationship between the intensity of the slave trade among one’s ethnic ancestors and an individual’s trust in others today.

The study then attempted to distinguish between the two most likely channels through which the slave trades could have adversely affected trust. One is that the slave trades made individuals and their descendants inherently less trusting. That is, it created a culture of distrust. In the insecure environment of the slave trade, where it was common to experience the betrayal of others, even friends and family, greater distrust may have developed, which could persist over generations even after the end of the slave trade.

Another possibility is that the slave trades may have resulted in a long-term deterioration of legal and political institutions, which are then less able to enforce good behaviour among citizens, and as a result people trust each other less today.

The study undertook a number of different statistical tests to identify the presence and strength of the two channels. They found that each of the tests generated the same answer: both channels are present. The slave trades negatively affected domestic institutions and governance, which results in less trust today. In addition, the slave trade also directly reduced the extent to which individuals were inherently trusting of others. We also found that, quantitatively, the second channel is twice as large as the first channel.

Guess what? The slave trade likely boosted the practice of polygyny in West Africa:

This is due to the fact that it was primarily males who were captured and shipped to the Americas, resulting in a shortage of men and skewed sex ratios within many parts of Africa. Interestingly, Dalton and Leung (2014) found that there is no evidence of such an impact for the Indian Ocean slave trade, where there was not a strong preference for male slaves. This has led the authors to conclude that Africa’s history of the slave trades is the primary explanation for why today polygyny is much more prevalent in West Africa than in East Africa.

Nunn concludes,

Although research understanding the long-term impacts of Africa’s slave trades is still in progress, the evidence accumulated up to this point suggests that this historic event played an important part in the shaping of the continent, in terms of not only economic outcomes, but cultural and social outcomes as well. The evidence suggests that it has affected a wide range of important outcomes, including economic prosperity, ethnic diversity, institutional quality, the prevalence of conflict, the prevalence of HIV, trust levels, female labour force participation rates, and the practice of polygyny. Thus, the slave trades appear to have played an important role in shaping the fabric of African society today.

American Revolution: Taxation *and* Representation?

A week late, but what was the political economy behind the American Revolution? Here’s the abstract from a new working paper:

Why did the most prosperous colonies in the British Empire mount a rebellion? Even more puzzling, why didn’t the British agree to have American representation in Parliament and quickly settle the dispute peacefully? At first glance, it would appear that a deal could have been reached to share the costs of the global public goods provided by the Empire in exchange for political power and representation for the colonies. (At least, this was the view of men of the time such as Lord Chatham, Thomas Pownall and Adam Smith). We argue, however, that the incumbent government in Great Britain, controlled by the landed gentry, feared that allowing Americans to be represented in Parliament would undermine the position of the dominant coalition, strengthen the incipient democratic movement, and intensify social pressures for the reform of a political system based on land ownership. Since American elites could not credibly commit to refuse to form a coalition with the British opposition, the only realistic options were to maintain the original colonial status or fight a full-scale war of independence.

Happy belated July 4th!


How Important is Human Capital for Economic Development?


How important was human capital–specialized scientific knowledge typically in the hands of relatively few elites–to the British Industrial Revolution? According to a 2016 working paper, not as important as you’d think. Economist B. Zorina Khan finds that “evidence from the backgrounds and patenting of the great inventors in Britain suggest that the formal acquisition of human capital did not play a central role in the generation of new inventive activity, especially in the period before the second industrial revolution” (pg. 21). It turns out that “scientists were not well-represented among the great British inventors nor among patentees during the height of industrial achievements…Instead, many of the most productive inventors, such as Charles Tennant, were able to acquire or enhance their inventive capabilities through apprenticeships and informal learning, honed through trial and error experimentation” (pg. 23).

By examining the patent record, Khan finds that

the patterns are consistent with the notion that at least until 1870 a background in science did not add a great deal to inventive productivity. If scientific knowledge gave inventors a marked advantage, it might be expected that they would demonstrate greater creativity at an earlier age than those without such human capital. Inventor scientists were marginally younger than nonscientists, but both classes of inventors were primarily close to middle age by the time they obtained their first invention (and note that this variable tracks inventions rather than patents). Productivity in terms of average patents filed and career length are also similar among all great inventors irrespective of their scientific orientation. Thus, the kind of knowledge and ideas that produced significant technological contributions during British industrialization seem to have been rather general and available to all creative individuals, regardless of their scientific training (pg. 18).

In short,

The overall empirical findings together suggest that, by focusing their efforts in a particular industry, relatively uneducated inventors were able to acquire sufficient knowledge that allowed them to make valuable additions to the available technology set. After 1820, as the market expanded and created incentives to move out of traditional industries such as textiles and engines, both scientists and nonscientists responded by decreasing their specialization. The patent reforms in 1852 encouraged the nonscience-oriented inventors to increase their investments in sectoral specialization, but industrial specialization among the scientists lagged significantly. This is consistent with the arguments of such scholars as Joel Mokyr, who argued that any comparative advantage from familiarity with science was likely based on broad unfocused capabilities such as rational methods of analysis that applied across all industries (pg. 20).

“More generally,” she writes,

the experience of the First Industrial Nation indicates that creativity that enhances economic efficiency is somewhat different from additions to the most advanced technical discoveries. The sort of creativity that led to spurts in economic and social progress comprised insights that were motivated by perceived need and by institutional incentives, and could be achieved by drawing on practical abilities or informal education and skills. Elites and allegedly “upper-tail knowledge” were neither necessary nor sufficient for technological productivity and economic progress. In the twenty-first century, specialized human capital and scientific knowledge undoubtedly enhance and precipitate economic growth in the developed economies. However, for developing countries with scarce human capital resources, such inputs at the frontier of “high technology” might be less relevant than the ability to make incremental adjustments that can transform existing technologies into inventions that are appropriate for general domestic conditions. As Thomas Jefferson pointed out, a small innovation that can improve the lives of the mass of the population might be more economically important than a technically-advanced discovery that benefits only the few (pgs. 23-24).

I’m reminded of something Matt Ridley said in his TED talk years ago:

We’ve gone beyond the capacity of the human mind to an extraordinary degree. And by the way, that’s one of the reasons that I’m not interested in the debate about I.Q., about whether some groups have higher I.Q.s than other groups. It’s completely irrelevant. What’s relevant to a society is how well people are communicating their ideas, and how well they’re cooperating, not how clever the individuals are. So we’ve created something called the collective brain. We’re just the nodes in the network. We’re the neurons in this brain. It’s the interchange of ideas, the meeting and mating of ideas between them, that is causing technological progress, incrementally, bit by bit…Because through the cloud, through crowd sourcing, through the bottom-up world that we’ve created, where not just the elites but everybody is able to have their ideas and make them meet and mate, we are surely accelerating the rate of innovation.

Infrastructure, Knowledge, and Technological Progress


“In a recent working paper,” economist Daron Acemoglu and colleagues “empirically tests the hypothesis that the US government’s infrastructural capacity helped drive innovation during the 19th century (Acemoglu et al. 2016). Our results suggest that, notwithstanding the view that the American state was weak in the 19th century, a major part of the explanation for US technological progress and prominence is the way in which the US developed an effective state.” Using the U.S. Post Office as a proxy and relying on “historical records compiled by the US Postmaster General,” the researchers

determined how many post offices were in each US county for several years between 1804 and 1899. As a measure of county-level innovative activity, [they] use the number of patents granted to inventors living in the county (these data are presented in Akcigit et al. 2013). There are several reasons for expecting the number of post offices to impact the number of patent grants. First, post offices facilitated the spread of ideas and knowledge. Second, more prosaically, the presence of a post office made patenting much easier, in part because patent applications could be submitted by mail free of postage (Khan 2005, p. 59). Third, the presence of a post office is indicative of – and thus the proxy for – the presence and functionality of the state in the area. This expanded state capacity may have meant greater access to legal services and regulation, or greater security of other forms of property rights, all of which are essential conditions for modern innovative activity.

The results?

We find a significant correlation between a history of state presence – using the number of post offices as a proxy – and patenting in US counties. We show that the correlation holds either using a sample of the 935 US counties that had been established by 1830, or using a sample to which counties are added as they were established between 1830 and 1890, ultimately reaching 2,644 in total. This relationship is not only statistically significant, but also economically meaningful. Our results suggest that the opening of a post office in a county that did not previously have a post office or patents on average increased the number of patents by 0.18 in the long run.  

…One concern with this initial set of results might be that they are confounded by the possibility that post offices were built in counties that already had more patenting activity. Though we cannot fully rule out such reverse causality concerns, we find no statistically or economically significant correlation between patenting and the number of post offices in a county in future years. This suggests that post offices led to patenting and not the other way around. Historical evidence also suggests that post offices were established for a range of idiosyncratic reasons during the 19th century, making it unlikely that reverse causality is driving the association.

…Taken together – while we do not establish unambiguously that the post office and greater state capacity caused an increase in patenting – our results highlight an intriguing correlation and suggest that the infrastructural capacity of the US state played an important role in sustaining 19th century innovation and technological change. In the current economic climate in which pessimism about US economic growth prospects is common, we present a more optimistic historical narrative in which government policy and institutional design have the power to support technological progress.
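To make the authors’ falsification logic concrete, here is a minimal sketch with simulated county data (not the paper’s actual data or specification; the sample size, effect size, and noise level are all made up for illustration). Patents are generated to depend only on a county’s current post offices, so a regression of patents on current post offices should recover a positive slope, while a regression on future post-office openings should recover a slope near zero:

```python
import random

random.seed(42)

# Hypothetical county cross-section: patents respond to *current* post
# offices only, never to future openings -- mirroring the paper's
# reverse-causality check.
n = 2000
post_now = [random.randint(0, 5) for _ in range(n)]     # post offices today
post_future = [random.randint(0, 5) for _ in range(n)]  # post offices in later years
patents = [0.18 * p + random.gauss(0, 0.5) for p in post_now]

def ols_slope(x, y):
    """Bivariate OLS slope: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

slope_now = ols_slope(post_now, patents)        # should be near 0.18
slope_future = ols_slope(post_future, patents)  # should be near 0
print(f"current: {slope_now:.2f}, future: {slope_future:.2f}")
```

If, contrary to this setup, counties with more patenting attracted post offices, the “future” slope would come out positive too; finding it near zero is what lets the authors argue the arrow runs from post offices to patents.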

While I don’t dispute the importance of infrastructure and the role of the state in developing it, I’m curious whether “state capacity” is the real takeaway from this study. To restate a section from above, “post offices facilitated the spread of ideas and knowledge” (italics mine). I’ve highlighted studies before showing how important social networks, communication capacity, and information flows are for decreasing poverty; some analyses deem these more important than institutions. The same seems to be true of technological innovation. While the state can certainly help spread ideas, I wonder if post offices should be viewed less as “state capacity” and more as a proxy for information flows.

Check out the study and determine for yourself.

Modern Lessons from 18th-Century Scottish Colleges

Last year, Ro had a brief piece about how Germany offers free college, not the “college experience”. The results of free college are arguably underwhelming. But the debate over college costs isn’t new; it can be found in the writings of Adam Smith. As The Atlantic explains,

While extravagances such as hot tubs, movie theaters, and climbing walls may seem to make this discussion distinctively modern, parts of today’s college-cost dilemma are recognizable, in fact, in an 18th-century debate about how best to finance a university’s operations. It was so important that Adam Smith took time out of analyzing more traditional economic subjects like the corn laws to devote a long section of The Wealth of Nations to it. And with cause: The Scottish universities of the 18th century, much like America’s today, had been quickly becoming the universally acknowledged ticket to social advancement.

Smith, despite accusations of Connery-esque misplaced nationalism, was justly proud of the Scottish system of universities, which ran on a radical (by today’s standards, at least) system in which students paid their professors directly…But by the end of the century, it had five of the most cutting-edge universities in Europe, one of the world’s best medical schools, and a booming professional class from which its southerly neighbor and occupier frequently drew its doctors, lawyers, and professors. It had pioneered the study of English literature as a subject, having perceived that for many of its students, raised speaking Scots or Gaelic, English actually was a foreign language. It offered up world-class Enlightenment philosophes such as David Hume, Adam Ferguson, and Adam Smith, all of whom were at least partially educated in its universities.

Smith noted the differences between the universities of Scotland and Oxford, which he later attended:

In Scotland, students exercised complete consumer control over with whom they studied and which subjects they deemed relevant. Oxford—and in fact most other European universities—employed a system similar to the way that American universities handle tuition payments today: One tuition payment was made directly to the university, and the university decided how to distribute what came in…Smith points out how [Oxford] often fell short of the Scottish system, where direct payment of fees served as motivation for faculty responsibility. “The endowments of [British] schools and colleges have necessarily diminished more or less the necessity of application in the teachers,” Smith writes in his opening sally against bundling the costs of education. “In the university of Oxford, the greater part of the publick professors have, for these many years, given up altogether even the pretence of teaching.” In the Scottish system, “the salary makes but a part, and frequently but a small part of the emoluments of the teacher, of which the greater part arises from the honoraries or fees of his pupils,” he explains.

What’s wrong with the approach of Oxford (and of contemporary universities generally)?

Prices are information about what people need and want, so the trouble with bundling together a large number of services on a single bill is that it becomes difficult to tell exactly what one is paying for, or for the people sending out that bill to determine what students in fact want to pay for. In the current American system, such decisions are based on fluctuation in enrollment—a very high-level piece of data that can encompass any number of students’ preferences—but not on the micro-level of whether the students of Texas Tech University, for instance, really wanted a water park instead of more or better Spanish-language instructors.

There are potential problems with the Scottish approach. For example,

evidence has recently pointed to the patent unfairness and sexism of student evaluations of their professors. Many an academic has bemoaned the growing “customer” mentality of their students, and with good reason: It can lead to grade inflation and a subsequent lowering of standards. But as Smith would surely have appreciated, the right incentives could bring 18-year-olds to seek out the highest-quality teachers rather than the most forgiving graders. That’s how it worked in Scotland in the 18th century, where there was a simple way of dealing with the problem that the best professors were not always the easiest fellows: rigorous, frequent, and comprehensive oral and essay examinations, which were administered in lieu of evaluations in individual courses. Students were allowed to select which university services and which university teachers they would pay for, but in the end if they could not pass a university-wide exam, their choice to take the 18th-century equivalent of Rocks for Jocks would have been swiftly punished.

The entire article is interesting. Check it out.