I’ll just leave this here. From AEI’s James Pethokoukis:
Tearing at the walls in the corners of my mind
From The Economist:
Many economists believe that national accounts may underestimate the economic significance of technological innovations. Despite the advent of the internet, smartphones and artificial intelligence, the official value added by the information industry as a share of GDP has scarcely changed since 2000. What might explain this paradox?
Part of the problem is that GDP as a measure only takes into account goods and services that people pay money for. Internet firms like Google and Facebook do not charge consumers for access, which means that national-income statistics will underestimate how much consumers have benefitted from their rise.
One way to quantify how much these internet services are worth is by asking people how much money they would have to be paid to forgo using them for a year. A new working paper by Erik Brynjolfsson, Felix Eggers and Avinash Gannamaneni, three economists, does exactly this and finds that the value for consumers of some internet services can be substantial. Survey respondents said that they would have to be paid $3,600 to give up internet maps for a year, and $8,400 to give up e-mail. Search engines appear to be especially valuable: consumers surveyed said that they would have to be paid $17,500 to forgo their use for a year.
Recall the thought experiment put forth by The Washington Post: “Try this thought experiment. Adjusted for inflation, would you rather make $50,000 in today’s world or $100,000 in 1980’s? In other words, is an extra $50,000 enough to get you to give up the internet and TV and computer that you have now?”
In the most recent issue of Nature Human Behaviour, neuroscientist Molly Crockett suggests that “digital media may exacerbate the expression of moral outrage by inflating its triggering stimuli, reducing some of its costs and amplifying many of its personal benefits.”
A recent study conducted in the US and Canada suggests that encountering norm violations in person is relatively rare: less than 5% of reported daily experiences involved directly witnessing or experiencing immoral acts. But the internet exposes us to a vast array of misdeeds, from corrupt practices of bankers on Wall Street, to child trafficking in Asia, to genocide in Africa — the list goes on. In fact, data from a study of everyday moral experience show that people are more likely to learn about immoral acts online than in person or through traditional forms of media…Research on virality shows that people are more likely to share content that elicits moral emotions such as outrage. Because outrageous content generates more revenue through viral sharing, natural selection-like forces may favour ‘supernormal’ stimuli that trigger much stronger outrage responses than do transgressions we typically encounter in everyday life. Supporting this hypothesis, there is evidence that immoral acts encountered online incite stronger moral outrage than immoral acts encountered in person or via traditional forms of media…These observations suggest that digital media transforms moral outrage by changing both the nature and prevalence of the stimuli that trigger it. The architecture of the attention economy creates a steady flow of outrageous ‘clickbait’ that people can access anywhere and at any time.
This could be a problem:
By increasing the frequency and extremity of triggering stimuli, one possible long-term consequence of digital media is ‘outrage fatigue’: constant exposure to outrageous news could diminish the overall intensity of outrage experiences, or cause people to experience outrage more selectively to reduce emotional and attentional demands. On the other hand, studies have shown that venting anger begets more anger. If digital media makes it easier to express outrage, this could intensify subsequent experiences of outrage. Future research is necessary to resolve these possibilities…People can express outrage online with just a few keystrokes, from the comfort of their bedrooms, either directly to the wrongdoer or to a broader audience. With even less effort, people can repost or react to others’ angry comments. Since the tools for easily and quickly expressing outrage online are literally at our fingertips, a person’s threshold for expressing outrage is probably lower online than offline…And just as a habitual snacker eats without feeling hungry, a habitual online shamer might express outrage without actually feeling outraged. Thus, when outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioural expression.
So why the outrage?
[E]xpressing moral outrage benefits individuals by signalling their moral quality to others. That is, outrage expression provides reputational rewards. People are not necessarily conscious of these rewards when they express outrage. But the fact that people are more likely to punish when others are watching indicates that a concern for reputation at least implicitly whets our appetite for moral outrage. Of course, online social networks massively amplify the reputational benefits of outrage expression. While offline punishment signals your virtue only to whoever might be watching, doing so online instantly advertises your character to your entire social network and beyond. A single tweet with an initial audience of just a few hundred can quickly reach millions through viral sharing — and outrage fuels virality.
And while this outrage may “benefit society by holding bad actors accountable and sending a message to others that such behaviour is socially unacceptable,” for the most part
moral disapproval ricochets within echo chambers but only occasionally escapes. Second, by lowering the threshold for outrage expression, digital media may degrade the ability of outrage to distinguish the truly heinous from the merely disagreeable. Third, expressing outrage online may result in less meaningful involvement in social causes, for example through volunteering or donations. People are less likely to spend money on punishing unfairness when they are given the opportunity to express their outrage via written messages instead. Finally, there is a serious risk that moral outrage in the digital age will deepen social divides. A recent study suggests a desire to punish others makes them seem less human. Thus, if digital media exacerbates moral outrage, in doing so it may increase social polarization by further dehumanizing the targets of outrage.
The framework proposed here offers a set of testable hypotheses about the impact of digital media on the expression of moral outrage and its social consequences…Preliminary data support the framework’s predictions, showing that outrage-inducing content appears to be more prevalent and potent online than offline. Future studies should investigate the extent to which digital media platforms intensify moral emotions, promote habit formation, suppress productive social discourse, and change the nature of moral outrage itself. There are vast troves of data that are directly pertinent to these questions, but not all of it is publicly available. These data can and should be used to understand how new technologies might transform ancient social emotions from a force for collective good into a tool for collective self-destruction.
Lay off the outrage porn.
AEI’s James Pethokoukis has a nice little blog post on the negative effects of ill-conceived regulation:
So I very much liked a Mercatus study last year finding US economic growth has been slowed by an average of 0.8% per year since 1980 due to the cumulative effects of regulation. Also a favorite of mine: a 2013 study from economists John Dawson of Appalachian State University and John Seater of North Carolina State University, Federal Regulation and Aggregate Economic Growth, which estimates that the past 50 years of federal regulation have reduced real GDP growth by roughly two percentage points a year, a cumulative loss of nearly $40 trillion. Both studies suggest pretty sizable potential gains from smarter regulation or deregulation.
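The scale of these estimates comes from compounding. Here is a minimal sketch: the 0.8-point drag is the Mercatus figure quoted above, while the 3.0% baseline growth rate and the 36-year window are illustrative assumptions of mine, not the study's.

```python
# How a small annual growth drag compounds. The 0.8-point drag is the
# Mercatus figure quoted above; the 3.0% baseline growth rate and the
# 36-year window are illustrative assumptions.

def gdp_after(years, growth_rate):
    """GDP relative to a starting value of 1.0, compounded annually."""
    return (1 + growth_rate) ** years

years = 36                                   # roughly 1980-2016
without_drag = gdp_after(years, 0.030)       # assumed counterfactual growth
with_drag = gdp_after(years, 0.030 - 0.008)  # same growth minus the 0.8-point drag

shortfall = 1 - with_drag / without_drag
print(f"GDP shortfall after {years} years: {shortfall:.0%}")
```

Under these assumptions the economy ends up roughly a quarter smaller than the counterfactual, which is why small annual drags translate into such large headline dollar figures.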
He points to new articles at Reason and National Affairs demonstrating how the Federal Communications Commission held back technological advances, including the cell phone. As economist Thomas Winslow Hazlett writes in his Reason piece,
When AT&T wanted to start developing cellular in 1947, the FCC rejected the idea, believing that spectrum could be best used by other services that were not “in the nature of convenience or luxury.”… A child conceived at the same time as cellular would have been 37 years old by the time the first commercial cellphone—Gordon Gekko’s $3,995 Motorola DynaTAC 8000X brick—was released onto the market. Once the blockage was cleared, progress popped. Soon, the science fiction vision of the Star Trek communicator was reality.
Check them out.
The project “refutes the idea that social media are making humans any less human…The sceptics’ reaction to new technology seems equally deep-rooted. New means of communication from railways and the telegraph onwards have always attracted critics. Sooner or later, the doubters either convert, or die.”
Science writer Ronald Bailey has a brief write-up on some of the research regarding nuclear power and health outcomes:
A 2015 analysis by Israeli researcher Yehoshua Socol in the journal Dose-Response reconsiders the health consequences of the Chernobyl accident. Socol argues that using even the most conservative linear no-threshold hypothesis to calculate cancer risk cannot distinguish any increase above normal background rates of cancer incidence and mortality. Assume 50,000 cancer deaths would result from Chernobyl’s radiation. Socol notes that, assuming current mortality rates, over the next 50 years some 50 million people (plus or minus 2.5 million) will die of cancer in developed countries. Given an annual uncertainty of 50,000 deaths per year, it would be impossible to detect what number, if any, of those deaths can be attributed to exposure to Chernobyl.
Socol concludes that “unlike the widespread myths and misperceptions, there is little scientific evidence for carcinogenic, mutagenic or other detrimental health effects caused by the radiation in the Chernobyl-affected area, besides the acute effects and small number of thyroid cancers. On the other hand, it should be stressed that the above-mentioned myths and misperceptions about the threat of radiation caused, by themselves, enormous human suffering.”
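Socol's detectability argument is simple arithmetic; here it is as a sketch using only the figures quoted above:

```python
# Socol's detectability point, using the figures quoted above: the
# hypothesized excess is a rounding error next to the uncertainty in
# background cancer mortality projections.

background_deaths = 50_000_000   # expected cancer deaths, developed world, 50 years
uncertainty = 2_500_000          # the stated plus-or-minus on that projection
hypothesized_excess = 50_000     # assumed Chernobyl-attributable deaths

share_of_background = hypothesized_excess / background_deaths
share_of_uncertainty = hypothesized_excess / uncertainty

print(f"Excess is {share_of_background:.1%} of expected background deaths")
print(f"and only {share_of_uncertainty:.0%} of the projection's own uncertainty")
```

An effect that is 0.1% of the background rate, and 2% of the error bar on that background, simply cannot be seen in population-level cancer statistics.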
A fascinating December 2015 study by European researchers in the Journal of Geophysical Research: Atmospheres asked what the health consequences for Europe would have been if the continent had closed all of its nuclear power plants and switched to coal-fired generation between 2005 and 2009. They calculated that there would have been an increase of around 100,000 premature deaths annually owing to increased air pollution (most of them due to cardiopulmonary illnesses). If these calculations are correct, the number of deaths attributable to coal would have been three times higher than even the worst-case Chernobyl cancer scenario being pushed by activists. If the WHO’s estimates are right, coal kills at more than 1,000 times the rate of Chernobyl radiation.
Chernobyl was bad enough, but exaggerating its effects to further an unscientific campaign against nuclear power is ethically sleazy and may have the unintended consequence of killing more people than the activists claim they want to save.
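For a sense of scale, a back-of-the-envelope version of the coal comparison above. The 100,000-deaths-per-year figure is the quoted study's; the roughly 4,000-death figure used below is the WHO's oft-cited long-run Chernobyl projection, which is my added reference point, so treat the resulting ratio as illustrative:

```python
# Rough scale comparison. 100,000 deaths/year is the quoted study's figure
# for a hypothetical all-coal Europe; ~4,000 is the WHO's oft-cited
# long-run Chernobyl projection, added here as an assumption for scale.

coal_deaths_per_year = 100_000
study_window_years = 5                 # 2005-2009
coal_total = coal_deaths_per_year * study_window_years

who_chernobyl_projection = 4_000       # assumed WHO figure
ratio = coal_total / who_chernobyl_projection

print(f"All-coal Europe: {coal_total:,} premature deaths over the window,")
print(f"about {ratio:.0f} times the WHO Chernobyl projection")
```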
There’s Hollywood, then there’s reality.
How important have high-skilled immigrants been to innovation in US history? According to a recent study,
Medical inventions (e.g. surgical sutures) accounted for the largest share of immigrant patents, but this category produced just 1% of all US patents. However, immigrants were also active in chemicals and electricity – two sectors that had a particularly large effect on US economic growth, accounting for 13.9% and 12.6% of all US patents, respectively. Notably, immigrants accounted for at least 16% of patents in every area. This evidence suggests that their impact on inventive activity was widespread.
[The graph below] also shows that the majority of immigrant inventors originated from European countries, with Germans playing a particularly prominent role. This is consistent with the findings of Moser et al. (2014) who show that German-Jewish émigrés who fled the Nazi regime boosted innovation in the US chemicals industry by around 30%. Today the closest analogue to these high-impact individuals would be inventors of Indian and Chinese ethnic origin who make substantial contributions to the development of innovation clusters in areas like Silicon Valley (Hunt and Gauthier-Loiselle 2010, Kerr and Lincoln 2010).
The researchers constructed a measure of foreign-born expertise, which multiplies the share of each country’s patents granted in a given technology area between 1880 and 1940 (as a measure of proficiency) by the number of immigrants from that country in the 1940 Census (as a measure of how intensely that proficiency diffuses to the host country).
We find that technology areas with higher levels of foreign-born expertise experienced much faster patent growth between 1940 and 2000, in terms of both quality and quantity, than otherwise equivalent technology areas. Although we do not identify a causal relationship, our quantitative evidence can be used alongside qualitative evidence to highlight two areas where immigrant inventors may have acted as catalysts to economic growth: through their own inventive activity and through externalities affecting domestic inventors.
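The expertise measure is straightforward to compute. Here is a toy sketch: the country shares and immigrant counts below are invented for illustration, not the paper's data.

```python
# Toy sketch of the foreign-born expertise measure: for each technology
# area, sum over origin countries of (the country's 1880-1940 patent share
# in that area) x (immigrants from that country in the 1940 Census).
# All numbers below are invented for illustration.

patent_share = {  # (technology area, origin country) -> share of area's patents
    ("chemicals", "Germany"): 0.20,
    ("chemicals", "Sweden"): 0.05,
    ("electricity", "Germany"): 0.10,
    ("electricity", "Sweden"): 0.15,
}
immigrants_1940 = {"Germany": 1_200_000, "Sweden": 400_000}  # hypothetical counts

def foreign_expertise(area):
    """Foreign-born expertise score for one technology area."""
    return sum(share * immigrants_1940[country]
               for (a, country), share in patent_share.items() if a == area)

print(round(foreign_expertise("chemicals")))  # 0.20*1,200,000 + 0.05*400,000 = 260,000
```

Areas with higher scores on this kind of index are the ones the paper finds grew faster in patent quantity and quality after 1940.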
Immigrant inventors were responsible for some of the most fundamental technologies in the history of US innovation, which still influence our lives today. For example, Nikola Tesla, who was born in Serbia, worked in America on alternating current electrical systems; the Scotsman Alexander Graham Bell was instrumental to the development of the telephone from a workshop in Boston; Swedish inventor David Lindquist, while living in Yonkers, New York, assigned his patents relating to the electric elevator to the Otis Elevator Company located in Jersey City, New Jersey; and Herman Frasch, a German-born chemist, worked in Philadelphia and Cleveland on techniques which are analogous to modern fracking.
In short, the evidence suggests that “immigrant inventors were of central importance to American innovation during the 19th and 20th centuries. Although the migration of high-skilled inventors to the US involved some costs, immigrant inventors contributed heavily to new idea creation, through both their own work and collaboration with domestic inventors. Our evidence aligns with the view that growth in an economy is determined by its ablest innovators, regardless of national origin. The movement of high-skilled individuals across national borders therefore appears to have aided the development of the United States as an innovation hub.”
A 2016 Mercatus working paper notes that “certificate-of-need (CON) laws restrict healthcare institutions from expanding, offering a new service, or purchasing certain pieces of equipment without first gaining approval from regulators.” Drawing on data from the Standard Analytic Files and the American Health Planning Association, the authors review the 21 states with CON requirements “for at least one of three regulated imaging services: MRI (magnetic resonance imaging) scanners, CT (computed tomography) scanners, and PET (positron emission tomography) scanners. Medicare claims provide an estimate of the utilization of these different services and allow their utilization and accessibility to be compared between CON and non-CON states.”
The researchers conclude,
CON laws act as barriers to entry for nonhospital providers and favor hospitals over other providers. In consequence, consumers of MRI, CT, and PET scanning services are driven to seek these services either out of state or in hospitals. More research is needed to determine whether additional costs and barriers in the healthcare industry restrict specific market providers and affect where procedures occur.
According to The New York Times,
The Environmental Protection Agency has concluded that hydraulic fracturing, the oil and gas extraction technique also known as fracking, has contaminated drinking water in some circumstances, according to the final version of a comprehensive study first issued in 2015. The new version is far more worrying than the first, which found “no evidence that fracking systemically contaminates water” supplies. In a significant change, that conclusion was deleted from the final study.
So why the change? Is there new evidence demonstrating that fracking is in fact a danger to water sources? Not really. CBS reports,
The government report notes concerns over well leaks and waste water spilling above ground. The agency didn’t pinpoint any damage related to the fracking deep underground itself. “What we found is that although the overall incidents of impacts is low, that there are vulnerabilities,” said EPA science adviser Thomas Burke. The EPA is taking a tougher stance than ever before. Language in an earlier draft of the report downplaying fracking concerns was removed. It said: “We did not find evidence that these mechanisms have led to widespread, systemic impacts on drinking water resources.” Burke explained why they omitted the lighter language. “The gaps in information unfortunately do not allow us to say how much, what is the rate of the impact. And so that sentence was removed,” Burke said.
Elsewhere, Burke told reporters, “While the number of identified cases of drinking water contamination is small, the scientific evidence is insufficient to support estimates of the frequency of contamination…Scientists involved with finalising the assessment specifically identified this uncertainty in the report.”
The above can hardly be interpreted as a seismic, anti-fracking change. Science writer Ronald Bailey observes,
First, most of the instances and speculations cited in the EPA report are applicable to all oil and gas wells, not just to wells created by means of fracking. These include harms caused by spills, leaks due to faulty well casings, and inadequate treatment and disposal of fluids and water that flow from wells.
Focusing chiefly on the process of fracking itself—creating cracks by injecting pressurized fluids into shale rocks as a way to release trapped oil and natural gas—the EPA report looks at four pathways by which fracking specifically could contaminate drinking water supplies. Most of the agency’s findings are couched in conditional language. They include the possibility that fluids and natural gas could migrate via fracked cracks that might extend directly into drinking water aquifers; because well casings for horizontal drilling might be less able to withstand the high fracking pressures they may be more likely to leak allowing contaminants to migrate; migration might occur when a fracked well “communicates” with a nearby previously drilled well that is not able to withstand the additional pressures from newly released natural gas; and fracked cracks might intersect with natural faults allowing contaminants to migrate into drinking water supplies.
The EPA cites the results of lots of computer models that find that migration of fluids and natural gas by these four pathways is possible. However, given the fact that by some estimates as many as 35,000 fracked oil and gas wells are drilled each year in the United States, it is astonishing how few examples of actual contamination and other harms are identified in the EPA report…Given even the limited quantitative findings in the EPA’s final report, the agency should have reaffirmed its original more qualitative statement that there is little “evidence that these mechanisms have led to widespread, systemic impacts on drinking water resources.”
Read the report for yourself: “However, significant data gaps and uncertainties in the available data prevented us from calculating or estimating the national frequency of impacts on drinking water resources from activities in the hydraulic fracturing water cycle” (pg. 2). The 2015 draft report read, “We did not find evidence that these mechanisms have led to widespread, systemic impacts on drinking water resources in the United States” (pg. 6). These two reports communicate virtually the same thing. The newest report still, to quote the draft, “did not find evidence that these mechanisms have led to widespread, systemic impacts on drinking water resources in the United States.” The language is simply massaged to emphasize “data gaps and uncertainties.” Both the draft and final reports acknowledge that fracking can impact drinking water sources under certain circumstances. That’s not a revelation. What the draft highlighted was the infrequency of these incidents. What the new report highlights is a lack of good data to quantify the frequency. However, the takeaway for the scientifically minded is nearly identical: there is no evidence that fracking has “led to widespread, systemic impacts on drinking water resources.” Nonetheless, better data and continued research are needed (absence of evidence is not evidence of absence and whatnot).
Future evidence may indeed condemn fracking mechanisms or at least call for better regulations. For now, that evidence is sorely lacking. Natural gas is both economically and environmentally beneficial. We need to be careful not to squash it due to faulty interpretations of government reports.
I’ve posted before about McKinsey’s findings regarding digital globalization. They reported,
Data flows directly accounted for $2.2 trillion, or nearly one-third, of [globalization’s] effect [in a decade]—more than foreign direct investment. In their indirect role enabling other types of cross-border exchanges, they added $2.8 trillion to the world economy. These combined effects of data flows on GDP exceeded the impact of global trade in goods.
This in turn supported research by economist Andreas Bergh, who found that
the poverty-decreasing effect of globalization is bigger in countries where institutions are worse. The graph below shows how the marginal effect of information flows on poverty varies depending on the level of bureaucratic quality. The slope looks the same for all institutional indicators, suggesting that globalization is especially important for the poor in countries with high corruption levels and inefficient public sectors.
A new Harvard working paper supports these findings, suggesting that communication networks and social interactions are more important than institutions. The authors explain,
Telling institutional versus socio-technological interpretations apart has been challenging. This paper tests these two hypotheses by measuring convergence in income across Colombian municipalities along two distinct geospatial divisions: one institutional, one socio-technological. The institutional explanation would emphasize the role that belonging to a particular departamento, or state, has on the institutional arrangements and the provision of public goods, thus affecting the incentive structure of agents to operate with better technology.
Although Colombia is a unitary republic, not a federation, states have significant autonomy. Studies on Colombia, including those that take an institutional perspective such as Acemoglu et al (2015)…utilize state-level data, as do almost all studies of intra-national unconditional convergence worldwide. Under the institutional assumption, a municipality should tend to converge to the income of the state to which it belongs.
The socio-technological explanation would predict that municipal income convergence should occur within the cluster of municipalities that interact intensely with each other, whether or not they belong to the same state. This is due to the need for intensive social interactions for knowhow to diffuse. To form these socio-technological groupings, we utilize a unique dataset of cellphone calls to group municipalities so that most of the phone calls happen within rather than between these clusters. To facilitate comparison with the 32 states of the institutional state aggregation, we group municipalities into 32 communication clusters…Thus, communication clusters are groups of municipalities that are densely connected through phone calls, meaning that they are significantly more likely to call members of the cluster than they are to call other municipalities (pgs. 4-5).
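The clustering idea can be illustrated with a toy call graph. The paper's actual community-detection method is more sophisticated; this sketch uses a simple label-propagation pass, and all the call counts below are invented.

```python
from collections import defaultdict

# call_counts[(a, b)] = calls between municipalities a and b (invented data)
call_counts = {
    ("A", "B"): 90, ("B", "C"): 80, ("A", "C"): 70,  # one dense triangle
    ("D", "E"): 95, ("E", "F"): 85, ("D", "F"): 75,  # another dense triangle
    ("C", "D"): 5,                                   # weak bridge between them
}

# Build a symmetric adjacency structure weighted by call volume.
neighbors = defaultdict(dict)
for (a, b), n in call_counts.items():
    neighbors[a][b] = n
    neighbors[b][a] = n

# Label propagation: each municipality repeatedly adopts the cluster label
# that accounts for most of its call volume.
labels = {m: m for m in neighbors}
for _ in range(10):
    for m in sorted(neighbors):
        votes = defaultdict(int)
        for nb, n in neighbors[m].items():
            votes[labels[nb]] += n
        labels[m] = max(votes, key=votes.get)

clusters = defaultdict(set)
for m, lab in labels.items():
    clusters[lab].add(m)
print(sorted(sorted(c) for c in clusters.values()))  # splits at the weak bridge
```

The weak C–D bridge ends up between clusters, so most calls stay within a cluster, which mirrors how the paper partitions Colombia's municipalities into 32 communication clusters.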
The authors conclude,
To test these two interpretations in a more direct way, we use municipal level data for Colombia, which we aggregate using two different grouping criteria: the departamento or state to capture institutional variation; and the communication cluster to which a municipality belongs, to capture the intensity of social interaction. We use formal wages per capita as our measure of income per capita, as it can be measured at the municipal level. We use cellphone data to group municipalities into communication clusters of intense interaction.
In this setting, we find evidence of absolute convergence in Colombia at the municipal level. We find evidence that the process is accelerated when the municipality belongs to a richer communication cluster. However, we do not find evidence of a positive influence of belonging to a richer state. We interpret these results as evidence in favor of the idea that obstacles to technology diffusion may be related to the fact that the use of technology requires tacit knowledge which tends to move slowly between brains through a protracted process of imitation and repetition as occurs in learning by doing. Within communications clusters, there seems to be accelerated convergence. Obstacles to convergence in developing countries may be related to the paucity of social interactions between citizens of the same country
…From a policy perspective, the findings emphasize the fact that economic convergence requires intense social interaction, not just the presence of institutions of a certain quality. Regions that are formally part of the same nation-state but do not really interact with the more advanced parts of the country cannot expect to share similar development outcomes (pg. 19).