Futurism’s Cultural Blindspot

There’s a thought-provoking article in a recent issue of Nautilus on futurism’s blindspot: culture. The author argues that our “innovation-obsessed present” conditions us

to overstate the impact of technology not only in the future, but also the present. We tend to imagine we are living in a world that could scarcely have been imagined a few decades ago. It is not uncommon to read assertions like: “Someone would have been unable at the beginning of the 20th century to even dream of what transportation would look like a half a century later.” And yet zeppelins were flying in 1900; a year before, in New York City, the first pedestrian had already been killed by an automobile. Was the notion of air travel, or the thought that the car was going to change life on the street, really so beyond envisioning—or is it merely the chauvinism of the present, peering with faint condescension at our hopelessly primitive predecessors?

…We expect more change than actually happens in the future because we imagine our lives have changed more than they actually have.

I think this point about technology is debatable. The main thesis, however, is that “Ideas, not technology, have driven the biggest historical changes.” And when technology does change people, it is often not in the ways one might expect:

…Why is cultural change so hard to predict? For one, we have long tended to forget that it does change. Status quo bias reigns. “Until recently, culture explained why things stayed the same, not why they changed,” notes the sociologist Kieran Healy. “Understood as a monolithic block of passively internalized norms transmitted by socialization and canonized by tradition, culture was normally seen as inhibiting individuals.”

In other words, “when it comes to culture we tend to believe not that the future will be very different than the present day, but that it will be roughly the same. Try to imagine yourself at some future date. Where do you imagine you will be living? What will you be wearing? What music will you love?”

Predicting the behaviors and ideas of the future is far more difficult than predicting the technology.

The Final Countdown Is Over: Pluto Flyby

Pretty awesome read and video from The Guardian on the Pluto flyby today:

Cheers, whoops and flag waving broke out at Nasa’s New Horizons control centre as scientists celebrated the spacecraft’s dramatic flyby of Pluto, considered the last unexplored world in the solar system. The probe shot past at more than 28,000mph (45,000 km/h) at 12.49pm BST (7.49am ET) on a trajectory that brought the fastest spacecraft ever to leave Earth’s orbit within 7,770 miles of Pluto’s surface…Stephen Hawking, the Cambridge cosmologist, joined in congratulating the New Horizons team in a recorded message. “Billions of miles from Earth this little robotic spacecraft will show us that first glimpse of mysterious Pluto, a distant icy world on the edge of our solar system. The revelations of New Horizons may help us to understand better how our solar system was formed. We explore because we are human and we long to know,” he said.

The whole thing is definitely worth reading. But perhaps the best part is this:

The moment, played out on Tuesday to the sound of The Final Countdown by the 1980s glam metal band Europe, marked a historic achievement for the US, which can now claim to be the only nation to have visited every planet in the classical solar system.

Want to Be More Productive? Turn Your Phone Off

Or at least the notifications. That seems to be what a recent blog post at the Harvard Business Review suggests:

Multitasking…imposes a heavy cognitive load and hurts performance on a task, because our mental resources are finite and have to be allotted to discrete tasks. That’s why you’re not supposed to talk on the phone or text while you’re driving, and why many campaigns urge drivers to wait to respond until they’re no longer behind the wheel.

In an experiment with over 200 undergrads, researchers gave the participants a Sustained Attention to Response Task (SART): a 10-minute exercise during which the students pressed a key whenever a number flashed onscreen (unless it was the number “3”). Partway through, one-third of the students received notifications, another third received phone calls, and the final third received nothing at all (see the article for more detail). The results?

When the researchers looked at the relationship between block and group, they found that the percent change between blocks was greater for participants who received notifications, compared to participants who didn’t, and this was statistically significant at the 0.05 level. However, they didn’t find any significant difference in errors between people who received phone calls and people who received texts.

So basically, just having your phone near you can distract you and negatively affect your work performance. And this distraction-by-notification might even be comparable to actually interacting with your phone: lead author Cary Stothart said that, in terms of effect size, their results were consistent with those of the distracted-driving literature, which has looked at the effects of texting or talking on the phone (that is, interacting) while driving.
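For the curious, the key comparison is simple enough to sketch in a few lines of code. The numbers below are simulated, not the study's data; the point is just the shape of the analysis, comparing each group's change in errors across blocks:

    # Sketch of the study's key comparison on SIMULATED data --
    # these are not the paper's numbers, just the shape of the test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 70  # roughly one-third of ~200 participants per group

    # Simulated percent change in SART errors from block 1 to block 2.
    notified = rng.normal(loc=40, scale=30, size=n)  # assumed larger jump
    control = rng.normal(loc=10, scale=30, size=n)

    t_stat, p_value = stats.ttest_ind(notified, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> "significant"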

Probably why I can’t get anything done.

Middle-Class Salary: You’re Better Off Than You Think

That claim comes from the University of Chicago’s IGM Forum, a panel featuring numerous economists of diverse ideological views. Because our political discussions about stagnant wages and upward mobility focus almost entirely on income, the absolute nature of wealth is often ignored. A recent article in The Washington Post points this out:

But even if we have less money, you know what we do have that we didn’t 15 years ago? Smartphones and social networks, Netflix and HD TVs, apps and whatever other technology you prefer to waste time on. Now, it’s true, you can’t eat an iPad, but it’s also true that these things make our lives better in ways that are hard to measure. Economists try to, but because it’s so uncertain, they’re pretty conservative with their estimates. Specifically, they try to adjust for the quality of a good when they calculate how much its price has changed. If you paid $400 for an HD TV today, for example, and $400 for a regular TV 10 years ago, did you really pay the same price? Technically, yes. But the fact remains that you got something better for the same amount of money than you would have before. And that’s even trickier when you’re talking about things that didn’t even exist back then, like smartphones, that are really every electronic device from the 1990s rolled into one pocket-sized piece. Or as economist Austan Goolsbee puts it, “so much of day is spent doing things that didn’t exist [in 1980] that it’s hard to believe the numbers fully account for new products.”
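To see how that quality adjustment works in principle, here’s a toy calculation of my own (illustrative numbers only, nothing like the actual hedonic methodology statistical agencies use):

    # Toy quality-adjusted price comparison -- illustrative only.
    # Assumption: "quality" can be summarized as a single multiplier,
    # which real statistical agencies estimate far more carefully.

    def quality_adjusted_change(old_price, new_price, quality_ratio):
        """Percent change in the price per unit of quality."""
        return (new_price / quality_ratio) / old_price - 1

    # Same $400 sticker price, but suppose the new TV delivers
    # twice the "quality" (resolution, size, reliability...).
    change = quality_adjusted_change(400, 400, 2.0)
    print(f"{change:.0%}")  # -50%: you effectively paid half as much

On those made-up numbers, a flat sticker price hides a 50 percent drop in the effective price, which is exactly why economists worry the official statistics understate how much better off we are.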

Then, the clincher:

Try this thought experiment. Adjusted for inflation, would you rather make $50,000 in today’s world or $100,000 in 1980’s? In other words, is an extra $50,000 enough to get you to give up the internet and TV and computer that you have now? The answer isn’t obvious. And if $100,000 isn’t enough, what would be? $200,000? More? This might be the best way to get a sense of how much better technology has made our lives—not to mention the fact that people are living longer—the past 35 years, but the problem is it’s particular to you and your tastes. It’s not easy to generalize.

I’ll stay right where I’m at, thank you.

Does a Welfare State Encourage Entrepreneurship?

It might, according to some economic research.[ref]Though economists like Alex Tabarrok think too much money is spent on welfare and not enough on research.[/ref] As AEI’s James Pethokoukis explains,

Over at The Atlantic, Walter Frick offers an economic literature roundup suggesting that it does. A strong safety net encourages startups by making the effort seem less risky, he argues. For instance, a 2014 paper found the expansion of food stamps “in some states in the early 2000s increased the chance that newly eligible households would own an incorporated business by 16 percent.” Another paper by the same author found that “the rate of incorporated business ownership for those eligible households just below the cutoff was 31 percent greater than for similarly situated families that could not rely on CHIP to care for their children if they needed it.”

However, he notes,

Now it is one thing to argue that a more robust safety net would be good for US entrepreneurship broadly understood — I think that would be the case in some areas, though I would be careful about eliminating welfare work requirements — and quite another to make the same claim about mimicking the Scandinavian social democracies. In “Can’t We All Be More Like Nordics?”, Daron Acemoglu, James Robinson, and Thierry Verdier argue that “technological progress requires incentives for workers and entrepreneurs [and] results in greater inequality and greater poverty (and a weaker safety net) for a society encouraging more intense innovation.” If cut-throat, inegalitarian US capitalism became more like cuddly Scandinavian capitalism, the US might no longer be as capable of pushing the technological frontier. Indeed, the researchers have found a large per-capita gap between Scandinavia and the US when it comes to highly cited patents. The US also has a high-impact entrepreneurship rate three times as high as Sweden’s. (Of course, open economies benefit from innovation first produced elsewhere.) In short, the US has a pretty special thing going, and we should be careful not to screw that up.

Worth checking out.

Economic Theory in First-Person Shooters

[Image: Halo 5]

In recent years there has been no shortage of interest in the intersection of video games and economics, but usually the research focuses on MMORPGs, which often include extensive in-game economies and involve virtual property that can have substantial real-world value. CCP Games (the guys behind EVE Online) even have a full-time in-house economist.

That’s not what this article is about. Nope, I want to talk about an amazing little paragraph I came across in an early preview of Halo 5. Here it is:

Halo 5 also includes an unexpectedly awesome addition: in-game audio cues for enemy locations and weapon spawns. It’s subtle and not immediately noticeable, but it’s a brilliant mechanic that I didn’t even know I wanted. If your teammate gets sniped, in-game audio from his Spartan will announce where the bullet came from using classic callouts like “Red Street” and “Mid.” It will take time to learn these cues, but I think they’ll make playing solo a much more enjoyable experience. It reminds me of audio I’d hear watching a Major League Gaming match, which makes sense considering 343 recently hired a few very accomplished Halo professionals to work on Halo 5.

I agree that it’s brilliant, but I think it’s even more brilliant than the author realizes.

Here’s the problem: one of the things that makes team-based games fun is teamwork. But teamwork requires coordination. In particular–since communication in FPS games is effectively voice-based–it requires a common vocabulary. If you see a bunch of bad guys running towards one spot on the map, you might want to tell your team. In an incredibly fast-paced game, how do you do that? What do you call the particular area they are running towards? What do you call the particular area they are running from? No one has time to bring up their map and read off grid coordinates, so you need to have a commonly accepted set of place names. There’s a name for this problem: it’s called a coordination game, and it’s one of the classic scenarios in game theory.

Coordination games are games where everybody wins by cooperating, but there’s no reason or effective method to pick one particular solution. The classic example: which side of the road should you drive on? Joking aside, it really doesn’t matter if you drive on the right (America) or the left (the UK). Either one works perfectly fine, as long as everyone agrees on the same thing.
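To make that concrete, here’s a minimal sketch (my own illustration, not from the article) of the driving example as a two-player coordination game, checking which outcomes are equilibria:

    # Two drivers each choose a side of the road. They get a payoff
    # of 1 if they match and 0 (a crash) if they don't. A profile is
    # a Nash equilibrium if neither player gains by deviating alone.
    from itertools import product

    SIDES = ["left", "right"]

    def payoff(a, b):
        return (1, 1) if a == b else (0, 0)

    def is_nash(a, b):
        p1, p2 = payoff(a, b)
        return (all(payoff(alt, b)[0] <= p1 for alt in SIDES) and
                all(payoff(a, alt)[1] <= p2 for alt in SIDES))

    for a, b in product(SIDES, SIDES):
        if is_nash(a, b):
            print(f"equilibrium: ({a}, {b})")
    # Prints both (left, left) and (right, right): two equally good
    # conventions, and nothing inside the game picks between them.

That last comment is the whole problem: the game has multiple equally good equilibria, so something outside the game (a law, a custom, or an in-game announcer calling out “Red Street”) has to select one.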

So perhaps the most important aspect of this gameplay innovation is that it will solve the coordination problem by teaching all the players a single, accepted set of place names for the map. These place names tend to evolve on their own over time, by the way, but that process can take a very long time, since vague alternatives like “the red house” vs. “the farm house” vs. “grandma’s house”[ref]Not made up, by the way.[/ref] have to compete with each other for universal acceptance.

Of course there are lots of unrelated problems with voice chat in video games, but I’m excited to see if this approach leads to a measurable increase in strangers coordinating with one another to try and win games, as opposed to just slinging racist and homophobic insults. Oh, what I wouldn’t give to be an in-house analyst for a big video game publisher…

The US Military Made Your Cell Phone Possible

Business Insider has an arresting chart showing which of the major technologies that make cell phones possible are directly attributable to the United States military.

[Chart: DARPA cell phone tech]

Don’t get me wrong: I’m a big believer that government is an evil that is necessary. But there are a few things that it does well. The canonical examples are national defense and civil/criminal justice. I’m starting to think that research might be another exception to the rule, however. I’d love to see even more investment in R&D. Want to see more STEM graduates? Well, start up a few more government labs and there you go. While you’re at it, consider giving preferential access to resulting tech to companies that locate their workforces in the United States. Seems like a great way for an advanced nation to compete for private investment dollars.

Shaky Global Warming Models

[Image: Global warming]

Earlier this month the Wall Street Journal ran an opinion piece by Dr. Judith Curry, former chairwoman of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology and president of the Climate Forecast Applications Network. The gist of the article is simple: current climate models predict unrealistically high levels of warming, while estimates based on observations come in much lower.

Continuing to rely on climate-model warming projections based on high, model-derived values of climate sensitivity skews the cost-benefit analyses and estimates of the social cost of carbon. This can bias policy decisions. The implications of the lower values of climate sensitivity in our paper, as well as similar other recent studies, is that human-caused warming near the end of the 21st century should be less than the 2-degrees-Celsius “danger” level for all but the IPCC’s most extreme emission scenario.

This slower rate of warming—relative to climate model projections—means there is less urgency to phase out greenhouse gas emissions now, and more time to find ways to decarbonize the economy affordably. It also allows us the flexibility to revise our policies as further information becomes available.
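For context (the gloss here is mine, not Curry’s): “climate sensitivity” is the equilibrium warming expected from a doubling of atmospheric CO2. Because CO2 forcing grows roughly logarithmically with concentration, projected warming scales as

    \Delta T_{\mathrm{eq}} \approx S \cdot \log_2(C / C_0)

where S is the sensitivity, C is the future CO2 concentration, and C_0 is the preindustrial baseline. Cut the estimate of S roughly in half, as observation-based studies like the one Curry describes tend to do relative to the models, and the projected warming for any given emissions path falls roughly in half as well. That is the quantitative heart of the argument above.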

To me, this represents a moderate and mature approach to climate change. Curry’s work denies neither global warming nor the human role in causing it. It simply suggests that climate models are biased upward and that we may have more time: time that could be used to develop more sophisticated paths to a post-carbon economy. This matters given news like (just as an example) Lockheed Martin’s announcement that it is just five years away from a prototype nuclear fusion reactor.

I just finished reading Tim Flannery’s Here on Earth, which was the most eloquent and serious defense of the Gaia Hypothesis I’ve ever read, so I really like the idea of greater human responsibility for our environment. I just think we’ll do a better job of living up to that responsibility if we have (1) a little less partisanship and (2) a deeper understanding of the relevant science. A little more time can help.

Easy Internet Privacy Is a Snipe Hunt

[Image: Anonabox]

Earlier this month there were all kinds of stories about an infamous new product on Kickstarter: Anonabox. As Wired related, Anonabox was supposed to be a simple-to-use router that would let anyone easily use the Tor anonymity network. According to Wikipedia, “Tor directs Internet traffic through a free, worldwide, volunteer network consisting of more than five thousand relays to conceal a user’s location and usage from anyone conducting network surveillance or traffic analysis.” The problem is that Tor–like a lot of techniques for anonymizing your Internet usage–requires a little bit of know-how to set up. The Anonabox was supposed to make it ultra-simple, and as a result it quickly raised hundreds of thousands of dollars.
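To give a sense of the know-how involved: even with a Tor client running locally, each application has to be pointed at Tor’s SOCKS proxy explicitly. A minimal Python sketch (assuming Tor is running on its default port 9050 and requests is installed with SOCKS support, via pip install requests[socks]):

    # Route an HTTP request through a locally running Tor client.
    # Assumes Tor is listening on its default SOCKS port, 9050.
    import requests

    proxies = {
        # "socks5h" (not "socks5") makes DNS resolve through Tor too;
        # get that one letter wrong and your lookups leak.
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    # check.torproject.org reports whether your traffic arrived via Tor.
    resp = requests.get("https://check.torproject.org/", proxies=proxies)
    print("Congratulations" in resp.text)  # True if you're on Tor

Every app that skips this step, or gets it subtly wrong, bypasses Tor entirely. That is the gap Anonabox claimed to close.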

Controversy quickly followed, starting with skepticism about the claim that Anonabox was built on a custom board and case; it turned out that the hardware was basically off-the-shelf. Even more serious criticisms came soon after:

But as the security community has taken notice of Anonabox over the last week, its analysts and penetration testers have found that the router’s software also has serious problems, ones that could punch holes in its Tor protections or even allow a user to be more easily tracked than if they were connecting to the unprotected Internet. “I’m seeing these really strange smells and poor practices in their pilot beta code,” says Justin Steven, a computer security analyst based in Brisbane, Australia. “It scares me if anyone is relying on this for their security.”

Eventually, Kickstarter decided to suspend the campaign. So the Anonabox itself is (at least for the time being) a non-issue. But here’s the bigger picture. Only three types of people are likely to create a new technology (hardware or software) to help people retain greater privacy online:

  1. Scammers
  2. The NSA
  3. Genuine privacy advocates

Scammers aren’t going to bother building serious, robust anonymity into their products and services. The NSA (or similar entities) would probably do a decent job, but obviously with a backdoor giving them access whenever they wanted. Only genuine privacy advocates are even going to attempt a legitimately anonymous product or service, and there’s no guarantee that they will succeed. There are no shortcuts and no guarantees when it comes to online privacy and security.

In some ways, this isn’t news. Security (online or offline) is never actually about preventing loss, tampering, or theft. Whether it’s data or diamonds you’re trying to protect, the reality is that you can’t deny access to someone with the means and the motive to get at your stuff. All you can do is make it more expensive and hope that the expense turns out to be not worth the bother.

Still, it’s probably good to give people a dash of reality when it comes to security and privacy. Looking for easy and effective security solutions is a snipe hunt. They don’t exist. In the end you’re just gonna have to trust some software that you can’t read (because it’s closed source) or don’t have time to understand (if it’s open source). Remember Heartbleed? It was discovered in April 2014. It had been present since December 2011 and in widespread use since March 2012. That’s open-source software: anyone could read the code. For over two years, however, no one did. And this is code that was running on nearly 20% of the secure web servers on the Internet!
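The bug itself, for what it’s worth, was almost banal: the heartbeat handler trusted a length field supplied by the peer and echoed back that many bytes of memory, regardless of how many bytes were actually sent. Here’s a toy simulation of the pattern in Python (OpenSSL’s real code is C; this is just the shape of the bug):

    # Toy simulation of the Heartbleed pattern: echoing back a
    # peer-supplied number of bytes without checking it against
    # what the peer actually sent. Not OpenSSL's real code.
    MEMORY = bytearray(b"\x00" * 16 + b"secret_key=hunter2" + b"\x00" * 30)

    def heartbeat(payload: bytes, claimed_len: int) -> bytes:
        MEMORY[:len(payload)] = payload
        # BUG: claimed_len is never checked against len(payload).
        return bytes(MEMORY[:claimed_len])

    def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
        if claimed_len > len(payload):  # the fix: a bounds check
            return b""                  # (the real fix drops the message)
        MEMORY[:len(payload)] = payload
        return bytes(MEMORY[:claimed_len])

    print(heartbeat(b"PING", 4))   # honest request: b'PING'
    print(heartbeat(b"PING", 40))  # attack: b'PING...secret_key=hunter2...'

Two years of “anyone can read the code,” and a check that fits on one line went unread.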

And Heartbleed isn’t the exception. It is, in many ways, the rule. Snapchat gained widespread fame and use because it was supposed to delete messages after they were read instead of keeping a permanent record. Great for privacy, right? Not so fast:

Snapchat has long marketed itself as a private and more secure alternative to services like Facebook and its subsidiary Instagram. The app lets users send photo and video messages that disappear once they are viewed. That self-destruct feature initially gave the app a reputation as a favorite tool for so-called sexters, or those who send sexually suggestive photos of themselves, but eventually it went mainstream…

But security researchers have long criticized Snapchat, saying it provides a false sense of security. They say the app’s disappearing act is illusory. Behind the scenes, Snapchat stores information about its users in a database, similar to data storage at other big Internet companies.

I’m not saying that you should just give up on securing your data online. But once you’ve taken the normal steps–strong passwords, 2-factor authentication, etc.–you should keep in mind that your security is not perfect. To the extent that your data remains secure it’s because you’re too boring and insignificant to attract anyone’s attention. Not because your security is so effective.

The Atlantic on “The Illusion of the Natural”

“What natural has come to mean to us in the context of medicine is pure and safe and benign. But the use of natural as a synonym for good is almost certainly a product of our profound alienation from the natural world.”

So says an article in The Atlantic titled “The Illusion of ‘Natural’.” It begins with the following:

It is difficult to read any historical account of smallpox without encountering the word filth. In the 19th century, smallpox was widely considered a disease of filth, which meant that it was largely understood to be a disease of the poor…Filth theory was eventually replaced by germ theory, a superior understanding of the nature of contagion, but filth theory was not entirely wrong or useless. Raw sewage running in the streets can certainly spread diseases, although smallpox is not one of them, and the sanitation reforms inspired by filth theory dramatically reduced the incidence of cholera, typhus, and plague.

The author draws a parallel between the 19th-century fear of filth and today’s fear of toxins:

In this context, fear of toxicity strikes me as an old anxiety with a new name. Where the word filth once suggested, with its moralist air, the evils of the flesh, the word toxic now condemns the chemical evils of our industrial world. This is not to say that concerns over environmental pollution are not justified—like filth theory, toxicity theory is anchored in legitimate dangers—but that the way we think about toxicity bears some resemblance to the way we once thought about filth. Both theories allow their subscribers to maintain a sense of control over their own health by pursuing personal purity. For the filth theorist, this means a retreat into the home, where heavy curtains and shutters might seal out the smell of the poor and their problems. Our version of this shuttering is now achieved through the purchase of purified water, air purifiers, and food produced with the promise of purity.

Purity, especially bodily purity, is the seemingly innocent concept behind a number of the most sinister social actions of the past century. A passion for bodily purity drove the eugenics movement that led to the sterilization of women who were blind, black, or poor. Concerns for bodily purity were behind miscegenation laws that persisted for more than a century after the abolition of slavery, and behind sodomy laws that were only recently declared unconstitutional. Quite a bit of human solidarity has been sacrificed in pursuit of preserving some kind of imagined purity.

This kind of thinking pervades anti-vaccine movements and alternative medicine.[ref]Also food puritans and some environmentalists.[/ref] I’m always taken aback by the view of technology or medicine as somehow “unclean” and by the nostalgic pining for the “natural” world. Because when people were left with all things “natural” in the past, their lives were cut short: