Further Thoughts on World Building

A map of Roshar, where the Stormlight Archive takes place.

Last week I published a post contrasting the world-building in J. R. R. Tolkien’s Lord of the Rings (hereafter: LotR) with the high fantasy genre that followed it, using Brandon Sanderson’s Stormlight Archive (which includes The Way of Kings and Words of Radiance so far, and which I’ll be calling just SA) as my example. The post sparked some fun and interesting discussion, but the comments (here and on Facebook) made me realize I could have been a little clearer about some aspects of the OP. In this post I’m going to use some simple illustrations and a few more examples (the Harry Potter and Hunger Games series) to try to provide that clarity.

The image below depicts the difference between the setting in which LotR takes place (the blue region) and the aspects of the setting that are actually conveyed directly in LotR itself (the green region).

[Diagram: the full LotR setting (blue) vs. the portion conveyed in the narrative (green)]

To give an example of what I mean, consider the language Quenya. It’s one of the languages Tolkien invented, and he started work on it in 1910, more than 20 years before he started work on The Hobbit. By the time he started work on LotR, Quenya was largely complete. The entire language of Quenya (all the vocabulary, all the grammatical rules, and all the etymology that goes along with it) goes in the blue circle. Just those specific parts of Quenya that made it into LotR (a few words, maybe whatever grammar was required for a phrase or sentence) go in the green circle. So the blue region is the entire setting (everything the author ever thought of) and the green region is just the parts of the setting that the author actually used directly in the story.

Obviously this isn’t an exact science, but what makes Tolkien’s example so helpful is that he actually kept pretty detailed notes on his entire setting, and it even has a name: the Legendarium. The fact that all his world-building is collected in notes and papers that are fairly common knowledge (Christopher Tolkien used those notes to complete the manuscript for The Silmarillion after his father’s death, for example) makes it particularly easy to envision the entire setting as something separate from the aspects of the setting that crop up in LotR itself.

So that’s what my chart shows: the setting broken down into the parts that show up within the text (the green region) and the other stuff that might be hinted at in the text but isn’t actually there directly (the blue region). Here’s how I imagine the charts look for LotR and SA side by side:

[Diagram: setting vs. narrative for LotR and SA, side by side]

The blue regions are sized identically because I don’t want to try to adjudicate who created more, Tolkien or Sanderson. They are both epic high fantasy authors, so they both create a lot. I suspect Tolkien created much more in his lifetime than Sanderson has so far, but Sanderson may well surpass him. Who cares? The point is that they both do a lot of world-building, so let’s just call it equal.

The difference, then, is that the proportion of Sanderson’s world-building for SA that shows up in SA is much, much higher than the proportion of the Legendarium that shows up in LotR. That’s what the red lines are showing you: Tolkien’s margin of excess world-building is thick; Sanderson’s is thin.

Before we talk too much about what that means, let me just throw up one more image. This one adds the Harry Potter and Hunger Games series to the mix.

[Diagram: setting vs. narrative for all four series]

I don’t want to get bogged down in the exact details of who did more world-building than whom, but I think it makes sense to say that the epic high fantasy authors (Tolkien and Sanderson) did more world-building than Rowling or Collins. This isn’t to say that they did better world-building. I’m on record as thinking that J. K. Rowling’s world-building is total genius, but she didn’t do very much of it compared to Tolkien or Sanderson.

So here’s the main point of this post: more world-building in aggregate (bigger blue circles) isn’t necessarily better, but thicker world-building (a bigger gap between the green circle and the blue circle) is better. And now the explanation/defense and some caveats.

More world-building in aggregate isn’t better because the sheer amount is really just a genre question. High fantasy does lots of world-building. Serious mystery novels and real-world thrillers do very little of it. Historical fiction does lots of it, but it’s research rather than invention, so it’s a very different kind of world-building. The point is, you should create enough of a world for your story to live in. If your story requires a relatively small setting or occurs in the real world, then you don’t need to do a lot of world-building. If your story has a big scope and takes place in a fantasy world, then you do. More, in aggregate, isn’t better. It’s a matter of fitting the world-building to the story.

So why is it bad to have only a small amount of world-building “left over,” as it were? The primary answer is that, especially in stories that take place in fictional worlds, you want to preserve a sense of immersion in the world. Excess world-building helps you do that in multiple ways. The most important is that referring to events and locations that have an existence independent of the main narrative is a really powerful signal to readers that “this is a real place where lots of things happened, not just a setting I threw together for this one particular story.” When every single aspect of your story ends up being required for the plot, you strain a reader’s credulity the same way too many coincidences in the plot strain credulity: it doesn’t seem natural. Your story should have places and events your characters don’t know or care as much about as other people in that universe do, because otherwise you’re implying that everything in the world exists merely in service of the characters. Which feels horribly fake.

The other ways are less direct, but still relevant. The work of doing more world-building is a kind of quality control on what you do show. I think even non-linguists can be struck by the way the language in LotR (especially the proper names) breaks down consistently along ethnic and political lines. Most fantasy writers just pick similar-sounding names without worrying about complex etymologies, but the risk of sounding like a jumble of made-up syllables is much higher when you really are just throwing out a jumble of made-up syllables. Also, leaving a bigger gap between what you create for the world and what you show in the story means you have more freedom with your narrative. If you feel like you have to show off everything you create, you can end up bending the plot until it becomes a guided tour of your brilliant creation rather than an independent story.

So, just to recap the graphic above: Collins does a bad job of world-building because even though her story is limited in scope, she did the absolute bare minimum to create even the relatively small setting she needed. Sure, her world-building is pretty terrible in general (that’s pretty well-known), but even if you set aside the stuff that doesn’t make sense, the problem remains that she just reskinned the myth of Theseus and the Minotaur with the slimmest trappings of a generic sci-fi dystopia and called it a day. She does do a little bit when it comes to the culture of the Capitol, but there’s nothing about the setting scientifically, linguistically, culturally (outside the Capitol), historically, or in any other sense that would make you believe this is anything other than a flimsy, disposable backdrop for her plot. In short: Collins didn’t create enough setting to fit her story.

Hogwarts. Geo-spatially, this is about all the setting Rowling needed for her story.

Rowling also had a story with a pretty limited scope. Hogwarts, the Burrow, and the Ministry of Magic pretty much account for all the setting she needs. But Rowling did a good job of making the world feel lived-in, primarily through the inclusion of tantalizing books. Where Tolkien made the world seem lived-in by giving forgotten histories to all sorts of places (the Barrow-downs, Weathertop, and the Argonath, just to name a few), Rowling made the world seem lived-in by giving context to all the silly textbooks at Hogwarts. If you think about the number of times a book played a crucial role in Harry Potter, you’ll realize how important they were to the landscape. And the fact that their authors frequently showed up as minor characters or historical figures really deepened the sense in which these books were part of the world. Just like J. R. R. Tolkien, J. K. Rowling had plenty of material in her own legendarium left over to make follow-up books based on books mentioned in the original series. There’s The Tales of Beedle the Bard, for example, and the new Harry Potter trilogy is going to be based on Fantastic Beasts and Where to Find Them (which also exists as a book). J. K. Rowling was as profligate with the books she invented for the Harry Potter universe as Tolkien was with language or as Sanderson is with magical systems.

The SA suffers from the same basic problem as The Hunger Games (not enough extra world-building) but for the opposite reason. Sanderson created plenty to tell the core story, but then he kept cramming more and more of his world-building into the narrative until there was barely anything left over. The end result is the same: there’s no sense of realism that comes from a reader perceiving that the world extends beyond the borders of the pages.

Tolkien, of course, is the gold standard. Although the scope of LotR is great, it is nowhere near the scale of the Legendarium from which it draws. My one wish for Sanderson—because I really do like SA—is that he would let go of the need to show off every idea he has in the narrative. It makes the story feel like a guided tour instead of an adventure.

First major caveat: we’re only looking at world-building. That’s just one aspect of what makes a work tick. There are all kinds of other factors: quality of prose, vibrancy of characterization, mastery of theme and tone, coherence of plot, etc. I’m not attempting to address those. This post and the previous one are not an attempt to give some kind of comprehensive theory of fiction or high fantasy. They are not even a complete theory of world-building. One of the most important tricks that Tolkien uses, for example, has nothing to do with green circles or blue circles: he demonstrates that actions in one work change the setting in ways that are felt in subsequent works. The best example of this is the way Frodo stumbles upon the trolls in LotR that Bilbo had helped turn to stone in The Hobbit. The persistence of changes to the setting across works is a brilliant world-building tool (and one I understand Sanderson may excel at) that falls totally outside the scope of this post.

Second major caveat: it’s possible that I’ve got the wrong frame of reference for Sanderson’s books. I have only read his SA series, but I am aware that all of the books he’s writing are linked together in a single universe. Tolkien has the Legendarium; Sanderson has the Cosmere. That defense is not as strong as it first appears, however, because while it increases the size of Sanderson’s setting substantially (the Cosmere is really big), it also increases the size of his narrative: in addition to the two books in the SA, we’ve also got Warbreaker, the Mistborn series, Elantris, White Sand, Dragonsteel, and others. In other words: I might be underestimating the scale of Sanderson’s setting, but only if I’m also underestimating the size of his narrative. Unlike Tolkien’s relatively free-standing works, the whole point of the Cosmere is that all the books are actually part of one grand epic. In that case, you might have to draw a much bigger blue circle, but the green circle keeps on growing, too. You’re still left with a thin band. The problem doesn’t actually go away.

The Cosmere crosses the high fantasy genre to include an industrial setting in one of the Mistborn books (Mistborn: The Alloy of Law).

So here’s where this leaves us—and I promise that once I wrap up this post I’m done with this topic for the time being—the fundamental rule is that you want your world-building to be comfortably larger than what you’re actually going to use directly in the story you tell. This came naturally to Tolkien. Keep in mind that he was working on Quenya twenty years before he started The Hobbit! I’m sure part of that was his personality, but it was also a matter of religious faith for him: world-building was a form of worship. So he did a lot of it. And so, even though LotR is a big story, his setting was bigger.

High fantasy is particularly sensitive to the quality of world-building, but high fantasy authors since Tolkien have generally failed to get anywhere close to his mastery of it. Often this is because they don’t do enough world-building, which makes sense: who wants to invest 20 years in world-building before starting a story they don’t even know they’ll be able to publish? Sometimes it’s because they’re just not very good at it. But even someone like Sanderson—someone who creates a lot and who does it quite well—can still fall into the trap of wanting to stuff all of that world-building into the story instead of leaving a nice, comfortable margin. If Sanderson included less of his world-building in the story, the narrative would have more focus and the world would feel more extensive and genuine.

Failing Tolkien: The Fall of High Fantasy

Update: I wrote a follow-up to this piece: Further Thoughts on World Building

Cover illustration for Words of Radiance

I just finished reading Brandon Sanderson’s monstrous tome Words of Radiance. It’s the second book in the Stormlight Archive and, like the first, clocks in at over 1,000 pages. The expression on the clerk’s face at Barnes and Noble when she picked up the book to hand it to me was priceless. “Wow,” she said as she nearly dropped it, “this is a commitment!”

I’ve never liked high fantasy taken as a genre, but I did love The Lord of the Rings (which launched the genre) and I am enjoying Sanderson’s Stormlight Archive. Despite the fact that I’m enjoying these books, however, they display the systematic problems that have plagued the genre ever since (but not including) Tolkien.

High fantasy, if you’re not familiar with the term, refers to the kinds of fantasy books that have maps in them. Not to mention a glossary, pronunciation guide, appendices, and maybe an index, too. This is because high fantasy is defined largely by its setting: an imaginary world with its own history, cultures, religions, languages, and—of course—magic.

Tolkien’s own cover illustration for The Fellowship of the Ring.

For all practical purposes, Tolkien invented high fantasy. Of course all the pieces came from Saxon and Norse myth and folklore, but what he created when The Lord of the Rings was first published in the 1950s was something new. The books were very successful from the early years and have gone on to sell more copies (150 million thus far) than any other novel except A Tale of Two Cities. The corpus of high fantasy has been, and continues to this day to be, a long line of Tolkien imitators.

The problem is that they have all learned the wrong lesson. They understand that setting defines high fantasy, and they understand that Tolkien’s mastery of world-building fueled his artistic and commercial success, but they fundamentally mistake the product (The Lord of the Rings as a narrative text) for the process (Tolkien’s actual beliefs and practices of world-building).

To correct this confusion we must start with the realization that Tolkien’s world-building was inextricable from his religious faith. He was a devout Roman Catholic and what we call world-building he called sub-creation, which is a term with obvious and deliberate religious connotations. As the Tolkien Gateway puts it:

‘Sub-creation’ was also used by J.R.R. Tolkien to refer [to the] process of world-building and creating myths. In this context, a human author is a ‘little maker’ creating his own world as a sub-set within God’s primary creation. Like the beings of Middle-earth, Tolkien saw his works as mere emulation of the true creation performed by God.

As we delve deeper into Tolkien’s theory of sub-creation, it is useful to contrast his view with that of his friend C. S. Lewis, as Professor Downing has done in a paper for the C. S. Lewis Institute called “Sub-creation or Smuggled Theology: Tolkien contra Lewis on Christian Fantasy.” C. S. Lewis’s Chronicles of Narnia certainly deserves mention as co-founding the subgenre of high fantasy, and, for the most part, his reverence for the work of sub-creation paralleled Tolkien’s. But there were important differences, and those differences are very clear both in the differing tones and styles of the works and in the supremacy of The Lord of the Rings over the Chronicles of Narnia in historical and literary impact.

[Image: map of Narnia]

Downing points out that, for Tolkien, “engaging one’s creativity is an imitation of God and a form of worship.” For Lewis, by contrast, a work of art had to have a higher purpose than the creative impulse itself. In his famous essay “Sometimes Fairy Stories May Say Best What’s to be Said,” Lewis propounded a dualistic account of artistic creation. The Author writes for the sake of writing, but The Man harnesses this impulse towards some external end. As Downing summarizes Lewis: “[A] writer can’t even begin without the Author’s urge to create, but… he shouldn’t begin without the Man’s desire to communicate his deepest sense of himself and his world.”

The Lewis-Tolkien dialogue on sub-creation is a particularly interesting one for a Mormon to enter because of theological differences over the term “creation.” As Downing notes, C. S. Lewis referred back to the orthodox Christian theology of creation ex nihilo in his discussion of artistic creativity. Lewis wrote in a letter to Sister Penelope:

‘Creation’ [as] applied to human authorship seems to me [an] entirely misleading term. We rearrange elements He has provided. There is not a vestige of real creativity de novo in us. Try to imagine a new primary colour, a third sex, a fourth dimension, or even a monster which does not consist of bits and parts of existing animals stuck together. Nothing happens. And that surely is why our works (as you said) never mean to others quite what we intended: because we are recombining elements made by Him and already containing His meanings.

For Downing, this is a point against Tolkien. Tolkien stressed the independence of sub-created worlds but—as Downing and Lewis point out—there is no such thing as independent creation. Humans create by dividing or combining elements that are already available, not by making new elements. From a strictly orthodox Christian theological perspective, this is a fairly serious indictment of Tolkien’s theory of sub-creation because it draws a deep chasm between the kind of creation in which God engages and the kind of sub-creation in which we may participate. How can we be worshipfully imitating our Father when it turns out that the process in which we are engaged is actually a totally distinct process that only happens to share the same label by linguistic happenstance?

Tolkien’s own cover art for The Two Towers.

As it turns out, however, a rejection of creation ex nihilo is one of the defining aspects of Mormon theology. As many non-Mormon Christian theologians have also observed, the Creation (as depicted in Genesis) is almost exclusively a depiction of creation the way that Tolkien and Lewis and all other writers create: by rearranging pre-existing materials. After “let there be light,” God’s work is all about separation: light from dark, sea from dry land, and so forth. He doesn’t seem to create the earth, moon, stars, sun, or anything else by calling them into being out of the void, but rather by molding unformed materials. For a Mormon like me, at least, sub-creation is more akin to the Creation of God, not less.

In any case, however, what really matters is that Tolkien viewed sub-creation not merely as just another tool in the writer’s tool belt (along with plotting and characterization, say) but rather as a stand-alone activity that had merit in and of itself. This belief is what allowed Tolkien to be such a profligate world-builder. He created vastly more material than ever made it into his books. He called this trove of linguistics, geography, history, myth, culture, and genealogy the Legendarium, defined by the Tolkien Gateway as “the entirety of J.R.R. Tolkien’s works concerning his imagined world of Arda.”

The relationship between the Legendarium and his literary works (like The Hobbit or The Lord of the Rings) is an important one in two ways. First, as noted, the Legendarium is far larger. According to Downing, for example, “Quenya, the elvish tongue… had a vocabulary of several hundred words, with consistent declensions and etymologies” by the time he completed The Lord of the Rings, but only a sparse handful of those words appear in the text. Second, the two are, to a large degree, independent. The Legendarium was not compiled for the purpose of writing The Lord of the Rings but as an independent exercise undertaken on its own merits. The stories came later, not as an afterthought, but as a distinct labor with its own objectives and process.

Of course in practice the two activities—the world-building and the story-telling—were intertwined. The point is simply that there were two activities, and Tolkien loved them both.

His reckless and extravagant acts of creation are, to a large extent, what make his fiction seem so vibrant and real. Early in The Lord of the Rings, Frodo is nearly killed by a barrow-wight. If you consult Appendix A you will learn that he had been trapped in the cairn of the last prince of Cardolan. Who was that prince? What was Cardolan? I have no idea, but I also have no doubt that Tolkien’s Legendarium contains the answers to both questions. This is just one example of many—too many to count!—where the characters in The Lord of the Rings come across an abandoned place steeped in history and drama not directly related to the story.

Argonath as envisioned by Ted Nasmith.

The Argonath is, among these many examples, the one that has haunted me the longest. Here’s the passage, from the chapter “The Great River” near the very end of The Fellowship of the Ring, that has stayed with me since I first read it in a pop-up camper in Tennessee on a summer vacation when I was only 11 or 12 years old:

Upon great pedestals founded in the deep waters stood two great kings of stone: still with blurred eyes and crannied brows they frowned upon the North. The left hand of each was raised palm outwards in gesture of warning; in each right hand there was an axe; upon each head there was a crumbling helm and crown. Great power and majesty they still wore, the silent wardens of a long-vanished kingdom.

What impressed me then and has remained with me ever since is that the Argonath has basically nothing to do with the rest of the story. Sure, it marks the historic northern boundary of Gondor, but by the time we get to The Lord of the Rings, Gondor has already receded far from those boundaries. And sure, Strider / Aragorn is a descendant of Gondor’s ancient kings, but does that really matter for the story? No, it doesn’t, and that’s what makes Middle-earth beautiful. It is creation for creation’s sake. I knew, even as a kid, that Tolkien understood perfectly who had built these strange, forgotten pillars and why, and the knowledge that he knew things that weren’t in the book is what made the book seem so real. Just like the real world: there’s always more history in Tolkien’s work than you can take in at once.

Tolkien’s cover for The Return of the King

So Tolkien loved sub-creation for its own sake, which led him to do quite a lot of it, which in turn made the setting of The Lord of the Rings vivid beyond compare, which in turn led to the widespread popular love of those books, which in turn helped found the genre of high fantasy. Now, over a half-century later, high fantasy is a genre cluttered with books full of maps of fantasy countries and continents, but none of them has remotely captured the grandeur of Tolkien’s original, because they have tried to imitate his product without understanding the process that led to it. Brandon Sanderson’s Words of Radiance (despite being a very fine book) is the perfect example of how it has all gone sideways since Tolkien.

High fantasy writers since Tolkien have created less and shown off more. The bigger problem is not that they have created less in total but rather that the ratio of what they create for the setting to what they show you on the pages of their novels has diminished substantially. Sanderson’s Stormlight Archive is a great example of this problem because I get the feeling that he very well might, by the time he’s done, eclipse Tolkien in terms of sheer creative output, but he also seems bound and determined to shoehorn every last thought he has ever had about his creations directly into the text. This has three bad consequences.

First: it makes the stories bloated. Sanderson seems preoccupied with making sure you know exactly how the magical system he has created works. How does that help the story? Did Tolkien need to tell us in excruciating detail how Gandalf’s magic worked? And even if you argue that Sanderson’s strong suit is magical systems where Tolkien’s was language, the analogy still holds: no one reads The Lord of the Rings and feels like someone tried to sneak a lecture on linguistics into their fantasy novel. The linguistics are there, of course, but Tolkien doesn’t feel the need to beat you over the head with them, whereas large portions of Words of Radiance revolve around nothing other than frog-marching the reader through a tour of Sanderson’s fabricated lore.

Second: it makes the worlds seem flimsy. Far from having an abundance of lost cities and forgotten heroes to populate the fringes of the story, Words of Radiance is rife with extra characters and stories (in the Interlude sections especially) that over-explain the universe. You rapidly get the impression that nothing—no religion, concept, magical power, artifact, civilization, or anything else—is going to be introduced in this book without being explained to death. Reading The Lord of the Rings feels like visiting another world because you know that there is a story underneath every stone, far more than you will actually experience in the text. Reading Words of Radiance feels like visiting a theme park ride by comparison: you get the impression that if you took even one step off the beaten path you’d see the two-by-fours holding up the painted backdrops. No matter how much you create, you have to hold something back or the reader is going to see through your creation.

Third: it forces a very specific scope. Because high fantasy authors feel the need to cram every part of their sub-creation into the stories they write, and because they often invent their worlds from the very moment of first creation, they trap themselves into writing only cosmic stories. This is bad because Big Questions are easy to raise but hard to answer, so right off the bat high fantasy writers are painting themselves into a difficult corner. But even if they can pull it off, the fact remains that they are only capable of writing mega-epics. Which, to be clear, is a category that excludes the founding high fantasy story: The Lord of the Rings. Did you notice that the definition of the Legendarium included the “world of Arda”? What, exactly, is that? You wouldn’t know based on reading The Lord of the Rings, just as you would never have heard of Eru Ilúvatar (“the supreme God of Elves and Men” and “the single omnipotent creator”) or of the Ainur (“divine spirits, the ‘Holy Ones’” who actually shaped Middle-earth).

Cover illustration for The Way of Kings (Stormlight Archive #1)

Tolkien did all the work of sub-creation going back to the Big Bang of Middle-earth, and you can read all about it in The Silmarillion, but none of that truly foundational lore shows up in The Lord of the Rings at all. It’s true that Sauron is a pretty epic bad guy, but the scope of The Lord of the Rings is actually quite limited. It’s the story of one particular time when one particular bad guy threatened the peace of one particular region of the world. Gandalf is clear that this isn’t some ultimate final battle or anything like it. He calls the last military campaign “the great battle of our time” (emphasis added), and when Frodo says “I wish the ring had never come to me. I wish none of this had happened,” Gandalf replies: “So do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given to us” (emphasis added). Eru never shows up. Neither do the Ainur. The story of The Lord of the Rings is, compared to the majestic backstory Tolkien had available, mundane. It is almost an anti-epic. It’s emphatically not a story that tries to be about everything all at once, and it’s in that specificity that it becomes singular and glorious. I generally dislike high fantasy as a genre precisely because it has lost sight of the imperative of specificity that underlies the very definition of narrative.

It’s worth noting an important fact at this point: Tolkien originally tried to have The Silmarillion published in the same book as The Lord of the Rings. It wasn’t his foresight that saw The Lord of the Rings published as a standalone text, but rather the imposition of editors and publishers who viewed the former work as uninteresting to the public. And they were right: The Silmarillion (which I have read and very much enjoyed) is only good because The Lord of the Rings is great.

The point of this essay is therefore not that Tolkien was an omniscient genius who alone did high fantasy the right way, but simply that his theory of sub-creation was deeply important to the success—both artistic and commercial—of The Lord of the Rings, and that anyone who wants to emulate that aspect of his success should study and understand it.

Tolkien believed in sub-creation as an independently worthy activity and engaged in it as a form of worship, and that explains the creation of the vast Legendarium. This was the well from which he drew works like The Hobbit and The Lord of the Rings, and it makes sense to think of them as two separate kinds of projects: the world-building vs. the narrative itself.

Subsequent high fantasy authors have failed to fully appreciate this distinction, and especially the worth of sub-creation as an endeavor in its own right. This is understandable. Writers get into the business to tell stories, not to write thousands of pages of backstory and setting that no one will ever see. They see world-building as necessary to telling fantasy stories, and they see Tolkien praised for the central place his world-building holds in The Lord of the Rings, but they end up emulating the final product without fully understanding the process that went into it. They build the world for the story instead of for itself.

What’s more, the process is daunting. It requires an extraordinary amount of work that, in a way, seems wasteful. Why create an entire language—grammar, vocabulary, etymology and all—when just a few fun-sounding syllables here and there will do? The temptation to short-change the world-building and build only what you need is overwhelming for authors who are not generally flush with cash and are often working on deadlines. How is it possible to justify the kind of exorbitant labor of love that Tolkien engaged in?

For most people, it isn’t possible, and that is one major reason why The Lord of the Rings still stands alone. No one else seems able or willing to do what Tolkien did. They keep trying to get similar results, however, and I guess that’s good enough for fantasy’s audience.

If all of this sounds a little bit too harsh, let me restate what I said at the outset: even though I hold the genre of high fantasy in low regard as a whole, I love The Lord of the Rings and I also like the Stormlight Archive quite a lot. I expect to read every volume.

But I stand by my criticism. It’s not that Sanderson hasn’t invested enough in world-building (he probably has); it’s that he doesn’t seem willing to view that world-building as both intrinsically valuable and distinct from the narrative. He seems to want to cram all of it into the books. And that’s a bad thing. The Stormlight Archive is still excellent, in my opinion, but it is not nearly as good as it could be if the books were treated as truly independent stories rather than as vehicles for delivering world-building content. An abridged treatment would really, in this case, be a better story. Sanderson could have more focus (without Interludes so tangential they make you want to pull your hair out), a richer and more immersive world, and greater freedom in choosing his scope. Sanderson is a great writer, but there is still only one J. R. R. Tolkien.

Income Inequality, MotherJones, and Irony

First, let me say that I promise not to make a habit of cherry-picking MotherJones articles to use as punching bags. I realize it’s unhealthy to spend too much time recapitulating the perceived failures of one’s ideological opponents, but I’m violating my own rule for two reasons. First, I am more inclined to use an article as a rhetorical punching bag when I feel it is, itself, perpetuating partisanship in a particularly noxious way. Second, I think some of the issues raised in this post are genuinely interesting and important.

So this is what I saw on MotherJones last week:

[Screenshot: the MotherJones “Daily Dose of Irony” article displayed alongside a Patek Philippe watch ad]

I took a screen grab because it’s not just the article that I found so captivating. It’s the juxtaposition of an article about “how the superrich spoil it for the rest of us” with an advertisement for Patek Philippe watches. I’m no watch connoisseur, but that looks pretty expensive to me. Google would seem to agree:

[Screenshot: Google search results for Patek Philippe watch prices]

The cheapest watch on that list costs about what both of my family’s cars cost put together. And the most expensive literally costs more than our house did, back before the housing bubble burst when we owned our own home. So I have to wonder: what kind of person is concerned about “the superrich” and in the market for a watch that costs tens or even hundreds of thousands of dollars? Well, apparently some of the folks who read MotherJones fit that particular bill.

So what makes this post more than a mere gotcha? Well, I have two serious thoughts to add that I hope elevate us beyond mere rabble-rousing.

The first is a quick note on inflation-adjusted dollars. One of the charts claims that if median US wages had kept pace with the economy since 1970, they would currently be $92,000 instead of $50,000. Mathematically, that is probably correct. But can you really compare 1970s dollars to 2010s dollars so easily? Let me ask you this: would you rather have $92,000 in 1970 or $50,000 in 2014? How about $92,000 in 1714 vs. $50,000 in 2114?

Money, it turns out, isn’t everything, even when you’re talking about economics. Think about some of the things you would lose by going back to the 1970s: the Internet, cell phones, and Game of Thrones all come to mind. But it’s not just fun and games. Would you trade 2014 medicine for 1970s medicine? Would you, if you had cancer? If your child did?

[Chart: US deaths per 100,000,000 vehicle miles traveled, 1970s to 2009]

That’s a chart showing your likelihood of dying in a car crash per 100,000,000 vehicle miles traveled (VMT). You can see that in the 1970s it was about 3 to 4. By 2009 it was about 1. So your chances of dying in a car crash (per mile traveled) were roughly three to four times higher in the 1970s than they are today. This is just one off-the-top-of-my-head example of the kind of comparison that inflation-adjusted wages don’t capture.

My point is just that there isn’t actually a way to compare our lives in the 1970s with the 2000s that allows any kind of truly meaningful analysis. Inflation is calculated by economists who first create a bundle of goods (gasoline, food, clothes, etc.) that is supposed to be more or less representative of what a person or household buys, and who then track the change in price of that bundle over time. That’s the best they can reasonably do, and for a fairly consistent commodity (like wheat) it works OK. But it fails to address increases in product quality (a laptop in 2014 is not the same animal as a laptop in 2004), not to mention entirely new products (a laptop in 2014 vs. … what, exactly, in 1974?), not to mention increases in public goods (like better air quality) that are not captured by any bundle of goods a person purchases. In short: saying that median income has stagnated doesn’t actually tell you whether people are better or worse off, or at least not nearly as clearly as MotherJones might want you to think.
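To make the bundle-of-goods idea concrete, here’s a minimal sketch of how a fixed-basket price index works. All the goods, quantities, and prices below are invented for illustration (real indexes like the CPI use enormous baskets built from survey data):

```python
# Minimal sketch of a fixed-basket (Laspeyres-style) price index.
# All goods, quantities, and prices here are invented for illustration.

basket = {"wheat (bu)": 10, "gasoline (gal)": 50, "clothing (items)": 4}

prices_1970 = {"wheat (bu)": 1.30, "gasoline (gal)": 0.36, "clothing (items)": 15.00}
prices_2014 = {"wheat (bu)": 6.00, "gasoline (gal)": 3.50, "clothing (items)": 25.00}

def basket_cost(prices):
    """Cost of buying the entire fixed basket at the given prices."""
    return sum(prices[good] * qty for good, qty in basket.items())

index = basket_cost(prices_2014) / basket_cost(prices_1970)
print(f"Price index, 1970 -> 2014 (1970 = 1.0): {index:.2f}")

# What the index cannot see: quality changes (a 2014 "clothing item" is not
# the 1970 product), brand-new goods with no 1970 price at all (laptops,
# Internet access), and public goods like cleaner air or safer roads.
```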

The second thing to consider, and this one will not be very popular, is the question of why median wages haven’t risen much over the last 40 years. MotherJones’ answer is “the superrich,” but that’s not a theory; it’s a slogan. It’s designed to make people feel better, not to explain things. Here, on the other hand, is a theory that might actually offer some plausible explanation: the increasing number of women in the workplace suppressed wages. This is econ 101: when supply (the number of workers, in this case) goes up, prices (the wages a worker earns, in this case) go down. The HuffPo conveniently has some numbers pegged to 1970 for comparison:

In terms of sheer numbers, women’s presence in the labor force has increased dramatically, from 30.3 million in 1970 to 72.7 million during 2006-2010. Convert that to percentages and we find that women made up 37.97 percent of the labor force in 1970 compared to 47.21 percent between 2006 and 2010.
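As a quick back-of-the-envelope check on those quoted figures (assuming the counts and shares refer to the same labor-force definition), the implied totals show just how large the supply shift was:

```python
# Counts and shares are taken from the quoted passage; the labor-force
# totals are implied by them rather than stated directly.
women_1970, share_1970 = 30.3e6, 0.3797
women_2010, share_2010 = 72.7e6, 0.4721

total_1970 = women_1970 / share_1970  # ~79.8 million workers
total_2010 = women_2010 / share_2010  # ~154.0 million workers

print(f"Implied total labor force, 1970:      {total_1970 / 1e6:.1f}M")
print(f"Implied total labor force, 2006-2010: {total_2010 / 1e6:.1f}M")
print(f"Women in the labor force grew {women_2010 / women_1970:.1f}x; "
      f"the total labor force grew {total_2010 / total_1970:.1f}x.")
```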

At least some of the wage stagnation therefore comes from women entering the workplace. That’s not the only effect, by the way. At least one economic paper (based on dramatic changes in female labor-force participation around World War II) found that when women entered the workforce, not only did wages go down, but wage inequality between high and low incomes increased, and even the gender pay gap increased. This might sound like ultra-right-wing propagandizing, but none other than Elizabeth Warren has contributed significantly to the understanding that women entering the workforce has had negative impacts on our economy. In The Two-Income Trap, Warren basically argued that (1) dual-income families are less financially secure than single-income families because there is no insurance policy (no second potential earner held in reserve for emergencies), and (2) dual-income families have bid up the cost of living (especially homes) to a point where single-income families can’t really compete anymore, which creates a vicious cycle. MotherJones covered this back in 2004, by the way, and asked the book’s coauthor Amelia Tyagi about the apparently right-wing narrative she seemed to endorse:

MJ.com: Some conservative commentators might see this as evidence that the mother should return home.

AT: [Laughs] Right. Of course, the notion that mothers are all going to run pell-mell back to the hearth and turn back the clock to 1950 is absurd. But that aside, a big part of the two-income trap is that families have basically bid up the cost of living. Housing is a big example. A generation ago, an average family could buy an average home on one income. Today you can’t do that in three-quarters of American cities. We all know that housing prices are going up, but what most people don’t realize is that this has become a family problem. Housing prices are rising twice as fast for families with kids.

It’s telling that Tyagi replies to MotherJones’ question with laughter and then sets it aside. That’s because the evidence is actually pretty clear: two-income families have imposed significant costs on our society. And let me be equally clear: my point is not that we should send all women back home. As far as the evidence presented so far is concerned, an equally prudent strategy (from a socio-economic standpoint) would have been to stick with the single-income family structure but have men become the primary caregiver in half of families, just as one possible example.

My point isn’t that the left-wing view is wrong and the right-wing view is right. My point is that reality doesn’t care about your politics. Or mine. If we really want to do our level best to fix problems, we’ve got to be more willing to entertain explanations and solutions that depart from everybody’s narratives and agendas. Who benefits politically from the fact that two-income families are bad for the country? I’m not sure anybody does, but if it’s true (and it seems to be true) then it’s important to know. Maybe it will help us invent some new policy that will make things better, but at the very least it will help us avoid snake-oil policies that will do no good.

Something else I like about this line of argument–and this does appeal to the conservative within me–is that it underscores the tragic vision conservatives have of our world. Want to improve society by making the workplace more egalitarian on gender grounds? OK, but there are going to be costs. And some of those costs will be stagnant wages, an increased gender pay gap, and increased income inequality.

Or, you know, you could ignore research that is politically uncomfortable and just blame the superrich. While you try to sell them luxury watches.

Look, my big problem with MotherJones has nothing to do with the fact that it is liberal. I will admit that doesn’t endear them to me, but there are lots of liberals I admire and respect (some of whom comment here). If you want to see what really gets to me about MotherJones, just glance back at the very first line of the article. Or, to save you time, I’ll quote it here:

Want more rage?

No, MJ. I don’t think that’s what America needs right now. But thanks for being open and honest about what it is–even more than luxury watches–that you’ve got on offer.

Ivy for Me, But Not for Thee: The Real Reason To Shun Elite Education

Image from Questier.com, where apparently they like to break the rules. (Note sign in bottom left.)

William Deresiewicz’s article for The New Republic, “Don’t Send Your Kid to the Ivy League,” definitely rubbed me the wrong way, and it didn’t take long to put my finger on why. From Deresiewicz’s Wikipedia page:

Deresiewicz attended Columbia University, where he majored in biology-psychology and graduated in 1985. He received a Masters in journalism from the same school in 1987 and a Ph.D. in English in 1998.

Not that Deresiewicz was hiding his Ivy League creds in the article itself. He wrote:

It was only after 24 years in the Ivy League—college and a Ph.D. at Columbia, ten years on the faculty at Yale—that I started to think about what this system does to kids and how they can escape from it, what it does to our society and how we can dismantle it.

See, it’s not so much that his Wikipedia entry gives away some big secret that he went to an Ivy League school. Nope, the point is that he has a Wikipedia page at all. So right off the bat, we’re not talking about some Joe on the street. We’re talking about a notable person. For someone to spend a quarter century in the Ivy League and then (after becoming a notable person) to decide that it’s a terrible, stifling place after all is a little bit rich. Consider also the fact that Deresiewicz’s primary complaints about the Ivy League are the kind of complaints that only a person without real, pressing economic concerns can have.

“Return on investment”: that’s the phrase you often hear today when people talk about college. What no one seems to ask is what the “return” is supposed to be. Is it just about earning more money? Is the only purpose of an education to enable you to get a job? What, in short, is college for?

Deresiewicz answers: “The first thing that college is for is to teach you to think.” It’s all well and good for successful academics to talk about the supreme importance of the life of the mind. After all, that’s what they are paid to do, right? But not everyone is so lucky.

I love the life of the mind. If I won the lottery I would spend the rest of my life in college, earning degrees in one field after another. Math, physics, history, languages, linguistics, architecture, medicine, computer science: there’s almost nothing I wouldn’t love to spend a lifetime studying. But the fact is I haven’t won the lottery, and a great deal of my life therefore revolves around the struggle to keep from having to move my wife and children back into my parents’ house for a second time.

Those of us who aren’t looking backwards from the comfort of a secure and prosperous career, but are instead looking ahead at the daunting prospect of navigating these troubled economic times with solvent households, are very concerned with “return on investment.” But it’s not because we’re unenlightened barbarians with no appreciation for the life of the mind. It’s because bills don’t pay themselves. Has Deresiewicz forgotten that? Or did he simply never know?

I will give him credit for this, however: his excoriation of elite schools as propagators of social injustice is an argument that does ring true to me. I have never seriously believed that Yale or Harvard could give me or my kids a better education than a good state school. The point of elite education is not to learn more. It’s to get access to a better network and a better brand. My concern about sending my kids to the Ivies (should that ever be a possibility) has always been queasiness at the trade-off between encouraging them to participate in morally noxious elitism and wanting them to have an easier time of it than I have had.

I also have to give him credit for having an appropriately expansive definition of “elite education”:

When I speak of elite education, I mean prestigious institutions like Harvard or Stanford or Williams as well as the larger universe of second-tier selective schools, but I also mean everything that leads up to and away from them—the private and affluent public high schools; the ever-growing industry of tutors and consultants and test-prep courses; the admissions process itself, squatting like a dragon at the entrance to adulthood; the brand-name graduate schools and employment opportunities that come after the B.A.; and the parents and communities, largely upper-middle class, who push their children into the maw of this machine. In short, our entire system of elite education.

It’s not attendance at an Ivy that will turn your kid into a zombie. It’s the way parents must structure every aspect of their kids’ childhood (so-called) in order to gain admittance to said school that does the damage. By the time the kids arrive, I’d argue, they are about as zombified as can be.

But, it turns out, there is hope! Deresiewicz presumes–and so had I–that elite education confers a significant monetary advantage. In researching this post, however, I learned that that assumption is not true at all. Alan Krueger at Princeton University and Stacy Dale at Mathematica Policy Research conducted a very clever study in which they compared the earnings of kids who went to Ivy League schools with those of kids who were accepted to Ivy League schools but opted to attend less prestigious universities. Since both groups got in, both groups are arguably roughly commensurate in ability. So if the Ivy League really offers a return on investment (whether from better education, better networking, or any other factor at all), the cohort that attended should have ended up with higher earnings. But they didn’t. The two groups–those who attended Ivy League schools and those who were accepted but did not–earned the same over the following decades (the original cohort started school in 1976, but the findings hold for a newer cohort that entered in 1989).

That, for me, is a real reason not to send your kids to the Ivies. It’s not that some intellectual who has reaped the rewards of elite education for decades patronizingly tells you to “do as I say, not as I did.” It’s that they probably aren’t worth it in most cases. There are probably exceptions at the very top of certain fields (like going to Yale Law if you want to teach law), and the data also suggest that poor kids have the most to gain from elite education, but in general (and especially for undergrad) it looks like your kids will be better off, all things considered, going to a good state school. And hey, they might get a real childhood that way, too.

A Little on Hamas

The Israeli-Palestinian conflict is deeply personal to me. I am Israeli, and I still have family in Israel. I also have Palestinian friends and acquaintances. Death and suffering are not abstract or theoretical notions; they will always affect someone I know. As such, it can be a painful topic for me to discuss, but I do want to raise some perspectives that I feel are missing from the popular debates on blogs and social media now that violence has escalated in the Gaza Strip. Needless to say, my views are my own. Difficult Run has multiple voices and welcomes different views. Before I proceed, I would like to direct the reader to two even-handed and reasonable pieces written by people I know personally. While I disagree with both to some extent (the Mercurio quote can get tiresome), I appreciate the way they frame their views and recommend reading them. They are worth the time.

In this post I want to look at a major aspect of Hamas, the terrorist organization that became the ruling party in Gaza. Recently, several voices have argued that Hamas has been “horrendously misrepresented.” Most recently, Cata Charrett claimed that Hamas should be seen as a “pragmatic and flexible political actor.” This is essentially the same argument made earlier by others, like Jeroen Gunning, who produced pioneering research on the political side of Hamas.

Hamas’ position, though, is not merely political; it draws deeply from certain metaphysical assumptions that frame its struggle. I’ll grant that divergent opinions certainly exist among the Hamas leadership. Some are pragmatists, and many others are decidedly hardliners. However, they do share a certain worldview.

Hamas’ founder, chief ideologue, and spiritual leader, Sheikh Ahmed Yassin, considered Palestine a waqf, that is, something consecrated to God. He formulated this belief as Article 11 of Hamas’ Covenant, its charter document:

“The Islamic Resistance Movement believes that the land of Palestine is an Islamic waqf consecrated for future Muslim generations until Judgment Day. It, or any part of it, should not be squandered: it, or any part of it, should not be given up. Neither a single Arab country nor all Arab countries, neither any king or president, nor all the kings and presidents, neither any organization nor all of them, be they Palestinian or Arab, possess the right to do that. Palestine is an Islamic waqf land consecrated for Muslim generations until Judgment Day… This is the law governing the land of Palestine in the Islamic Sharia…”

Treating the land that way means that any permanent concession can be construed as blasphemy against God himself and against Islam (which, of course, are not considered completely separate concepts). Nor is there any earthly authority that can make such a concession, because no authority can speak for all Muslim generations. Compromise can only be tactical, and thus limited. This makes negotiating with Hamas to achieve a peaceful state of coexistence a decidedly tricky prospect. Because the concept is part of their founding covenant, it cannot simply be laid aside, even when they somewhat moderate their stance or express some discomfort with the wording. For example, much has been made of Hamas dropping the call to destroy Israel from its 2006 election manifesto. However, the evidence suggests that this was the downplaying of a fundamental position in order to focus on domestic political ambitions; the fundamental position itself did not change. This is despite Charrett’s insistence that the 1988 covenant is irrelevant to understanding the contemporary Hamas. Ghazi Hammad, a Hamas politician, said in 2006 that “Hamas is talking about the end of the occupation as the basis for a state, but at the same time Hamas is still not ready to recognise the right of Israel to exist… We cannot give up the right of the armed struggle because our territory is occupied in the West Bank and East Jerusalem. That is the territory we are fighting to liberate.”

Hamas has sought not a lasting peace but a hudna, a temporary, multi-year cessation of violence for which it demands a very high price. Yes, Hamas has offered to recognize the June 1967 borders, but only for 10-20 years, and conditioned on Israel granting Palestinians the right of return and evacuating all settlements outside of those borders. Such terms should be worked out, but as part of a lasting, normative peace. When the twenty years are up (or sooner), Israel will find itself disadvantaged, its very existence considered an act of aggression. Khalid Mish’al, Hamas’ current leader, wrote in 2006: “We shall never recognise the right of any power to rob us of our land and deny us our national rights. We shall never recognise the legitimacy of a Zionist state created on our soil in order to atone for somebody else’s sins or solve somebody else’s problem.” In order to obtain another hudna, Israel would have to make concessions just as big. The possibility of permanent peace is vaguely left to the judgment of the next generation.

Now, there are Jewish metaphysics of the land, too. The most famous is the belief that it is the land promised by God to his people Israel. Rabbi Yaakov Moshe Charlap, a prominent member of Rabbi Kook’s circle in the first half of the 20th century, considered the land of Israel a part of the highest aspect of the Divine: “In days to come, [the land of] Israel shall be revealed in its aspect of Infinity [Ein Sof], and shall soar higher and higher… Although this refers to the future, even now, in spiritual terms, it is expanding infinitely.” Charlap further considered Jewish settlement of the land of Israel an essential condition for holiness to spread throughout the world. His teachings were very influential among radical Jewish settlers in the West Bank and the Gaza Strip. More recently, R. Yitzchak Ginsburg taught that Chabad’s seventh rebbe was the manifestation of the Divine, and that in order to return him to this world the land of Israel must be saved from “Arab hands.”

The major difference that I see is that Israel—even under a right-wing government—has shown itself willing to act against groups with such metaphysical views. When unilaterally disengaging from the Gaza Strip in 2005, the Israeli government dismantled the Jewish settlements and expelled the settlers. The settler ideology (particularly in the Gaza Strip), as I’ve mentioned, was highly informed by teachings like Charlap’s. Such metaphysics, though, do not form an integral aspect of Israeli policy. Israel may be right or wrong about many things, the Gaza disengagement included, but that is beside the point. Although I love it dearly, it is certainly an imperfect state. What matters here is the ability to lay aside metaphysics of the land and carry out concessions that are unpopular with many of its constituents.

Perhaps Hamas will change into a truly moderate force. Perhaps.

 

Case Study: Winning the Argument Through Framing

2014-07-21 Explaining Conservatism

What explains conservatism? The famously left-leaning magazine Mother Jones wants to know, and Chris Mooney writes about new research that might explain the puzzle in a piece that’s making the rounds on Facebook: Scientists Are Beginning to Figure Out Why Conservatives Are…Conservative. Here’s a part of the solution to the puzzle, right from the article:

The occasion of this revelation is a paper by John Hibbing of the University of Nebraska and his colleagues, arguing that political conservatives have a “negativity bias,” meaning that they are physiologically more attuned to negative (threatening, disgusting) stimuli in their environments. (The paper can be read for free here.) In the process, Hibbing et al. marshal a large body of evidence, including their own experiments using eye trackers and other devices to measure the involuntary responses of political partisans to different types of images. One finding? That conservatives respond much more rapidly to threatening and aversive stimuli (for instance, images of “a very large spider on the face of a frightened person, a dazed individual with a bloody face, and an open wound with maggots in it,” as one of their papers put it).

In other words, the conservative ideology, and especially one of its major facets—centered on a strong military, tough law enforcement, resistance to immigration, widespread availability of guns—would seem well tailored for an underlying, threat-oriented biology.

The reason I love this paper is that it’s not often that life hands me an example of prejudicial thinking so perfectly gift-wrapped for analysis. In this case, there’s absolutely no reason why the exact same underlying experimental evidence couldn’t be presented using a totally different frame. Instead of talking about a “negativity bias” and wondering why conservatives are so negative and speculating that this might explain conservatism, one could take the exact same data and talk about a “Pollyanna bias” and wonder why liberals are so unaware of threats and speculate that this might explain liberalism.

This is how political partisanship works, folks. It’s not just that conservatives and liberals reach different conclusions. Sure, that’s what most of the debates are about (for or against gun control, abortion, gay marriage, etc.). Those debates never get anywhere, however, because they miss the point. Conservatives and liberals see the world in different ways, and the way their conflicting worldviews actually compete with each other for followers is by spreading the assumptions that–if you accept them–lead logically to their policy positions. The way to win a debate is not by having more evidence or better reasoning, because people don’t actually pay very much attention to evidence or reason. The way to win the debate–or at least to gin up your own side–is to frame it in such a way that you must be correct before the debate even starts.

Thus, in this case, Mooney starts out with the question: how do we explain conservatism? What he doesn’t come out and say–but what is actually the most important part of his piece–is the assumption that conservatism is an aberration and liberalism is the norm. There’s nothing about liberalism we have to explain; it’s just natural. But conservatism? It begs for some kind of explanation. Once you accept that premise, there’s really not much left to talk about. C. S. Lewis even invented a term for this debate style: Bulverism:

The modern method [of argumentation] is to assume without discussion that [your opponent] is wrong and then distract his attention from this (the only real issue) by busily explaining how he became so silly. In the course of the last fifteen years I have found this vice so common that I have had to invent a name for it. I call it Bulverism. Some day I am going to write the biography of its imaginary inventor, Ezekiel Bulver, whose destiny was determined at the age of five when he heard his mother say to his father — who had been maintaining that two sides of a triangle were together greater than the third — ‘Oh you say that because you are a man.’ ‘At that moment’, E. Bulver assures us, ‘there flashed across my opening mind the great truth that refutation is no necessary part of argument. Assume that your opponent is wrong, and then explain his error, and the world will be at your feet. Attempt to prove that he is wrong or (worse still) try to find out whether he is wrong or right, and the national dynamism of our age will thrust you to the wall.’ That is how Bulver became one of the makers of the Twentieth [and Twenty-First] Century.

As pleased as I am to have such a clear case study of Bulverism / winning the argument through framing ready at hand from now on, the thing that makes me sad is that it isn’t just Mother Jones engaging in it. The researchers, by using the term “negativity bias” without an accompanying “positivity bias”, are jumping right in as well. (The name of their paper is: “Differences in negativity bias underlie variations in political ideology.”) Although sad, it’s hardly surprising. Dr. Jonathan Haidt was quoted about this very problem in The New York Times back in 2011. Commenting on the political left’s near-total domination of social psychology, he said:

Anywhere in the world that social psychologists see women or minorities underrepresented by a factor of two or three, our minds jump to discrimination as the explanation. But when we find out that conservatives are underrepresented among us by a factor of more than 100, suddenly everyone finds it quite easy to generate alternate explanations.

The article goes on to quote him again:

Dr. Haidt argued that social psychologists are a “tribal-moral community” united by “sacred values” that hinder research and damage their credibility — and blind them to the hostile climate they’ve created for non-liberals.

It’s bad enough to have Mother Jones serving up the Kool-Aid, but it’s really quite sad to have academic researchers as their direct suppliers.

As a coda: I do think that there are real and interesting psychological differences to study with regards to politics. But I think that this research is most useful when, as Haidt’s own Moral Foundations Theory does, it seeks to take all sides seriously and create room for understanding and common ground. And not when, as with the articles in question, it serves as a flimsy excuse to pathologize your political opponents.

Steep Learning Curve Doesn’t Mean What You Think It Does

Earlier this morning I read an article in The Verge about the resurgence of rogue-like games, which the author characterized with three core traits: “turn-based movement, procedurally generated worlds, and permanent death that forces you to start over from the beginning.” So far so good, but the author then added one additional, non-essential characteristic:

They also often have steep learning curves that force you to spend a lot of time getting killed before you understand how things actually work.

And that’s when I lost my mind.

“Steep learning curve” does not mean what Andrew Webster thinks it does. Yes, yes, I know: someone is wrong on the Internet. Egads! We can also bring up the usual academic debate: should linguistics be descriptive (merely documenting how people talk) or prescriptive (laying down grammatical rules and standardized definitions)? In the long run, words mean whatever people think they mean. Nothing more and nothing less. So I might be appalled by the fact that everyone uses “enormity” as though it meant “enormousness” these days, but as a general rule I sigh, shake my head, and get on with my life.

The reality, of course, is that most of us understand grammar as a mixture of following the rules and knowing when to ignore or break them. The Week published a list of 7 bogus grammar ‘errors’ you don’t need to worry about, and their last category was “7. Don’t use words to mean what they’ve been widely used to mean for 50 years or more.” For example, the word “decimate” originally meant to kill one in ten. It derives from a particularly brutal form of Roman military discipline, so it’s not a happy word, but it certainly doesn’t mean “to kill just about everyone.” Except that these days it pretty much does, and you’ll get strange looks if you use it any other way. How long before “enormity” goes on a similar list, and only out-of-touch cranks cling to its older definition and go on rants about ancient military laws?

It’s also worth pointing out that a lot of linguistic innovation is pretty fun and cool. For example: English Has a New Preposition, Because Internet.

But I draw the line at “steep learning curve” because we’re not just talking about illiteracy. We’re talking about innumeracy.

A learning curve is a graph depicting the relationship between time (or effort) and ability. Time is the input: we put time into studying, practicing, and learning. And ability is the output: it’s what we get in return for our efforts. Generally we assume that there will be a positive relation between the two: the more you practice piano the better you’ll be able to play. The more you study your German vocabulary, the more words you will learn.

Basic Learning Curve

The graph right above this paragraph shows a completely ordinary, run-of-the-mill learning curve. What units are we measuring ability and time in? It doesn’t matter; that would depend on the situation. Time would usually be measured in seconds or minutes or whatever, but for physical practice maybe you’d want to measure it in calories burned or some other metric. And the ability measure would vary, too: number of vocab words learned, percent of a song played without errors, etc. Now let’s take a look at two more learning curves:

Learning Curve Comparison

The learning curve on the left is shallow. That means that for every unit of time you put into it, you get less back in terms of ability. The learning curve on the right is steep. That means that for every unit of time you put into it, you get more back in terms of ability. So here’s the simple question: if you wanted to learn something, would you prefer to have a shallow learning curve or a steep learning curve? Obviously, if you want more bang-for-the-buck, you want a steep learning curve. In this example, the steep learning curve gets you double the ability for half the time!
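If you’d rather see this than take my word for it, here’s a minimal sketch in Python that plots the two cases. It’s my own illustration, not anything from Webster’s article, and the specific slopes (0.5 and 2.0) are invented purely for the demo; it assumes numpy and matplotlib are installed:

```python
# Plot a shallow and a steep learning curve: ability as a function of time.
# The slopes are made up for illustration -- the steeper line simply pays
# out more ability per unit of time invested.
import numpy as np
import matplotlib.pyplot as plt

time = np.linspace(0, 10, 100)   # input: time spent practicing (arbitrary units)
shallow = 0.5 * time             # shallow curve: little ability per unit of time
steep = 2.0 * time               # steep curve: lots of ability per unit of time

plt.plot(time, shallow, label="shallow learning curve")
plt.plot(time, steep, label="steep learning curve")
plt.xlabel("Time (independent variable)")
plt.ylabel("Ability (dependent variable)")
plt.legend()
plt.title("Steep vs. shallow learning curves")
plt.show()
```

Run it and the picture makes the point for you: at any moment on the clock, the steep line has delivered four times the ability of the shallow one.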

You might note, of course, that this is exactly the opposite of what Andrew Webster was trying to convey. He said that these kinds of games force “you to spend a lot of time getting killed before you understand how things actually work.” In other words: lots of time for little learning. He literally described a shallow learning curve and then called it steep. Earlier this morning my son tried to tell me that water is colder than ice, and that we make ice by cooking water. That’s about the level of wrongness in how most people use the term “steep learning curve.”

It’s not hard to see why people get confused on this one. We associate steepness with difficulty because it’s harder to walk up a steep incline than a shallow one. Say “steep” and people think you mean “difficult.” But visualizing a tiny person on a bicycle furiously pedaling up the steep line on that graph is a fundamental misapprehension of what graphs represent and how they work. By convention, we put independent variables (the stuff we can control, where such a category exists) on the x-axis and dependent variables (the response) on the y-axis. Intuitions about working harder to climb graphs don’t make any sense.

Now, yes: it’s only by convention that we arrange the x- and y-axes that way. And conventions change, just like definitions of words change. And the convention isn’t always useful or applicable. So you could argue that folks who use the term “steep learning curve” are just flipping the axes. Right?

Wrong. First, I just don’t buy that folks who use the term have any such notion of what goes on which axis. They are relying on gut intuition, not graph transformations. Second, although the placement of data on charts is a convention, it’s not a convention that is changing. People who get “steep learning curve” wrong are usually not talking about charts or data at all; they are just borrowing a technical term and getting it backwards. It’s not plausible that this one backwards usage is going to make scientists and analysts around the world suddenly reverse their convention of which data goes where.

People getting technical concepts wrong is a special case of language where it does make sense to say that the usage is not just new or different, but actually wrong. It is wrong in the sense that there’s a subpopulation of experts who are going to preserve the original meaning even if conventional speakers lose it, and it’s wrong in the sense of being ignorant of the underlying rationale behind the term. Consider the idea of a quantum leap. The concept derives from quantum mechanics, and it refers to the fact that electrons inhabit discrete energy levels within atoms. This is–if you understand the physics at all–really very surprising. It means that when an electron changes its energy level it doesn’t move continuously along a gradient. It jumps pretty much directly from one state to the new state. This idea of “quanta”–of discrete rather than continuous quantities–is at the heart of the term “quantum mechanics,” and it was revolutionary because, until then, physics was all about continuity, which is why it relied so heavily on calculus. If you use “quantum leap” to mean “a big change” you aren’t ushering in a new definition of a word the way you are if you use “enormity” to mean “bigness.” In that case, once enough people get it wrong they start to be right. But in the case of quantum mechanics, you’re unlikely to reach that threshold (because the experts aren’t going to change their usage), and in the meantime you’re busy sounding like a clueless nincompoop to anyone who is even passingly familiar with quantum mechanics. Similarly, if you say “steep learning curve” when you mean “shallow learning curve,” you aren’t innovating some new terminology. You’re just being a dope.
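To make the discreteness concrete, here’s a toy sketch using the standard textbook formula for the energy levels of hydrogen. This is my own illustration, not anything from the article or paper under discussion, but the formula itself is the familiar Bohr-model result:

```python
# Discrete energy levels of the hydrogen atom (Bohr model): E_n = -13.6 eV / n^2.
# Only these values are allowed -- a "quantum leap" is a jump between two of
# them, with nothing in between.
def energy_level(n: int) -> float:
    """Energy (in eV) of the nth level of hydrogen, for n = 1, 2, 3, ..."""
    return -13.6 / n ** 2

for n in range(1, 5):
    print(f"n={n}: {energy_level(n):+.3f} eV")

# The smallest possible "leap" out of the ground state:
print(f"Jump from n=1 to n=2: {energy_level(2) - energy_level(1):.2f} eV")
```

The electron can sit at -13.6 eV or at -3.4 eV, but never anywhere in between: that jumpiness, not bigness, is what “quantum” actually refers to.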

Maybe it doesn’t matter, and maybe it’s even mean to get so worked up about technicalities. Then again, lots of people think the world would be a better place if people were more numerate, and I think there’s some truth to that. In any case, I’m a nerd and the definition of a nerd is a person who cares more about a subject than society thinks is reasonable.

Most importantly, however, if you get technical terms wrong you’re missing out. Because the terms were chosen for a reason, and taking the time to learn that reason will always broaden your mind. One example, which I’ll probably write about soon, is the idea of a technological singularity. It’s a trendy buzzword you probably hear futurists and sci-fi aficionados talking about all the time, but if you don’t know where the term originates (i.e. from black holes) then you won’t really understand the ideas that led to the creation of the term in the first place. And they are some pretty cool ideas.

So yeah: on the one hand this is just a rant from a crotchety old man telling the kids to get off his lawn. But my fundamental motivation is that I care about ideas and about sharing them with people. I rant with care. It’s a love-rant.

Silencing Dissenters or: The World Gone Mad

This post will be a little bit more free-form than what I usually write. So buckle up, we’ve got some ground to travel.

What The Elders Know

I read a story as a kid that stuck with me. It was about a team of 1990s archaeologists who decided to excavate a 1950s landfill just to get an objective measurement of what ordinary, everyday life was really like four decades earlier. When they dug up the trash, they found human remains. Skeletons. First it was just a couple, and they thought it might have been mob violence, but then they found more and more. Something horrible had happened in this town, just 40 years ago. But it was lost to history. The elderly folks of the community, the only ones who would know the secret of what had happened, came to the dig site and stood staring at the excavation. Saying nothing. Whatever had happened, no one would ever know, because they never broke their silence.

At the time, the story mostly just made me reconsider what we know, really know, based on our limited first-hand experience. But it also planted this idea of a group of people who are bound together by some common knowledge that they have that nobody else does.

I realize there’s a sinister spin to that tale, and that’s not what I’m going for. It’s just that idea that there are experiences that no one can tell you about. You can’t understand unless you’ve been there yourself. And, if you have been there, no explanation is necessary. From what I understand, combat is like that. I’ve read books, like On Killing, that describe some of the effects, but it’s really just enough for me to know that I don’t really get it. And, as a non-military guy, never will. Veterans understand something I can’t comprehend.

In my experience, being married is like that, too. Marriage, for me and my beloved wife, has been really, really hard at times. From talking to our close friends we’ve learned that that’s pretty common. All the couples that I know well enough to have discussed this with describe going through harrowing bad times that shook their faith in themselves, their spouse, their marriage, and pretty much everything they believed in. And nobody warned us. Nobody told us how bad it could get, and probably would get. I think partially that’s because we just forget–the bad times are already receding into memory for me–but I think it’s also just because you can’t convey what it’s like to someone who hasn’t been there. And you certainly can’t simultaneously convey how much it’s going to hurt and how much it’s still going to be worth it. There’s just nothing to say.

It’s true of raising kids, too. There are a lot of parenting jokes, and even before I was a parent I more or less got them, but the most traumatic of the mundane experiences of being a parent–like the sheer terror of holding a little baby who is sick and can’t tell you what’s wrong–there’s just absolutely no way to convey that feeling to someone who hasn’t been there. There’s a deep connection between parents that crosses pretty much every other social boundary you can think of. I’m reminded of Jerry Holkins’ description of the birth of his son. Holkins writes (often excessively vulgar) comics about video games for a living. He’s a West Coast atheist with a troubled family history who jokes about porn, who never went to college, and who has a multi-million dollar company that runs giant conventions in Seattle, Boston, Australia, and now San Antonio. So, other than that we’re both geeks, we don’t have a lot in common. But when he described the way he felt after watching his wife deliver their son, I knew that there was one deep, defining experience we had in common:

I am not trying to jostle for primacy over the birth act, the utter valor of which is indelible – I’m fairly certain the credit is going to the right people. There is, however, a parallel experience that I never hear much about, something amazing and profound about the helplessness, the desperation of events which are perhaps a million long miles beyond your control. I just want to find other fathers and, looking at them across the aisles in the grocery store, hold my right fist aloft. I am with you.

There are lots more experiences like this, as well. I think of all the times I got advice from mentors–friends and family with more experience than me–about life decisions. What to study in school, whether to buy a house, what to do with my career, how to follow my passions. Time and time again I’ve found that the most important advice was the advice that, no matter how much I earnestly wanted to learn from these people, I just couldn’t follow. I couldn’t follow it because I couldn’t even understand it. It didn’t compute. I might have thought I understood, but I lacked the perspective and the context to see when and how it applied in my life. I only figured out, years and mistakes later, what it was that they had been trying to tell me.

All of this means that the older I get the more I respect my elders. They’ve been there. They’ve been through a lot of the big experiences but also just the accumulated weight of life under uncertainty. They’ve been on the ride longer. The highs, the lows, what changes, what stays the same. I think they know things, things that maybe I won’t be able to understand until I get there myself. My father’s father passed away too young. As the years go by, I find that I miss him more. Not less. I wish he were here.

Maybe he could help me make sense of this crazy world.

Sound and Fury

Let me be clear about what I mean when I say “this crazy world.” I mean the world is full of people who hold such wildly divergent opinions and perspectives that if you try to get out there and really understand what’s motivating them all, your brain feels like it might break under the strain. Humans handle complexity primarily through abstraction. We find patterns, drop the details, and hold onto the narratives. But when the thing that interests and concerns you is precisely the narratives and paradigms other people are seeing the world through, abstraction is easier said than done.

OK, enough generalities; let’s get to some specifics.

Just a few hours ago, a friend posted an article from Mother Jones about How Gun Extremists Target Women. It starts with the experiences of Jennifer Longdon, a woman who uses a wheelchair because she was permanently paralyzed when a random assailant shot her and her fiancé for no apparent reason. Since becoming a vocal advocate for more gun control laws, she has been spat upon, cursed at, threatened, and even had some guy jump out of the bushes at night and spray her with a realistic-looking water gun. My friend’s comment when he posted the article was just, “Wow. Um, wow.” I guess he believes this is representative of a small but vocal minority of gun rights activists? My first reaction was that, hey, I’ve been involved in this movement for years (only loosely, but still) and I’ve never seen any behavior like that.

Except that, hours later, I realized that I kind of had. On one particular gun forum I used to hang out at, things got way out of hand in a heated debate, and the next thing you know people were trying to use the real world to intimidate their ideological foes: everything from digging up personal photos to threatening civil and criminal action. I don’t think death threats were involved–and none of the participants were women–but it was ugly enough that I still have screenshots saved on my computer more than 5 years later, just in case I ever need to defend myself.

Let’s move on rather than analyze. What else have we got? Oh, how about this gem from the Daily Mail about how a respected climate scientist with over 200 publications joined the board of a skeptical organization (the Global Warming Policy Foundation) because:

I thought joining the organisation would provide a platform for me to bring more common sense into the global climate change debate. I have been very concerned about tensions in the climate change community between activists and people who have questions.

So, he tried to bring some reason and cross-partisan talk to a contentious and serious debate? Big mistake. Next thing you know he’s being harassed online, and it got to the point where an American co-author of a paper pulled out because he refused to be associated with someone who was associated with a skeptical organization–even if that person had joined the skeptical organization to try and temper it. Not good enough. Professor Lennart Bengtsson lasted a grand total of three weeks in his new position before the pressure forced him to resign.

Or how about the rash of colleges that have withdrawn invitations to commencement speakers because students protested against allowing anyone insufficiently ideologically pure to contaminate their ears? It’s gotten to the point where The Daily Beast (not exactly a bastion of conservative sensibilities) published an article with a headline proclaiming that The Oh-So-Fragile Class of 2014 Needs to STFU And Listen to Some New Ideas. Olivia Nuzzi writes about how Christine Lagarde (head of the IMF) got uninvited from Smith College’s commencement in the same month that Condoleezza Rice pulled out of a speaking gig at Rutgers. (Nuzzi doesn’t mention a third example that we covered at Difficult Run: Brandeis decided that Ayaan Hirsi Ali didn’t deserve an honorary degree after all.)

Wait, wait. There’s more. How about Neil deGrasse Tyson slamming philosophy–yes, the entire discipline of philosophy–as “useless”? A quick review of his comments is instructive. He frames his stance as objective and pragmatic (i.e. non-ideological), but he seems to lack the philosophical sophistication to realize that, far from brushing philosophy off, what he’s actually doing is engaging in a purge of the wrong kinds of philosophy. Materialist reductionism? That’s fine. It’s just all those other kinds of philosophy that are useless. I guess he has so dogmatically accepted his own particular philosophical stance that he’s forgotten it isn’t an unyielding element of the fabric of the objective universe. It’s just the particular brand of philosophy he happens to prefer.

Meanwhile, the UN is trying to get pro-life perspectives classified as “torture.” No, really. The Center for Reproductive Rights submitted a letter stating that:

CRR respects the right of each individual to freedom of religion and acknowledges the importance of religious institutions in the lives of people, including the role they may play in ensuring respect for human dignity. As with any party to an international human rights treaty, however, the Holy See is bound to respect, protect, and fulfill a range of human rights through its policies and its actions. As such, this letter focuses on violations of key provisions of the Convention against Torture associated with the Holy See’s policies on abortion and contraception, as well as actions taken by the Holy See and its subsidiary institutions to prevent access to reproductive health information and services in countries around the world.

So, freedom of religion is a nice idea, but if it entails opposition to abortion then you’re in contravention of the Convention against Torture. Uh… OK?

And, as long as we’re hitting pretty much every hot-button issue of the day, let’s move right along to gay rights and Hollywood’s Sex Abuse Cover Up. Describing the wall of silence around growing allegations of sexual abuse of children by Hollywood elites, conservative writer Andrew Klavan observed simply that:

If these [people accused of pedophilia] were conservatives, if these were priests, if they were religious people, this would be a huge story. But as it is, it’s gonna get swept under the rug unless more people come forward.

The article describes sexual abuse detailed by Corey Feldman (of The Goonies and Stand by Me) in his new memoir, abuse that started when he was 11, along with allegations of abuse against director Bryan Singer, and then an absurdly white-washed version of history in the film Kill Your Darlings. The film is supposedly a biopic centering on Lucien Carr, who assembled the original Beat Generation. It portrays Carr’s professor David Kammerer as a kind of mentor and possible romantic interest. The reality? Kammerer was a pedophile stalker who sexually abused Carr to such an extent that, when Carr finally fatally stabbed his tormentor with a Boy Scout knife, the history of abuse convinced the judge to be lenient in sentencing him. That history–the real history, according to Carr’s family–is swept under the rug. Maybe this is about preserving the image of the gay community during the height of the gay rights movement, but hey: Hollywood has been a safe haven for child rapists of the heterosexual persuasion too, so maybe it’s just a generic “Your rules don’t apply here” kind of thing.

I started with a kind of anti-conservative example, then moved on to a series of anti-liberal examples, so now let’s get back to conservative nuttiness. This YouTube video hails from 2007, so it’s not new, but it was stomach-churning for me to watch.

In it, a Hindu guest chaplain tries to offer the opening prayer in the Senate when he is shouted down by Christians saying stuff like “forgive us Father for allowing the prayer of the wicked which is an abomination in your sight.” You’d think folks who tend to think God had a hand in founding this nation might have more reverence for the principles of tolerance and religious freedom that went into it. Well, I’d think that. If I weren’t so cynical.

Here’s what these examples all seem to have in common to me. It’s not about politics. It’s not even about a particular issue. It’s about the idea that we shouldn’t be tolerant of views that contradict our own. It’s about the idea that we should squelch views we disagree with, rather than engage them. Gun rights proponents issue death threats to paralyzed women who disagree. Climate scientists sabotage the career and reputation of one of their own when he so much as appears to depart from the orthodox view. College kids block speakers who might disagree with them from being able to speak. The Catholic Church (and, by extension, anyone who is pro-life) gets labeled a torturer in contravention of international norms and human decency. Hollywood directors silence their critics and rewrite history to protect the reputation of favored groups and individuals. Christians won’t even let a man pray just because he has a different faith.

Look, I’m on all sides of these issues (and maybe off the charts on a couple of them), but that’s just my point. It’s not about the issues. This is not civilized, rational, healthy behavior. Here’s my absolute favorite one, though. It delves deep, deep into crazy town to showcase a meeting of Anarcho-Syndicalists getting shut down because Students of Unity refused to allow one of the anarchist professors to speak. (Warning: video has lots of swearing.)

I actually got curious to figure out what was behind the kerfuffle. Apparently Students of Unity are mad that anarchist Kristian Williams wrote some stuff that included “survivor shaming” and “survivor doubt” and that this constitutes “violence.” Williams isn’t feminist enough, and needs to be “accountable for all the people who feel unsafe by the words [he chooses].” I did a little Googling to find the offending piece. It’s called The Politics of Denunciation. It’s absolutely fascinating and spooky to read, because in it Williams writes against exactly what I’ve been describing. He takes a stand against those who try to pre-empt differing views from ever being expressed at all.

The particular target he has in mind is the idea that a survivor of an attack (like sexual assault) must be the only voice allowed to speak at all:

Under this theory, the survivor, and the survivor alone, has the right to make demands, while the rest of us are duty-bound to enact sanctions without question. One obvious implication is that all allegations are treated as fact.

So what he’s saying is, “Hey, just because someone accuses someone else of sexual assault, it doesn’t automatically mean that whatever that person said is true and that no other perspective is relevant.” Seems pretty tame. But the more general argument he makes is that quashing differing views is a bad thing to do:

While attempting to elevate feminism to a place above politics, the organizers’ statement in fact advances a very specific kind of politics. Speaking authoritatively but anonymously, the “Patriarchy and the Movement” organizers declare certain questions off-limits, not only (retroactively) for their own event, but seemingly altogether. These questions cannot be asked because, it is assumed, there is only one answer, and the answer is already known. The answer is, in practice, whatever the survivor says that it is.

It seems like a very obscure, tiny, fringe discussion, but it’s actually not. It’s the same pattern as every single example I’ve described so far. Someone claims to be above politics (as Neil deGrasse Tyson claims to be above philosophy), but in fact they are just trying to elevate a very particular political position beyond question and thereby silence all dissenting views. Williams argued that we should make room for multiple viewpoints. And for that he and his whole panel were shouted down and silenced.

Nowhere To Turn

It may seem that I’m focusing on some weird, esoteric issues. And I’ll definitely admit that what dragged me down this rabbit hole in the first place was my attempt to delve deeper into the SFWA controversy I wrote about a couple of weeks ago. That, in turn, spawned the article that I wrote about trigger warnings. I took a mostly conservative view in those posts because that’s who I am, but maybe the saddest thing about this whole controversy is that, from where I’m standing, there are no good guys and bad guys. I’d love to just toss my hat in with the conservatives and feel like I have a home, but I can’t. I can’t because–much as I have no beef with folks like Wright, Torgersen, and even Correia–the more I dug into what Vox Day had said (and he’s the conservative who really got the liberals angry in the first place) the more I decided that a lot of what the liberals said about him is true. The central allegation is that he’s a racist, and it’s based on comments that he made about African-American writer N. K. Jemisin. So I found the blog post in question, and I read it. Here’s the money paragraph:

Unlike the white males she excoriates, there is no evidence to be found anywhere on the planet that a society of NK Jemisins is capable of building an advanced civilization, or even successfully maintaining one without significant external support from those white males.  If one considers that it took my English and German ancestors more than one thousand years to become fully civilized after their first contact with advanced Greco-Roman civilization, it should be patently obvious that it is illogical to imagine, let alone insist, that Africans have somehow managed to do the same in less than half the time at a greater geographic distance.  These things take time.

In other blog posts, Vox Day denies being a racist. I can see he might be trying to get off on a technicality, something like “it’s not about race, per se, it’s just that civilization takes time, and Africans have been exposed to (Greco-Roman) civilization for a shorter period of time.” Yeah, I’m not buying it, because Vox Day is obviously not arguing in good faith. He talks about the “greater geographic distance” it would take for Africa to become civilized (let’s just not even touch that one) when the person he is calling “half-savage” was born in Iowa.

So there’s your microcosm of what is wrong with the world. We’ve got just enough folks like Vox Day to enable folks like the Students of Unity who shouted down Kristian Williams to feel justified in trying to intimidate anyone into silence who disagrees with them.

And that’s why I want to ask those old folks–those elderly men and women with decades’ worth of life lessons I haven’t experienced yet–is it always like this? I wish someone could tell me it’s gonna get better–or at least that it’s been worse–because it’s kind of lonely and scary to feel that not only have the loonies taken over the asylum, but they’ve broken down the walls, invaded city hall, and taken over there, too.

Because, yeah, I’d love to chalk this up to upstart young idiots not knowing any better and how every generation always thinks the generation after them is going to destroy the world. And that might work for all those stories of college undergraduates protesting against speakers they don’t want to hear, or Students of Unity shouting down anyone they don’t like, but Vox Day is not a kid. The gun control opponents who spit on Longdon are not kids. The Center for Reproductive Rights is not, to my knowledge, run by kids. The Christians who shouted down the Hindu chaplain didn’t sound like kids. These are grown-ups, in theory at least, and they are occasionally in positions of real power.

And yeah, every generation thinks the world is going to hell in a handbasket once the next generation starts to take over, but every now and then they’re right, aren’t they? Sometimes the sky is falling.

Me, I guess I’ll just keep on doing what I’m doing. I’ve got my views, and they are mostly conservative, but I also believe in tolerance and intellectual diversity. Maybe it’s foolish and naive, but I like the idea of having noble ideological adversaries that I oppose, but that engage in a fight that has rules and principles. So, although it’s not as loud or as exciting or as clear-cut as what other sites can offer, that’s what we’ll keep doing here at Difficult Run.

One More Thing: About that Right to Free Speech

XKCD recently had a comic about the right to free speech.

Mouse over text: “I can’t remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you’re saying that the most compelling thing you can say for your position is that it’s not literally illegal to express.”

Technically, of course, it’s correct. But it’s a deeply disturbing view. The right to free speech has always been more than a strict legalism in American culture. It’s always been about more than just freedom from government censorship. It has always involved a culture of tolerance: a view that not only is the government legally prohibited from regulating speech, but also that we as Americans ought to relish our chaotic, free-wheeling marketplace of ideas. No, there’s no law that says people have to listen to you, and there never should be. No, there’s no law that protects people from being told that their speech is crap. And again: there never should be. But when a bunch of people get together and use their own freedom of speech to silence someone they don’t like, it’s a violation of who we are as Americans even if it’s not technically a violation of anyone’s legal rights.

Randall Munroe (author of XKCD) is right on the letter of the law, but he’s wrong on the spirit. I don’t know exactly where to draw the line, and I’m not saying we should never boycott. We have a comment policy here at Difficult Run, after all. Communities need to regulate what is and is not considered acceptable for that community. But I just wish that tolerance–real tolerance of genuinely conflicting ideas–were something that more communities would actively choose to embrace. Not because the law requires them to do so, but just because it’s the right thing to do.

 

Trigger Warnings and Sanity

2014-05-02 Trigger Warnings

If you aren’t familiar with the term, trigger warnings are disclaimers that folks put at the top of blog posts (or other written materials) which they believe may cause post-traumatic stress reactions in some readers. As The Guardian describes it:

In the early days of feminist blogging, trigger warnings were generally about sexual assault, and posted with the understanding that lots of women are sexual assault survivors, lots of women read feminist blogs, and graphic descriptions of rape might lead to panic attacks or other reactions that will really ruin someone’s day. Easy enough to give readers a little heads up – a trigger warning – so that they can decide to avoid that material if they know that discussion of rape triggers debilitating reactions.

This makes sense to me. Although I do not have traumatic experiences in my own past, and am grateful for that fact, even I have been seriously affected by particularly tragic or graphic news stories. I have also seen people have real-world reactions to topics of rape or sexual assault that have convinced me that there is a legitimate concern. I don’t know that I’m really up to speed on exactly when and how to issue a trigger warning, but the general principle seems both sensible and compassionate. Recently, however, I came across a piece in my exploration of the sci-fi political controversy that had nine trigger warnings: “slurs, ableism, racism, sexism, transmisogyny, homophobia, xenophobia, anti-semitism, colonialism.”

I found the whole list a bit odd. I try to be empathic and compassionate, but this seemed to be pushing it. When I got to “colonialism” there at the end, I found I had passed my limit. I just can’t take that seriously. In fact, I think such absurd over-sensitivity is downright counter-productive. For starters, it seems disrespectful to those suffering from Rape Trauma Syndrome to put them in the same category as people who are sad about the history of colonialism. It turns the whole thing into a joke. And that’s not just bad for victims of rape. It’s bad for all of us, because it makes people who care about these kinds of issues look totally insane. Which is why we get pieces like Big Boy Panties from the Mad Genius Club (a group blog run by conservative sci-fi writers):

Seriously. You now need to put a warning label on a blog post or something because somewhere, somehow, someone might have a reaction to something that may or may not cause them to react in a way… that’s a lot of stinking cow excremental right there. Aside from our usual society response to any sort of speech which might deemed “racist” (oh yeah, I used air quotes when I typed that), we now have this burning need to placate individuals who forgot their big boy panties and now must be warned before reading something.

See, if trigger warnings were used exclusively for discussion of rape and sexual assault, I would respond to someone like this by saying, “No, you don’t really get it. There’s a legitimate reason for this.” But I can’t really do that now, because this person will just point to “trigger warning: colonialism” and collapse in a fit of hysterical laughter. I want to stake out a moderate middle position, but it’s hard when the left and right are both doing their absolute darnedest to live down to their stereotypes: irrational sentimentality on the one hand and unflinching callousness on the other. Not that conservatives are the only ones to complain that the trigger warning thing has gone way, way too far. The article from the Guardian that I quoted at the top is actually headlined We’ve gone too far with ‘trigger warnings’, and it has an even more impressive list of trigger warnings than the one I found, including:

misogyny, the death penalty, calories in a food item, terrorism, drunk driving, how much a person weighs, racism, gun violence, Stand Your Ground laws, drones, homophobia, PTSD, slavery, victim-blaming, abuse, swearing, child abuse, self-injury, suicide, talk of drug use, descriptions of medical procedures, corpses, skulls, skeletons, needles, discussion of “isms,” neuroatypical shaming, slurs (including “stupid” or “dumb”), kidnapping, dental trauma, discussions of sex (even consensual), death or dying, spiders, insects, snakes, vomit, pregnancy, childbirth, blood, scarification, Nazi paraphernalia, slimy things, holes and “anything that might inspire intrusive thoughts in people with OCD“.

Seriously. We’ve gone from “rape” to trigger warnings for spiders, holes, and slimy things. But it’s much worse than just over-sensitivity. Colleges are starting to either require trigger warnings or just encourage teachers to remove material from the curricula that might be triggering.

Oberlin College recommends that its faculty “remove triggering material when it does not contribute directly to the course learning goals”. When material is simply too important to take out entirely, the college recommends trigger warnings.

2014-05-02 Things Fall Apart

And we’re not talking hardcore stuff here. The classic work Things Fall Apart (which I read as a freshman) is listed as an example, and requires trigger warnings for: “racism, colonialism, religious persecution, violence, suicide, and more.” I realize that a trigger warning is not the same thing as outright censorship, but the trend is deeply disturbing, and it illustrates the conservative case that even when your intentions are laudable the end result can be sinister. The trigger-warning logic isn’t just about adding disclaimers to what you read; it’s about reading less. It’s about removing objectionable work (and all work can be classified as objectionable on the basis of triggering someone somewhere). It’s not about individuals opting out of particular works as a matter of conscience (as conservatives sometimes do), but about applying rigidly overprotective standards to everyone.

It is deeply and tragically ironic that important literary works by minority voices who come from cultures that have suffered under colonial imperialism are now on the verge of being suppressed by the folks who claim to be the most concerned about colonial imperialism. Shouldn’t we be encouraging more people to read a book by Africa’s leading literary voice–a book that includes discussions of the impact of colonialism on Africa–precisely because the history is so tragic that it can be distressing? Is this what it looks like when radical ideology begins to eat its own tail?

Even The New Republic can see that this trend, especially when it comes to colleges, is both absurd and ominous: Trigger Happy: The “trigger warning” has spread from blogs to college classes. Can it be stopped? The article starts with another collegiate example:

Last week, student leaders at the University of California, Santa Barbara, passed a resolution urging officials to institute mandatory trigger warnings on class syllabi. Professors who present “content that may trigger the onset of symptoms of Post-Traumatic Stress Disorder” would be required to issue advance alerts and allow students to skip those classes.

Sounds OK in principle, but in practice it makes you wonder if there’s anything that won’t need a trigger warning. Over at Rutgers, a sophomore “suggested that an alert for F. Scott Fitzgerald’s The Great Gatsby say, ‘TW: suicide, domestic abuse and graphic violence.’” Really, is there a single work of serious literature that wouldn’t require a whole slew of trigger warnings? Offhand, I can’t think of one.

This, my fellow moderates and sane people on the left and right, is why we can’t have nice things. And let it no longer be said that censorship is primarily a hobby of the right! On the plus side, evangelical Christians who protested that Harry Potter promoted satanism will at least now have some company out there in loony town. Let this be further evidence that the real conflict is not left vs. right, but rather reasonable vs. not-reasonable.

Now, maybe this kind of insanity is just part of life on Earth. I’m sure there’s some truth to that. This is hardly the first time that someone has taken an otherwise good idea too far. That’s pretty much what history is all about. But in conjunction with the political infighting that is splitting the sci-fi community apart and partisanship in the US at all-time highs, I have to wonder if there’s something about social media and the way it lets us democratize the spread of ideas that is turning what used to be a nuisance into a major hazard. Think about the way improving technology led ancient societies to gradually shift from rural to urban communities. A lot of good came from that, but sticking so many people in confined areas created new problems for the spread of disease. Well, on the Internet, “some dumb idea” is the effective equivalent of disease. The whole trigger warning nonsense (and it has become nonsense, even if it didn’t start out that way) is no dumber than stupid ideas of the past, but it has a chance to spread much more widely and quickly.

That’s the downside to the free flow of information, folks. You all get to go and watch TED talks and get your minds expanded, but stupid fads like trigger warnings for spiders and holes get a chance to infect our brains, too. It’s all fun and games when your relatives send stupid stories that they should have checked on Snopes first, but once these infectious idiocies start sprouting up as official policies on college campuses we’re officially in trouble. As long as this is an Internet-only phenomenon, it’s just one more thing to complain about. Once it hits the real world…

The Barbarians at the Gate of Sci-Fi

You might not have heard, but the sci-fi community is currently embroiled in a civil war. Then again, you might actually have heard. Things have gotten so bad that the story hit both USA Today and the Washington Post this week. I want to share the story, and my perspective on it, for two reasons. First: I love sci-fi. But second, and more relevant to a broader audience, the way that political partisanship has torn the sci-fi community apart is a pretty good case study of how partisanship damages the fabric of larger communities.

The Literature of Ideas

2014-04-30 Tom Swift Jr

I love science fiction because it is, as Pamela Sargent called it, “the literature of ideas.” For me the animating spirit of sci-fi is the spirit of inquiry. The genre has less to do with the outer trappings of spaceships or robots and more to do with the simple question: “What if?” This has been true since some of the earliest science fiction works, like Mary Shelley’s Frankenstein. Dr. Frankenstein’s futuristic technology for reanimating corpses is central to the plot and intrinsically interesting, but it’s there in order to support moral questions about the duty of the creator to the created. That setup–envisioning alternate technology in order to frame questions that couldn’t be examined so clearly without the imaginary technology–is the essence of science fiction. That is the sense in which it is truly the literature of ideas.

I understand that Frankenstein isn’t necessarily the first book that people think of when they think of science fiction and that my definition isn’t universal. But it’s not just my random personal opinion, either. In addition to Sargent, sci-fi legend Ursula K. Le Guin recently told the Smithsonian Magazine (speaking about the impact of science fiction on real-world society), “The future is a safe, sterile laboratory for trying out ideas in, a means of thinking about reality, a method.” That method is exactly what I fell in love with.

The first science fiction I ever read was the 1950s-era series Tom Swift, Jr. Even though it was mostly ghost-written fluff for little kids, it couldn’t help but convey a sense of the importance of ideas. Ideas matter. Ideas change how we see the world, and can therefore change what we make of the world. Science fiction isn’t about predicting the future. It is, in a small but real way, about shaping the future.

Politics in Sci-Fi

2014-04-30 John Harris

Of course you can’t get all poetic about the literature of ideas without expecting to find a good deal of politics along for the ride. The Tom Swift, Jr. novels all conveyed a surplus of good ole American patriotism in the best tradition of 1950s Cold War sentiment. Meanwhile, Le Guin’s The Left Hand of Darkness is an inquiry into the social role of gender by examining a hermaphroditic alien race while The Dispossessed explores the tension between human nature and left-wing, Utopian political ideals.

Part of what I love about the genre is that it explicitly talks about all that stuff that you’re not supposed to bring up in polite company: politics, religion, and morality. Sci-fi has always been full of wildly divergent ideas for better and for worse. Mostly for better, in my experience. When writers care more about their craft–about engaging the audience and telling a good story–this usually forces them to be at least a little nuanced and careful in their politics. Most of us don’t like preachy characters or message fiction, even when we might largely agree with them. That’s why I’ve always viewed Heinlein’s writings with a mixture of admiration and tolerant patience, sort of like a crazy uncle who can be forgiven his occasional political ranting because he otherwise tells a good story. A lot of my favorite authors (Le Guin, but also C. J. Cherryh and Lois McMaster Bujold) are folks who, I’m pretty sure, have politics that are far, far away from mine. I love their stuff anyway. And for writers who are closer to my world view, like Larry Correia, there’s no guarantee that I will like what they write just because I agree with some of their political views.

I have no idea how right-wing and left-wing authors got along in decades past, but as far as I can tell they managed just fine. I’m basing this on the fact that some of the old guard have reacted with annoyance and disdain to the politicizing of the current crop of sci-fi authors, but we’ll get back to that in a bit. Meanwhile, the audience had no trouble picking and choosing from a variety of authors. If you wanted to find authors who agreed with your politics, you probably could. Traditional, conservative folks like me might have to work a little harder to find a common voice, but they were out there. And, most importantly, you probably had no real strong desire for political conformity. The audience was perfectly happy to go along with a wide-range of political, religious, and ethical viewpoints. It was part of the experience, and usually a beneficial one all around.

The Political Polarization of Sci-Fi

2014-05-01 Stranger-in-a-Strange-Land

In recent years, this happy little equilibrium has collapsed. It’s impossible for me to know if the reason is technological or political, but it’s probably both. The technological changes include social media and self-publishing. Social media makes it easier for sci-fi authors to interact directly with their fans and also to interact with each other in impersonal and public ways. Self-publishing forces more and more authors to do just that. It’s called platform building, and the idea is that authors have an obligation to get out there and build a brand name. This is especially true for self-published authors, but even traditionally published authors feel the pressure to get out there and be as visible as possible to boost their sales.

It is absolutely not a coincidence that two of the central figures in the current civil war–John Scalzi and Larry Correia–are both relative newcomers to the genre and both at the forefront of those respective technologies. For his part, John Scalzi runs the incredibly popular blog Whatever, which he’s maintained since 1998 (before “blogging” was even a thing). The most prominent recurring feature on Whatever is a segment called “The Big Idea,” in which Scalzi turns over the mic to authors with a book coming out to discuss and promote their work. Scalzi’s blog promotes his own stuff, too, of course. He mentions his Hugo-eligible books and stories whenever nomination time comes around and announces new projects, but he also uses his platform to help out others. It is a very big platform by now. Scalzi also self-published his first sci-fi novel, Old Man’s War, before it was bought by Tor. Scalzi is an outspoken liberal who penned the incredibly famous article about white-male privilege: Straight White Male: The Easiest Difficulty Setting There Is.

On the other side we’ve got Larry Correia. Whereas Scalzi has been a professional writer throughout his entire career (he did film reviews, non-fiction, and corporate writing before he broke into sci-fi in 2005), Correia is an even more recent entrant to the field of professional writing. His breakout hit, Monster Hunter International, was self-published in 2007 before it was picked up by Baen and republished in 2009. He currently has 9 sci-fi books in print (which is about the same as Scalzi, I think) and runs his own incredibly popular blog called Monster Hunter Nation. Where Scalzi is an outspoken liberal, Correia is a Mormon and proud gun nut with generally conservative views.

Both of these gentlemen have a lot of readers and fans–of their books and also of their blogs. As you can imagine, this is a recipe for trouble. In the old days, neither Correia nor Scalzi would have been so well known for their political views, because for the most part the sci-fi audience had to guess at an author’s politics from the fiction that author published. If they even thought about it at all. Which, unless the book appeared to take an overt stance, they probably didn’t. Now there is more awareness but, unfortunately, there are also sides. The blogs are not just a source of information, but also a virtual space for like-minded fans to congregate. It’s no surprise that Scalzi’s blog is stuffed to the gills with commenters who generally agree with his views and applaud his willingness to write about them publicly, and the same goes for Correia, although perhaps less so since Correia takes a more hands-off approach to comment moderation. It’s hard to imagine that this homogeneity doesn’t radicalize the views of Scalzi and Correia at least a little bit, simply because human nature is what it is, and it definitely radicalizes the communities themselves. When contrarians come around to pick a fight, they are seen not just as someone who disagrees, but as representatives of the other tribe. And then, last but not least, there’s the fact that these politicized leaders of politicized tribes can shout at each other in public. Thanks to social media, authors not only interact with their fans more (and in public) but also with each other more (and in public).

None of this is anybody’s fault, really. There are no villains thus far. It’s just the way things are. Technology has consequences for society, and one of the consequences of social media in all its many forms is to make it easier for people to sort themselves into like-minded groups, whether that’s their intent or not. This is why the civil war in sci-fi is perhaps just a smaller example of the larger trends taking place across our society as a whole.

Of course, it’s also possible that Scalzi, Correia, or both are just more politicized than the Old Guard. (I’ve noticed, for example, that a much more experienced writer like C. J. Cherryh definitely has her opinions, but has so far kept completely out of the internecine combat, even when it touches on issues she personally cares about.) I’m not as interested in the theory that newer writers are just more political, for two reasons. First: I just don’t think there’s any way to judge. Second: even if they are, you might still wonder if there’s some reason for that. In other words, it might still come down to a consequence of coming into your own as a professional writer in an era of social networking and platform building. In any case, I’m sticking with a technology-based explanation of political radicalization in the sci-fi community for this post.

What Hath Partisanship Wrought?

2014-05-01 Fahrenheit 451

So what is actually going on? Well, if you ask the liberals, they are just trying to make the sci-fi community safe for minorities by chasing out hatemongers and bigots. If you ask the conservatives, they are trying to keep the sci-fi community safe for free-thinkers by resisting political correctness. The sad irony is that a central strategy of the conservatives is to say intentionally provocative things (you can’t keep a right if you don’t exercise it) and a chief strategy of the liberals is to interpret whatever conservatives say in the worst possible light (to validate their claim that the racists, sexists, etc. have got to go). It looks almost as though the two sides just decided to have the biggest, nastiest, most convoluted fight they could possibly have, and came up with the perfect strategy to escalate and perpetuate it. Call it cooperation or call it co-dependency; it’s ugly by any name. The saddest part? Both sides contain a mix of decent people who really think they are trying to do the right thing, people who seem to have some serious issues unrelated to politics, and plain old trolls. Good luck sorting that out.

In practical terms, there’s an ongoing feud at the SFWA (that’s the professional organization for sci-fi writers, artists, and editors) that culminated in a particularly controversial conservative named Theodore Beale (who writes under the pen name Vox Day) being formally ejected from the organization. You can read a self-declared liberal-slanted recap of that mess here. Lest that make you think the conservatives are the only unreasonable ones, the liberals worked hard to show they could be just as insane when they went bananas over the announcement that Jonathan Ross was going to host the 2014 Hugo Awards. Jonathan Ross, a British comedian who is married to Hugo winner Jane Goldman, had no idea that he was walking into a minefield because (like most of humanity) he didn’t know about the ongoing SFWA feuds. So when a couple of liberals protested (based on no evidence at all that I can see) that he would make fat jokes and make the ceremony hostile for overweight people, he didn’t handle their concerns with kid gloves. The spat blew up on Twitter, with sci-fi author Seanan McGuire getting into a fight with Ross’s daughter, who tried to defend her dad by saying: “I’m Jonathan’s overweight daughter and I assure you that there are few men more kind & sensitive towards women’s body issues.” Yeah, it was that ugly. Understandably, Ross said “the Hell with this” and backed out. Liberals, as a general rule, celebrated their victory, although there were exceptions. Neil Gaiman, for example, said that he was:

seriously disappointed in the people, some of whom I know and respect, who stirred other people up to send invective, obscenities and hatred Jonathan’s way over Twitter (and the moment you put someone’s @name into a tweet, you are sending it to that person), much of it the kind of stuff that they seemed to be worried that he might possibly say at the Hugos, unaware of the ironies involved.

But things really blew up when the Hugo nominations were announced on April 19. It didn’t take long for folks to notice that there were a lot of unexpected names on the list, and that those names corresponded to a slate of conservative-leaning nominees that Larry Correia had promoted on his own site starting back on March 25. Worst of all? The list included a novelette by none other than Vox Day / Theodore Beale. Scalzi responded immediately, although in stark contrast to his polemics earlier in the controversy, he took a moderate, calming approach, headlining his piece: No, The Hugo Nominations Were Not Rigged. Other than throwing a bone to his political allies, dog-whistle style, Scalzi has essentially gone dark on politics since then. My theory is that Scalzi is smart enough to realize that the fight is getting to a point where it will start threatening the genre as a whole. Or he might just be biding his time to unleash after the Hugos, which is what others have explicitly stated they are doing. Meanwhile, some aren’t waiting at all: Tor author John C. Wright (who is friends with Correia and other conservatives) publicly resigned from SFWA in an open letter.

So now the sci-fi community has officially lost its mind. What bothers me the most, however, is to see that even the publishers are starting to get involved. John Scalzi’s editor and friend Patrick Nielsen Hayden has been a loud voice criticizing the conservative side (you have to go into the comments to find it; try #501 or #502). He is also a senior editor at Tor, which recently published a controversial article arguing that sci-fi writers should stop using binary gender in their books. Tor is also where Scalzi publishes. Meanwhile Baen, where Correia lives, is cultivating its reputation as the place where conservatives can flee oppressive liberal Manhattan editors. This sentiment is reflected by Baen author Brad Torgersen along with Larry Correia himself. And Baen editor Toni Weisskopf (guest-posting at the website of Baen author and conservative Sarah Hoyt) gives the impression that Baen as a corporate entity is at least marginally OK with its status as a political refuge.

Let’s recap: back in the day, authors put their politics in their books. Fans, editors, and publishing houses, as far as I can tell, didn’t have any stark partisan divides. Today, authors put their politics out there in blog posts and tweets, which become rallying cries for groups of like-minded fans. Then the fans and the authors get into fights with each other over politics. And, because the community is so small, these fights get personal and nasty very quickly.

Where Does It End?

2014-05-01 Bradbury Cover

I don’t want politics out of sci-fi books, but I do wish we could get politics out of the sci-fi publishing world. I’m not really sure we can just roll the clock back to where things were before. I suspect not, and that makes me sad. On the other hand, this is one of those situations where markets and profit-seeking tend to make people behave more decently rather than less. I suspect money, consciously or unconsciously, has a lot to do with Scalzi’s sudden moderation, and that it also had something to do with Wright quitting the SFWA but not Tor. Baen publishes Lois McMaster Bujold (who I suspect is not conservative) and Tor publishes David Weber (who most definitely is), and these cross-ideological refugees give hope that we can stop things from sliding into full-on, open warfare with the publishers as intentional ideological mouthpieces.

My perspective is one of both fan and hopeful author. I hope that when I’m ready to start submitting my stories, probably in a couple of years, sci-fi will still boast the free-wheeling intellectual, religious, and political diversity that I’ve always loved. Look, I know that as a conservative I will always be viewed with faint suspicion and find myself the odd man out, but part of being a conservative is being willing to deal with bad luck (like finding yourself in a minority position) without complaining. I’m willing to accommodate myself to that reality. All I want is a chance to participate in one big, giant conversation. I don’t want to finally take my turn at writing only to find that the chaotic tapestry of audiences and texts has been replaced by a regimented set of ideologically homogeneous boxes, and that I have to pick just one.

Sci-fi, as the literature of ideas, cannot survive under those conditions.

Disclaimer (added as an update)

I mentioned a lot of individuals by name in this post. I do not know any of them personally, nor do I have any inside information. When it comes to my guesses as to the motivations of named individuals, I’ve tried to be generous and conservative (not in the political sense), but I might still get it wrong. In any case, talking about individuals is not the point; the names are just there to tell the story as best I understand it. The overall trend of political polarization is unmistakable and is based, as I suggest, primarily on general technological trends rather than the actions of any particular author or editor. I really do think there are good and decent people on both sides of the political divide here. And I really do think there are people who have behaved very poorly, but I have not focused on that in this post.