I’m writing this post not as commentary but as one who doesn’t really know what to think about a topic. I’ve been contemplating for weeks now what exactly defines the limits of the law. We have various actions we consider immoral. Within immoral actions, some are illegal, and some are legal. Where do we draw the line? How do we decide which immoral actions to tolerate and which ones to outlaw? I will take for granted that no one here is Lord Devlin (or Judge Dredd) and simply does not believe the law has limits.
As the SEP entry on the limits of the law notes, the harm principle, as articulated by John Stuart Mill, is the best known answer. I know Nathaniel isn’t a big fan of the harm principle, but it does seem like a good place to start a conversation on the limits of the law. The Harm Principle may not end up fully capturing why we make some actions illegal and others not, but it does seem to define the most basic level of law. If absolutely nothing else, we have to keep Jones from murdering Smith.
If we accept the harm principle, the big question then is defining what exactly constitutes harm. Murder once more is a good place to start. Direct, grievous bodily harm seems easy enough to identify as harm. But what about more diffuse harm, like societal harm? If we do recognize diffuse, societal harm, how does one draw a line? We obviously cannot prevent all societal harm. How do we decide what to combat and what to leave alone? Is it simply another utility calculation of greater and lesser harms? And what of self-harm?
A further consideration is when, in attempting to prevent harm, we end up creating more harm instead. The SEP entry addresses this idea as well. Prohibition of alcohol pushes consumption underground and adds criminal elements to a previously legal enterprise. Complete criminalization of prostitution drives vulnerable women further underground and away from law enforcement. To some extent, these problems can be avoided by more intelligent lawmaking (like the Nordic Model on prostitution, which protects prostitutes by making the purchase of sex illegal while the sale of sex remains legal), but intelligent lawmaking can only go so far in the continual struggle with human nature. For example, I don’t think any amount of intelligent lawmaking would have made Prohibition work. In effect, there seems to be a certain amount of pragmatism to lawmaking. We only have so many resources. We cannot change human nature. At what point do we surrender and accept a certain tolerable amount of harm?
Stepping away from the harm principle, what role do morally right action and social cohesion play in lawmaking? The idea (apparently called legal moralism) seems to have been totally banished from the modern mind (particularly among proponents of the harm principle), but I don’t know if we can simply throw it out without a second thought. We shouldn’t go so far as Plato in suggesting that the state should regularly and actively enforce the cultivation of moral virtue, to the point that there simply is no distinction between the moral and the legal, but I think there’s room in between Plato’s Republic on one end and a society whose laws are totally indifferent to virtue and social cohesion on the other. Here I think Lord Devlin is more on target even if I don’t agree with his overall view:
For society is not something that is kept together physically; it is held by the invisible bonds of common thought. If the bonds were too far relaxed the members would drift apart. A common morality is part of the bondage. The bondage is part of the price of society; and mankind, which needs society, must pay its price.
I also suppose I take the ancient view that virtue and social cohesion are one and the same. Moral virtue produces harmony both in the individual person and society at large. Immorality produces disharmony in both the soul and society. But, as mentioned earlier, I recognize the need for pragmatism in this matter. The attempted enforcement of virtue can and often does produce the opposite effect or simply no effect at all. So we get neither virtue nor social cohesion and we waste state resources to boot.
On the topic of legal moralism, the SEP entry starts hitting on a subject that leaves me greatly conflicted: the topic of marriage. I realize people are a bit tired of this topic, but I think it’s great fodder for contemplation on the limits of the law. On the one hand, I recognize marriage as having specific characteristics (monogamous, heterosexual, and permanent) and purposes (the good of the spouses and the procreation of children). These characteristics and purposes are central to the well-being of both individual persons, especially children, and society in general. But then I jump over to another topic, contraception, where I believe a simultaneously personal and societal harm exists, and I have zero interest in making contraception illegal. So what gives? How do I differentiate any legal objections I might have to gay marriage or divorce or polygamy from my total legal acceptance of contraception?
I can already tell this entry is a bit of a mess. I’m jumping around, mixing and matching moral outlooks (like utilitarianism and more virtue-based outlooks), not defining or exploring presuppositions, etc. It’s pretty bad. But I think this mess is still useful for generating a discussion. Let me know what y’all think.
After listening to [Benjamin] Ginsberg’s lecture, do you agree with his assessment that politics is all about interests and power?
The Stuff I Said
Kevin Simler and Robin Hanson’s recent book The Elephant in the Brain demonstrates that these underlying desires for power and status inform many of our decisions and behaviors in everyday life. Politicians certainly do not transcend these selfish motives by virtue of their office. I would actually add a subcategory to “status”: moral grandstanding. We want to paint ourselves as “good people” by signaling to others our superior moral quality. This allows us to enjoy the social capital that comes along with the improved reputation. We not only gain status, but we can also think of ourselves as do-gooders, crusaders who fight the good fight. Unsurprisingly, evidence suggests that we have inflated views of our own moral character and that acts of moral outrage are largely self-serving. What’s unfortunate is that social media may be exacerbating moral outrage by making signaling both easier and less costly to the individual.
I think the rise of populism in both America and Europe is a timely example of interests at play. While various elements contribute to the populist mindset, economic insecurity is the water it swims in. And this insecurity has been exploited by politicians of more extreme ideologies across multiple countries. For example, the Great Recession eroded European trust in mainstream political parties: a one percentage point increase in unemployment was associated with a 2 to 4 percentage point increase in the populist vote. A 2016 study looked at the political results of financial crises in Europe from 1870 to 2014 and found that far-right parties were the typical outcome. In America, President Trump made “Make America Great Again” his rallying cry, feeding off the public’s distrust of “the Establishment” during the post-crisis years. In doing so, he advocated protectionism and tighter borders. Oddly enough, you find comparable populist sentiments on the Left: Bernie Sanders has been very anti-trade and iffy on liberalized immigration (open borders is “a Koch Brothers proposal”), all in the name of helping the American worker. One of his former campaign organizers–the newly-elected Congresswoman Ocasio-Cortez–has also expressed similar concerns over trade deals (especially NAFTA). This is why The Economist sees less of a left/right divide today and more of an open/close divide. Skepticism of trade and immigration wrapped in “power to the people” sentiments may be invigorating in rhetoric, but it’s asinine in practice. And it’s doing nothing more than riding the wave of voter anxiety. What’s worse, it’s hiding these politicians’ accumulation of power, attainment of status, and moral self-aggrandizement behind what Ginsberg so aptly calls “the veneer of public spiritedness.”
A classmate asked if I believed that politicians always acted in self-interest or if there were moral lines that some would not cross. In response, I pointed out that Simler and Hanson are largely arguing against what they see as the tendency for people to tiptoe around hidden motives and self-deception. It’s not that we’re only motivated by selfish motives. We just tend to gloss over them. But they are deeply embedded. Failing to acknowledge them not only has personal consequences, but public ones as well (their chapter on medicine is especially on point). I think we should consider moral motivations through all possible means available, including life experience and behavior. However, I think a healthy dose of skepticism is necessary. It can certainly help protect us against intentional deception. But perhaps more importantly, it helps protect us against unintentional deception. It’s easy to give more weight to life experience, moral principles, and the like when it’s a politician on “our side,” all while harshly judging those on “the other side” as unscrupulous. Political skepticism or cynicism can aid in keeping our own selfish motives and emotional highs in check. And it can lead us to seek out more information, improve our understanding, and refine our beliefs. Otherwise, we end up being consumed by our own good intentions and moral principles without actually learning how to implement these principles.
My classmate also put forth a hypothetical to get a feel for my position: if legislative districts were redrawn so that legislators now represented districts with a different ideological makeup, how many would change their positions on issues just to stay in power? Personally, I think we would see a fair number of politicians shift their position because it is more advantageous. However, there is considerable evidence that political deliberation with ideological opposites actually backfires. Political philosopher Jason Brennan reviews the evidence in chapter 3 of his book Against Democracy and finds that political deliberation:
Exacerbates conflict when groups are different sizes
Avoids debates about facts and is instead driven by status-seeking and positions of influence
Uses language in biased and manipulative ways, usually by painting the opposition as intrinsically bad
Avoids controversial topics and sticks to safe subjects
Amplifies intellectual biases
There’s more, but that should make my point. So even if some politicians did not flip-flop in their newly-drawn districts, the above list should give us pause before we conclude that their doubling down is proof of a lack of interest in status or moral grandstanding.
I certainly believe that people have moral limits and lines they will not cross. My skepticism (which I prefer to the word cynicism, but I’m fine with interchanging them) is largely about honest self-examination and the examination of others. For example, consider something that is generally of no consequence: Facebook status updates. My Facebook feed is often full of political rants, social commentaries, and cultural critiques. Why do we do that? Why post a political tract as a status? It can’t be because of utility. A single Facebook status isn’t going to fix Washington or shift the course of society. It’s unlikely to persuade the unsaved among your Facebook friends. In fact, it’s probably counterproductive given our tendency for motivated reasoning. When we finally rid ourselves of the high-minded rationales that make next to zero sense, we find that it boils down to signaling: we are signaling our tribe. And that feels good. We get “Likes.” We get our worldview confirmed by others. We gain more social capital as a member of the group. We even get to engage in a little moral grandstanding in the face of that friend or two who hold (obviously) wrong, immoral beliefs. Sure, some of it may be about moral conviction and taking a stand. That certainly sounds and feels better. But I think we will all be better off if we realize what those behaviors are really about: sounding and feeling good. And I think our politics will be better off if we apply a similar lens to it.
And More Stuff
A classmate drew on Dan Ariely’s work to argue that people–including politicians–have a “personal fudge factor”: most people will cheat a little bit without feeling they’ve compromised their sense that they are a “good person.” When people are reminded of moral values (in the case of the experiments, the honor code or 10 Commandments), they don’t cheat, including atheists. So while politicians may compromise their values here and there, they still have a moral sense of self that they are unlikely to violate.
In response, I pointed out that a registered replication report last year was unable to reproduce Ariely’s results. That doesn’t mean his results were wrong, just that we need to be cautious in drawing any strong conclusions from them.
When discussing his priming with the 10 Commandments on pg. 635, Ariely references Shariff and Norenzayan’s well-known 2007 study. This found that people behave more prosocially (in this case, generosity in experimental economic games) when primed with religious concepts. They offered a couple explanations for this. One hypothesis suggested that “the religious prime aroused an imagined presence of supernatural watchers…Generosity in cooperative games has been shown to be sensitive to even minor changes that compromise anonymity and activate reputational concerns” (pg. 807). They then cite studies (which later studies confirm) that found people behaving more prosocially in the presence of eye images. “In sum,” the authors write, “we are suggesting that activation of God concepts, even outside of reflective awareness, matches the input conditions of an agency detector and, as a result, triggers this hyperactive tendency to infer the presence of an intentional watcher. This sense of being watched then activates reputational concerns, undermines the anonymity of the situation, and, as a result, curbs selfish behavior” (pg. 807-808). In short, religious priming makes us think someone upstairs is watching us. This has more to do with being seen as good.
However, religious priming obviously doesn’t work for the honor code portion. Yet, Shariff and Norenzayan’s other explanation is actually quite helpful in this regard: “the activation of perceptual conceptual representations increases the likelihood of goals, plans, and motor behavior consistent with those representations…Irrespective of any attempt to manage their reputations, subjects may have automatically behaved more generously when these concepts were activated, much as subjects are more likely to interrupt a conversation when the trait construct ‘rude’ is primed, or much as university students walk more slowly when the ‘elderly’ stereotype is activated (Bargh et al., 1996)” (pg. 807). Being primed with the “honorable student” stereotype, students were more likely to behave honorably (or honestly).
In short, I think Ariely’s study shows a mix of motivations when it comes to behaving morally: (1) maintaining our self-concept as a good person, (2) fear of being caught and having our reputation (and the benefits that come along with it) damaged, and (3) our susceptibility to outside influence.
My point about moral grandstanding is not that we should interpret all behaviors by politicians through the lens of self-delusion and status seeking. But being aware of it can help us cut through a lot of nonsense and avoid being swept up in a collective self-congratulation. To quote Tosi and Warmke, “thinking about grandstanding is a cause for self-reflection, not a call to arms. An argument against grandstanding shouldn’t be used as a cudgel to attack people who say things we dislike. Rather, it’s an encouragement to reassess why and how we speak to one another about moral and political issues. Are we doing good with our moral talk? Or are we trying to convince others that we are good?” And as philosopher David Schmidtz is said to have quipped, if your main goal is to show that your heart is in the right place, then your heart is not in the right place.
I started my MA program in Government at Johns Hopkins University this past month. Homework is therefore going to take up a lot of my time and cut into my blogging. Instead of admitting defeat, I’ve decided to share excerpts from various assignments in a kind of series. I was inspired by the Twitter feed “Sh*t My Dad Says.” While “Sh*t I Say at School” is a funnier title, I’ll go the less vulgar route and name it “Stuff I Say at School.” Some of this material will be familiar to DR readers, but presenting it in a new context will hopefully keep it fresh. So without further ado, let’s dive in.
A recent Pew study showed that millennials are less religiously affiliated than any other previous cohort of Americans (sometimes called the rise of the “nones”). Given the emphasis Tocqueville places on the role religion plays in creating a culture that helps to keep democracy in America anchored, analyze these developments through Tocqueville’s viewpoint[.]
The Stuff I Said
Tocqueville would likely have a strong affinity for Baylor sociologist Rodney Stark’s research on religion. Stark’s sociological analysis of religion takes a similar approach to Tocqueville, acknowledging that the religious competition and pluralism (i.e., religious free market) that resulted from religion’s uncoupling from the state produces a robust, dynamic religious environment. He puts it bluntly in his book The Triumph of Faith: “the more religious competition there is within a society, the higher the overall level of individual participation” (pg. 56). It is the state sponsorship of churches, he claims, that has contributed to Europe’s religious decline.
I was struck by the claim in the lecture that 95% of Americans attended church weekly in the mid-19th century because it contradicts the data collected by Stark and Finke:
On the eve of the Revolution only about 17 percent of Americans were churched. By the start of the Civil War this proportion had risen dramatically, to 37 percent. The immense dislocations of the war caused a serious decline in adherence in the South, which is reflected in the overall decline to 35 percent in the 1870 census. The rate then began to rise once more, and by 1906 slightly more than half of the U.S. population was churched. Adherence rates reached 56 percent by 1926. Since then the rate has been rather stable although inching upwards. By 1980 church adherence was about 62 percent (pg. 22).
Tocqueville might also be more optimistic about the state of America’s religious pulse. For example, Stark has criticized the narrative that often accompanies the “rise of the nones”:
The [Pew] findings would seem to be clear: the number of Americans who say their religious affiliation is “none” has increased from about 8 percent in 1990 to about 22 percent in 2014. But what this means is not so obvious, for, during this same period, church attendance did not decline and the number of atheists did not increase. Indeed, the percentage of atheists in America has stayed steady at about 4 percent since a question about belief in God was first asked in 1944. In addition, except for atheists, most of the other “nones” are religious in the sense that they pray (some pray very often) and believe in angels, in heaven, and even in ghosts. Some are also rather deeply involved in “New Age” mysticisms.
So who are these “nones,” and why is their number increasing–if it is? Back in 1990 most Americans who seldom or never attended church still claimed a religious affiliation when asked to do so. Today, when asked their religious preference, instead of saying Methodist or Catholic, now a larger proportion of nonattenders say “none,” by which most seem to mean “no actual membership.” The entire change has taken place within the nonattending group, and the nonattending group has not grown.
In other words, this change marks a decrease only in nominal affiliation, not an increase in irreligion. So whatever else it may reflect, the change does not support claims for increased secularization, let alone a decrease in the number of Christians. It may not even reflect an increase in those who say they are “nones.” The reason has to do with response rates and the accuracy of surveys (pg. 190).
Finally, Tocqueville was right to recognize the benefits of religion to society. As laid out by Stark in his America’s Blessings (pg. 4-5), religious Americans, compared to their irreligious counterparts, are:
Less likely to commit crimes.
More likely to contribute to charities, volunteer their time, and be active in civic affairs (a recent Pew study provides support for this last one).
Happier, less neurotic, less likely to commit suicide.
More likely to marry, stay married, have children, and be more satisfied in their marriage.
Less likely to abuse their spouse or children.
Less likely to cheat on their spouse.
Performing better on standardized tests.
More successful in their careers.
Less likely to drop out of school.
More likely to consume “high culture.”
Less likely to believe in occult and paranormal phenomena (e.g., Bigfoot, UFOs).
Overall, I think Tocqueville would be pleased to see data back up his observations.
A classmate pointed to a recent study claiming that when one controls for social desirability, the number of atheists in America possibly rises to over a quarter of the population. The study is certainly interesting, though I wonder if this would hold up in other countries. Based on Stark’s The Triumph of Faith, here are the average percentages of atheists across the world:
Latin America: 2.5%
Western Europe: 6.7%
Eastern Europe: 4.6%
Islamic Nations: 1.1%
Sub-Saharan Africa: 0.7%
Other (Australia, Canada, Iceland, New Zealand): 8.4%
As for the unaffiliated Millennials, unchurched and irreligious are two different things. A Pew study from last year found that 72% of the “nones” believe in some kind of higher power, with 17% believing in the “God of the Bible.” Even 67% of self-identified agnostics believe in a higher power, with 3% believing in the “God of the Bible.” But unchurching can lead to other forms of spirituality. The Baylor Religion Survey has found, perhaps surprisingly to some, that traditional forms of religion and high church attendance have strong negative effects on belief in the occult and paranormal. In other words, a regular church-goer is less likely than a non-attendee to believe in things like Atlantis, haunted houses, UFOs, mediums, New Age movements, alternative medicine, etc. This is probably why Millennials are turning to things like astrology, alternative medicine, healing crystals, and the like.
I just finished reading A River in Darkness, the autobiography of a Korean-Japanese man who escaped from North Korea. It’s a tragic and engrossing read, but one detail that stuck out was the way that North Korean bureaucrats forced North Korean farmers to grow rice in ways that even a kid from urban Japan who had never studied agriculture knew were incorrect. (Basically, they planted the rice much too close together.)
It reminded me of another book, The Three-Body Problem, which depicted some of the real-life events of the Cultural Revolution in China, in particular “struggle sessions” in which Chinese professors were publicly humiliated and tortured by a mob in part for refusing to recant scientific principles that had been deemed incompatible with political doctrine.
The communists in North Korea and China were in good company. Matt Ridley, in The Origins of Virtue, recounts the Soviet Union’s own peculiar war on science:
Trofim Lysenko argued, and those who gainsaid him were shot, that wheat could be made more frost-hardy not by selection but by experience. Millions died hungry to prove him wrong. The inheritance of acquired characteristics remained in official doctrine of Soviet biology until 1964.
So in North Korea they insisted on disregarding ancient agricultural knowledge because the Party knew best, up to and including triggering massive starvation. In China they executed, exiled, and fired an entire generation of trained scientists because the Party knew best. And in the Soviet Union they insisted on trying to create frost-resistant wheat by freezing the seeds first and created even more massive starvation. Genetics, quantum mechanics, and common sense: why did the Party think they knew so much?
Let me tell you what got me thinking about this. A friend of mine posted a link to this article from Duke University’s The Chronicle detailing that a graduate program director who urged foreign students studying at Duke to speak English has been forced to step down as a result of her advice. Now, I don’t have enough information about the outrage du jour to have a strong opinion about it. As a matter of basic ethics and common sense, it’s rude and counterproductive to go to a foreign country to study and work and then hang around other people speaking your own language instead of adopting the language of the country you’ve moved to. Of course there are exceptions and I don’t generally think it’s a good idea to enforce every aspect of etiquette and common sense with formal policies, but that’s not really the point. I don’t want to take a strong position on the Duke case because I don’t know or care that much about it.
On the other hand, my friend who posted the article knew everything there is to know about it. I will not quote from the post (it was not shared publicly), but she interpreted everything through the standard lens of racism / colonialism / privilege / etc. and as a result she had zero doubts about anything. She spoke with absolute confidence and black-and-white judgment. Then all of her like-minded friends piled on, congratulating her. She knew and they knew that there was one and only one explanation, one and only one answer, and that it was obvious.
I tried to engage in some discussion, leading with a simple question: have you ever lived in a foreign country and did you insist on speaking your language there? Do you even speak a foreign language? She hasn’t, so she couldn’t, and she doesn’t. (I have, I could but I did not, and I do.) Instead of considering that her view might be wrong, however, she just called for another friend to come in because they were a specialist in linguistic imperialism. So, as far as I know, this friend also has zero relevant experience but has a bigger ideological toolbox to whack people over the head with. Other commenters–even when they were polite–were just as clueless, sharing stories about growing up in bilingual homes or teaching English as a second language at the elementary school level. What do either of these things–interesting as they may be in themselves–have to do with speaking English in a graduate program? Not a single thing.
There are two things going on.
First, radical ideologies are incredibly dangerous things because they enable stupidity on a massive scale. People embrace radical ideologies because they are powerful explanation-machines. Life confronts all of us with ambiguity, complexity, and uncertainty. Also, disappointment and difficulty. Radical ideologies are a perfect antidote to the ambiguity, complexity, and uncertainty. They are, functionally speaking, fulfilling the same role that conspiracy theories do. They don’t improve your life, they aren’t meaningfully accurate, but they make your life explicable. They turn all of the randomness into order. This doesn’t actually make your situation objectively better, but it makes it feel better.
This can be relatively harmless. Radical ideology, conspiracy theories, and superstition have harmless manifestations where they don’t really do anything except waste time in exchange for a false feeling of control. Sure, you might be throwing away money to get your palm read, but it’s not really hurting anyone, right?
Sure, but things get dicier when your kooky explanation-machine happens to target, say, vaccines. Or all of modern psychiatry. Or, heck, modern medicine from start to finish. Even in these cases, the damage is limited to mostly yourself and, in particularly tragic cases, maybe your kids.
But when the explanation-machine that you’ve adopted is a political ideology, we go through a kind of phase-change and things get much, much worse.
Unlike micro explanation machines–superstition and conspiracy theories, for example–political ideologies are macro explanation machines. They have two functions. The first is the same as micro explanation machines: to quickly and easily make your life experiences intelligible. But they don’t stop there. They have a second function, and that function is to accumulate power. And that’s where things go off the rails and we get industrial-scale stupidity enabling.
To illustrate this, we have to understand why it was that Marxists in North Korea planted rice too close together, or Marxists in China executed physicists, or Marxists in the USSR kept using pseudo-science to try and grow frost-resistant wheat. You see, it wasn’t just some kind of weird accident that happened to be harmful, in the way that some people cling to harmless conspiracy theories like Bigfoot and others cling to harmful ones like the anti-vax crowd. Nope, the Marxists in North Korea, China, and the USSR were following a script laid down intentionally and inevitably by Lenin and Stalin.
Here’s philosopher Steven L. Goldman’s recounting:
This imperialism of the scientific world view—that there is such an imperialism—has a kind of, let’s call it, acute support that one doesn’t ordinarily encounter from an odd quarter, and that is from V.I. Lenin and Joseph Stalin. Before he was preoccupied with becoming the head of the government of the Union of Soviet Socialist Republics, Lenin wrote a book called Materialism and Empirio-Criticism in which he harshly criticized Ernst Mach’s philosophy of science, and other philosophies of science influenced by Mach, that denied that the object of scientific knowledge was reality—that denied that scientific knowledge was knowledge of what is real and what is true.
Lenin strongly defended a traditional—not a revolutionary—but a traditional conception of scientific knowledge, because otherwise Marxism itself becomes merely convention. In order to protect the truth of Marxist philosophy of history and of society—in order to protect the idea that Marxist scientific materialism is “True” with a capital “T,” Lenin attacked these phenomenalistic theories, these conventionalistic theories—that we have seen defended by not just Mach, but also by Pierre Duhem, Heinrich Hertz, Henri Poincare, at about the same time that Lenin was writing Materialism and Empirio-Criticism.
Stalin in the 1930s made it clear that the theory of relativity and quantum theory, with its probability distributions as descriptions of nature—”merely” probabilistic descriptions of nature—”merely” I always say in quotation marks—that these were unacceptable in a Communist environment. Again, this is for the same reasons, because Marxist scientific materialism must be true. So, scientific theories that settle for probabilities and that are relative, are misunderstanding that special and general theories of relativity are in fact absolute and deterministic theories.
The willful stupidity of Marxist-Leninist ideology is not an accidental byproduct. It is a direct consequence of the fact that radical political ideologies are not content to be one explanation-machine among many but–as organized political movements in a battle for power–have to fight to be the explanation machine. This leads directly towards conflict between Marxist-Leninist ideology and any other contender, including both science and religion.
When these macro explanation machines aren’t killing millions of people, the absurdity can be hilarious. Here’s Goldman again:
A curious thing happened, namely that Russian physicists of the 1930s, 1940s and even into the 1950s, in books that they published on relativity and quantum theory, had to have a preface in which they made it clear that this book was not about reality—that these theories were not true, but they were very interesting and useful. It was okay to explore, but of course they’re not true because if they were, they would contradict Marxist scientific materialism.
This is quite funny because back in the 16th century, when Copernicus’ On the Revolutions of the Heavenly Spheres was published in 1543, the man who saw it through publication—unbeknownst to Copernicus, who was dying at the time—was a Protestant named Andreas Osiander, who stuck in a preface in order to protect Copernicus, because he knew that the Church would be upset if this theory of the heavens were taken literally. We know Galileo was in trouble for that. We talked a lot about that. So Osiander stuck in a preface saying, “You know, I don’t mean that the Earth really moves, but if you assume that it does, then look how nice and less complicated astronomy is.”
Now, I’m a religious person. I don’t think there’s any unavoidable conflict between religion and science. But when religion becomes a political ideology–as it was in the days of Copernicus and Galileo–then it is functionally equivalent to any other macro explanation machine (like Marxist-Leninism) and you will get the same absurd results (and, more often than not, the same horrific death tolls).
So here’s what I’ve learned. Human evil is never dangerous when it’s obvious. All of the great evils that we recognize today–fascism, slavery, Marxist-Leninism–were attractive in their day. And not to cackling, sinister villains rubbing their hands together with glee at the thought of inflicting evil misery on the world. Ordinary people thought that each of these monstrous evils was reasonable and, in many cases, even preferable.
If you roll that logic forward, it implies that the greatest evils of our time will be non-obvious. The movement that, 40 or 50 years from now, we will revile and disavow is a movement that seems respectable and even attractive to many decent and intelligent people today. It is a macro explanation engine that appeals to people individually because it brings order to their personal narratives and–because it is functioning in the political realm–it is a macro explanation engine that will seek to crowd out all competitors and will therefore be hostile not only to alternative political ideologies but also to micro explanation engines that function in totally disparate realms like religion and science.
And, precisely because it seeks to undermine all other explanation engines even when operating in domains where it has zero utility or applicability, it will be most easily recognized in one way: as a massive enabler of stupidity.
Because that’s what happens when you have a mighty hammer. You start to see nothing but nails.
We spent that time writing academic papers and publishing them in respected peer-reviewed journals associated with fields of scholarship loosely known as “cultural studies” or “identity studies” (for example, gender studies) or “critical theory” because it is rooted in that postmodern brand of “theory” which arose in the late sixties. As a result of this work, we have come to call these fields “grievance studies” in shorthand because of their common goal of problematizing aspects of culture in minute detail in order to attempt diagnoses of power imbalances and oppression rooted in identity.
How did they come up with ideas for papers?:
Sometimes we just thought a nutty or inhumane idea up and ran with it. What if we write a paper saying we should train men like we do dogs—to prevent rape culture? Hence came the “Dog Park” paper. What if we write a paper claiming that when a guy privately masturbates while thinking about a woman (without her consent—in fact, without her ever finding out about it) that he’s committing sexual violence against her? That gave us the “Masturbation” paper. What if we argue that the reason superintelligent AI is potentially dangerous is because it is being programmed to be masculinist and imperialist using Mary Shelley’s Frankenstein and Lacanian psychoanalysis? That’s our “Feminist AI” paper. What if we argued that “a fat body is a legitimately built body” as a foundation for introducing a category for fat bodybuilding into the sport of professional bodybuilding? You can read how that went in Fat Studies.
At other times, we scoured the existing grievance studies literature to see where it was already going awry and then tried to magnify those problems. Feminist glaciology? Okay, we’ll copy it and write a feminist astronomy paper that argues feminist and queer astrology should be considered part of the science of astronomy, which we’ll brand as intrinsically sexist. Reviewers were very enthusiastic about that idea. Using a method like thematic analysis to spin favored interpretations of data? Fine, we wrote a paper about trans people in the workplace that does just that. Men use “male preserves” to enact dying “macho” masculinities discourses in a way society at large won’t accept? No problem. We published a paper best summarized as, “A gender scholar goes to Hooters to try to figure out why it exists.” “Defamiliarizing” common experiences, pretending to be mystified by them and then looking for social constructions to explain them? Sure, our “Dildos” paper did that to answer the questions, “Why don’t straight men tend to masturbate via anal penetration, and what might happen if they did?” Hint: according to our paper in Sexuality and Culture, a leading sexualities journal, they will be less transphobic and more feminist as a result.
We used other methods too, like, “I wonder if that ‘progressive stack’ in the news could be written into a paper that says white males in college shouldn’t be allowed to speak in class (or have their emails answered by the instructor), and, for good measure, be asked to sit on the floor in chains so they can ‘experience reparations.’” That was our “Progressive Stack” paper. The answer seems to be yes, and feminist philosophy titan Hypatia has been surprisingly warm to it. Another tough one for us was, “I wonder if they’d publish a feminist rewrite of a chapter from Adolf Hitler’s Mein Kampf.” The answer to that question also turns out to be “yes,” given that the feminist social work journal Affilia has just accepted it. As we progressed, we started to realize that just about anything can be made to work, so long as it falls within the moral orthodoxy and demonstrates understanding of the existing literature.
What were the results? 7 papers were accepted (including one recognition of excellence), 2 were revised and resubmitted, 1 was still under review, 4 were in limbo, and 6 were rejected. Here are a few highlights:
To put it crudely, the paper argued that men should have the “rape culture” trained out of them in ways similar to dogs. Reviewers described it as “incredibly innovative, rich in analysis, and extremely well-written and organized given the incredibly diverse literature sets and theoretical questions brought into conversation.” More telling, the editor wrote to them:
As you may know, GPC is in its 25th year of publication. And as part of honoring the occasion, GPC is going to publish 12 lead pieces over the 12 issues of 2018 (and some even into 2019). We would like to publish your piece, Human Reactions to Rape Culture and Queer Performativity at Urban Dog Parks in Portland, Oregon, in the seventh issue. It draws attention to so many themes from the past scholarship informing feminist geographies and also shows how some of the work going on now can contribute to enlivening the discipline. In this sense we think it is a good piece for the celebrations. I would like to have your permission to do so.
To sum up, the paper argues that social justice warriors shouldn’t be made fun of, but that they maintain the right to make fun of others. One reviewer wrote, “Given the emphasis on positionality, the argument clearly takes power structures into consideration and emphasizes the voice of marginalized groups, and in this sense can make a contribution to feminist philosophy especially around the topic of social justice pedagogy.” Another thought it was an “Excellent and very timely article!”
Bottom-line: feminazi is apparently a thing. The reviewers found it “interesting,” stating that the “framing and treatment of both neoliberal and choice feminisms [was] well grounded.” In their view, the paper had “potential to generate important dialogue for social workers and feminist scholars.”
If you will excuse the language, this is why others have referred to this brand of scholarship as scholarsh*t.
You can see what other academics are saying about the hoax here.
When David Hume said that “reason is…the slave of the passions, and can never pretend to any other office than to serve and obey them,” he thought it would “appear somewhat extraordinary.” Maybe it did in the mid-18th century, but a 21st century audience takes this assertion in stride. It’s not that human nature has changed. Humans have always held opinions and they’ve always been held for non-rational reasons. What’s changed is that we’re more aware of the extent of our opinions and of their frequently irrational nature.
We’re more aware of this for two reasons. First, the narcissism of social media and the tribally partisan nature of our society make us painfully aware of everybody else’s opinions. As a group, we can’t shut up about the things we think are obviously true, even though things that really are obviously true (like the sky being blue) don’t generally require frequent reminders in the form of snarky memes.
Second, there’s a growing body of research into the reasons and mechanisms by which humans acquire and maintain their beliefs. It’s become so trendy to talk about cognitive biases, for example, that the Wikipedia list of them is becoming a bit of a joke. Still, the underlying premise–that human reason is about convenience and utility rather than about truth–is increasingly undeniable and books like Thinking, Fast and Slow or Predictably Irrational make that undeniable reality common knowledge.
In fact, we can now go further than Hume and say that not only is reason the slave of the passions, but that it is only thanks to the passions that humans evolved the capacity for reason at all. This is known as the Argumentative Theory, which researchers Hugo Mercier and Dan Sperber summarized like this:
Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation.
Oddly enough, I can’t find a Wikipedia article to summarize this theory, but it’s been cited approvingly by researchers I respect like Frans de Waal and Jonathan Haidt, who summarized it this way: “Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments.”
If the theory is right, then the human tendency to believe what is useful and then to express those beliefs in ways that are themselves useful is part of the story of how humanity came to be. This might have been deniable in Hume’s day, requiring an iconoclastic genius to spot it, but it’s becoming a humdrum fact of life in our day.
Our beliefs are instrumental. That is, we believe things because of the usefulness of holding that belief, and that usefulness is only occasionally related to truth. If the belief is about something that’s going to have a frequent and direct effect on our lives–like whether cars go or stop when the light is red–then it is very useful to have accurate beliefs and so our beliefs rapidly converge to reality. But if the belief is about something that is going to have a vague or indeterminate effect on our lives–and almost all political beliefs fall into this category–then there is no longer any powerful, external incentive to corral our beliefs to match reality. What’s more, in many cases it would be impossible to reconcile our beliefs with reality even if we really wanted to because the questions at play are too complicated for anyone to answer with certainty. In those cases, there is nothing to stop us from believing whatever is convenient.
And it’s not just privately-held beliefs that are instrumental. Opinions–the expression of these beliefs–add an additional layer of instrumentality. Not only do we believe what we find convenient to believe, but we also express those beliefs in ways that are convenient. We choose how, when, and where to express our opinions so as to derive the most benefit for the least amount of effort. Benefits of opinions include:
maintaining positive self-image: “I have such smart, benevolent political opinions. I’m such a good person!”
reinforcing community ties: “Look at these smart, benevolent political opinions we have in common!”
defining community boundaries: “These are the smart, benevolent political opinions you have to affirm if you want to be one of us!”
the buzz of moral superiority: “We have such smart, benevolent political opinions. Not like those reprehensible morons over there!”
Opinions aren’t just tools, however. They are also weapons. If you want to understand what I’m talking about, just think of all the political memes you see on your Facebook or Twitter feeds. They are almost always focused on ridiculing and delegitimizing other people. This is about reinforcing community ties and getting high off of moral superiority, but it is also about intimidating the targets of our (ever so righteous) contempt and disdain. We live in an age of weaponized opinion.
Which brings me to the idea of a demilitarized zone.
A demilitarized zone is “an area in which treaties or agreements between nations, military powers or contending groups forbid military installations, activities or personnel.” The term is also used in the context of computers and networking. In that case, a DMZ is a part of a private network that is publicly accessible to other networks, usually the Internet. It’s a tradeoff between accessibility and security, allowing interaction with anonymous, untrusted computers but restricting that access to only specially designated computers in your network that are placed in the DMZ, while the rest of your computers are stored behind a defensive firewall.
The same concepts make sense in an ideological framework.
A typical partisan might have a range of beliefs that looks something like this:
The green section doesn’t represent what is actually good / correct. It represents what a person asserts to be correct / good. The same applies for the red portion. So, these will be different for different people. If you are, for example, someone who is pro-life then the green category will include beliefs like “all living human beings deserve equal rights” and the red portion will include beliefs like “consciousness and self-awareness are required for personhood”. If you are pro-choice, then the chart will look the same but the beliefs will be located in the opposite regions.
And here’s what it looks like if you introduce an ideological DMZ:
The difference here is that we have this whole new region where we are refusing to categorize something as correct / good or incorrect / bad. This may seem like an obvious thing to do. If, for example, you hear a new fact for the first time and you don’t know anything about it, then naturally you should not have an opinion about it until you find out more, right? Well, if humans were rational that would be right. But humans are not rational. We use rationality as a tool when we want to, but we’re just as happy to set it aside when it’s convenient to do so.
And so what actually happens is that when you hear a new proposition, you (automatically and without thinking about it consciously) determine if the new proposition is relevant to any of your strongly-held political opinions. If it is, you identify if it helps or hurts. If it helps, then you accept it as true. Maybe you use the same “fact” in your next debate, or share the article on your timeline, or forward it to your friends. In other words, you stick it into the green bucket. If it hurts, you reject it as false. You attack the credibility of the person who shared the fact or thrust the burden of proof on them or even jump straight to attacking their motives for sharing it in the first place. You stick it in the red bucket.
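That sorting reflex, and the DMZ alternative, can be caricatured in a few lines of code. This is only a toy sketch; the function names and the knowledge threshold are hypothetical, invented purely for illustration:

```python
# Toy model of how new propositions get sorted, with and without an ideological DMZ.
# "helps_my_side": whether the proposition supports a strongly-held belief.
# "knowledge": a rough 0-1 estimate of how much we actually know about the topic.

def sort_without_dmz(helps_my_side: bool) -> str:
    # The reflex: every salient proposition is instantly friend or foe.
    return "green" if helps_my_side else "red"

def sort_with_dmz(helps_my_side: bool, knowledge: float) -> str:
    # With a DMZ, low-knowledge propositions are parked, not categorized.
    if knowledge < 0.5:  # hypothetical threshold
        return "blue"    # "That sounds good. I hope it's true. But I'm not sure yet."
    return "green" if helps_my_side else "red"

print(sort_without_dmz(True))    # green
print(sort_with_dmz(True, 0.1))  # blue
print(sort_with_dmz(False, 0.9)) # red
```

The point of the sketch is only that the second function has a third possible output: refusing to categorize is itself an option.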
If you’re following along so far, you might notice that what we’re talking about is certainty. One of the popular and increasingly well-known facts about human beings and certainty is that certainty and ignorance go hand in hand. The technical term for this is the Dunning-Kruger effect, “a cognitive bias in which people of low ability have illusory superiority and mistakenly assess their cognitive ability as greater than it is.” Even if you’ve never heard that term, however, you’ve probably seen webcomics like this one from Saturday Morning Breakfast Cereal:
The idea of a DMZ is related to these concepts, but it’s not the same. These comics are about the vertical ignorance/certainty problem. Lack of knowledge combined with instrumental beliefs cause people to double-down on convenient beliefs they already have. That’s a real problem, but it’s not the one I’m tackling. I’m talking about a horizontal ignorance/certainty problem. Instead of pouring more and more certainty into (ignorant, but convenient) beliefs that we already have, this problem is about spreading certainty around to different, neighboring beliefs that are new to us.
How does that play out in practice? Well, as a famous study revealed recently, “people who are otherwise very good at math may totally flunk a problem that they would otherwise probably be able to solve, simply because giving the right answer goes against their political beliefs.” That’s because–without a consciously defined and maintained DMZ–they immediately categorize new information into the red or green region even if it means magically becoming bad at math. That’s how strong the temptation is to sort all new information into friend/foe categories, and it’s the reason we need a DMZ.
So what does having a DMZ mean? It means, as I mentioned earlier, that you can easily list several arguments or propositions which might work against your beliefs, but that you don’t reject out of hand because you simply don’t know enough about them. It doesn’t mean you have to accept them. It doesn’t mean you have to reject the belief that they threaten. It doesn’t even mean you have to investigate them right away. It just means you refrain from categorizing them in the red bucket. And you do the same with new information that helps your cause. If it is about a topic you know little about, then you go ahead and put it in that blue bucket. You say, “That sounds good. I hope it’s true. But I’m not sure yet.”
There’s another aspect to this as well. So far I’ve been talking about salient propositions, that is: propositions that directly relate to some of your political beliefs. I’ve been leaving aside irrelevant facts. That’s because–although it’s easy for anyone to stick irrelevant facts in the blue bucket–the distinction between relevant and irrelevant facts is not actually stable or clear cut.
One of the problems with our increasingly political world is that more and more apparently unrelated facts are being incorporated into political paradigms. There’s a cottage industry for journalists to fill quotas by describing apparently innocuous things as racist. A list of Things college professors called ‘racist’ in 2017 includes math, Jingle Bells (the song), and punctuality. This is a controversial topic. Sometimes, articles like this really do reveal incisive critiques of racial inequality that’s not obvious at first. Sometimes conservatives misrepresent or dumb-down these arguments just to make fun of them. But sometimes–like when a kid in my high school class complained that it was sexist to use the term for a female hero (heroine) as the name for a drug (heroin)–the contention really is silly. And so part of the DMZ is also just being a little slower to see new information in a political light. Everything can be political–with a little bit of rhetorical ingenuity–but there’s a big difference between “can” and “should”.
If you don’t have an ideological DMZ yet, I encourage you to start building one today. In networking, a DMZ is a useful way to allow new information to come into your network. An ideological DMZ can fill the same function. It’s a great way to start to dig your way out of an echo chamber or avoid getting trapped in one in the first place. In geopolitics, a DMZ is a great way to deescalate conflict. Once again, an ideological DMZ can fill a similar role. It’s a useful habit to reduce the number of and lower the stakes in the political disagreements that you have.
Even after all these years, North and South Korea are technically still at war. A DMZ is not nearly as good as a nice, long, non-militarized border (like between the US and Canada). And so I have to admit that calling for an ideological DMZ feels a little bit like aiming low. It’s not asking for mutual understanding or a peace treaty, let alone an alliance.
Anger is toxic, and it has no place in ordinary political disputes. I’m very reluctant to add to it.
And yet, it is less with anger and more with a sense of bone-deep bewilderment that I–reluctantly–read a few articles about Alfie Evans.
Alfie is a baby with a severe neurological affliction that–according to doctors–has left him in a vegetative state with no conceivable chance of recovery. This is tragic, and no one is to blame for Alfie’s condition.
The UK courts have decided that no further care should be given to Alfie because there’s no hope of his recovery. This is tragic, but also defensible. It’s not possible to expend unlimited resources on every tragic case, and hard calls have to be made.
But where things stop making sense to me is where the UK government has refused to allow Alfie to be transported to Italy for additional care. Alfie has been granted Italian citizenship, the Italian military sent a plane to the UK to fly him to a hospital in Italy, and all of this was done–one guesses–largely in response to the Pope’s public support for Alfie.
The UK government’s response is, essentially, that Alfie’s parents don’t know what they’re doing. The doctors know better. That may be true. Even the Italian hospital admits it can do no more than keep Alfie alive while doctors study his case. No one thinks there is a miracle cure.
But here’s the thing: why does the UK government, or any group of doctors, get to decide?
It gets more baffling still. Now Alfie’s parents, having given up on the Italian option, just want to take him home. But even that they cannot do unless the doctors say so. In what universe is that a morally defensible position to take? Quoting an anonymous British father:
When my son was born nearly 16 months ago, I found to my amazement that I could not take him home until a paediatrician had signed a small slip of paper, to be handed in at the exit, authorising his release. I joked to my wife that we were only parenting under licence from the State. It seems less of a joke now.
The last straw–and the cause of the anger I can’t deny I feel about this–is the insufferable arrogance of the UK politicians and medical experts. For example:
Lord Justice McFarlane said parents, like those of Alfie Evans, could be vulnerable to receiving bad medical advice, adding that there was evidence that the parents made decisions based on incorrect guidance.
Hospital officials at Alder Hey say they have received “unprecedented personal abuse” from the global backlash to Alfie’s case. The Liverpool hospital has faced several protests in recent weeks, organized by a group calling itself “Alfie’s Army.”
“Having to carry on our usual day-to-day work in a hospital that has required a significant police presence just to keep our patients, staff and visitors safe is completely unacceptable,” the hospital’s chairman, Sir David Henshaw, and chief executive Louise Shepherd said.
Oh, is it “completely unacceptable” for people to protest what is essentially government-sanctioned kidnapping? I’m so sorry! I come from this crazy moral universe where parents–and not the government–are the guardians of their own children.
Or here’s another one:
“Sometimes, the sad fact is that parents do not know what is best for their child,” Wilkinson said. “They are led by their grief and their sadness, their understandable desire to hold on to their child, to request treatment that will not and cannot help.”
The UK was, in many ways, the birthplace of our political heritage of individual liberty and rights. It’s mystifying–and tragic–to see the sorry state of decay it has fallen into today.
So tell me, folks, am I missing some really vital aspects to this story that make it something other than a micro-dystopia?
I recently had an interesting political exchange–as have basically all of us, these days–in which I was called out for not being nice enough. At least, that’s how I interpreted it. My interlocutor suggested that my argument was deficient because I hadn’t started out by finding something we could agree on before launching my critique. A critique that was, just for the record, entirely civil and on-point. At no point did I get personal and there was no allegation that I had. The problem wasn’t that I had been rude, uncivil, or anything like that. The problem was that I hadn’t been nice enough.
Now, OK, it never hurts to be nice, right? Speaking as a purely practical matter, shouldn’t we always try to express our beliefs in as non-abrasive a way as possible? You get more flies with honey, and all that. So, what’s the harm in accepting as a new rule of debate the general principle that we should always find a point of common ground first and only then engage the issues directly? What kind of a person disagrees with this? Surely only a heartless and soulless person, and why would we want to listen to what someone like that has to say, anyway?
And that, my friends, is why I dislike the tyranny of kindness.
The problem with it is that it’s only a tiny jump from saying, “Why not be nice?” to then saying, “If you’re not nice, nothing you say matters.” And “nice” is an awfully subjective term. There is no logical reason why a general rule of thumb to look for common ground should lead to exiling some people from discussion for not following arbitrary rituals, but–given the incentives of political discourse–the outcome is inevitable.
I realize I’m swimming upstream here, so let me try a different tack and see if I can make some headway.
Requiring people to be nice enough in their debates is discriminatory against non-neurotypical people. The term “neurotypical” is one of those neologisms like “cissexual” that is invented to describe the category of people who didn’t need a description before because they’re just, well, normal baseline humans. A cissexual is someone who identifies as the gender that matches their birth sex. Neurotypical means “not displaying or characterized by autistic or other neurologically atypical patterns of thought or behavior.” So, people who aren’t on the autism spectrum are neurotypical.
Neurotypical people have no problem conforming with this new minimum requirement to engage in public discourse. They are, by definition, able to conform with expected social conventions. It is easy and natural for them to both interpret ordinary social cues and conform their own behavior–including written communication–to standard expectations. A neurotypical can easily come across as nice with minimal effort. Someone who is not neurotypical, well, they might have a harder time. For them, the requirement to “just be nice” is not actually something incidental. It’s something that requires an awful lot of conscious effort and attention, if it’s attainable at all.
So our seemingly benign call to emphasize niceness in discourse functions–whether we intended to or not–as a form of bigotry that excludes a certain class of people from discussion.
Which doesn’t sound very nice, does it?
I am not merely playing games here. This isn’t a theoretical problem, it’s a real one. Gender, as the saying goes, is performative. So is all human speech. And we’re not all equally good at it. Tying the validity of a person’s argument–the worth of their viewpoint–to their capability and/or willingness to perform well enough is not a benign requirement. It’s not just that it might lead to unfair applications; it is intrinsically exclusionary and debilitating. Which is exactly why it’s so increasingly popular. Calling on people to be nice isn’t neutral. It’s a power-play. Which is why–in other contexts–minorities have long rejected it as “tone policing”.
Look at that, I’m agreeing with an aspect of social justice ideology. Will wonders never cease?
I’ll be clear about what I’m saying here: refraining from personal attacks and incendiary language is a reasonable minimum standard for any discussion. You should be able to avoid meanness. Don’t insult people. Don’t troll. Don’t humiliate or mock people. These things we can expect, and should expect, because the toxicity ruins discourse.
But that’s it. That’s the extent of what it makes sense to require from people in a debate. The “thou shalt nots” are sufficient. There’s no reason–or excuse–to start adding “thou shalts” to the mix as well. Don’t expect people to proactively express their empathy. Don’t expect them to follow rules like, “always start every disagreement by first finding common ground.” Don’t get me wrong, these things can be great practices. I’m not saying anyone shouldn’t do them. They can be very powerful, practically speaking, and certainly can make debate more pleasant.
I’m just saying that they shouldn’t be transmuted from “nice-to-haves” into “minimum requirements” because when we do that we engage in the tyranny of kindness. We insinuate prejudice and bigotry into our discussions, and we make it inevitable for perverse incentives to lead to defining “nice” in such a way that a person cannot disagree without violating the norm. This is already commonplace. To have a different opinion on certain hot-button social issues–abortion, sexuality, transgenderism, gun-rights, etc.–is defined as being not-nice. After all, the best way to win a debate is to bar your opponent from showing up, and that’s what happens as soon as we start imposing any kind of ritualistic performance requirements.
I try very, very hard to be civil. I also try to be empathic although, for me, that’s not easy. It does require a lot of effort. I have worked deliberately and conscientiously for many, many years to come across better in online communication (political or not) and I’m still a work in progress. I don’t want anyone to misunderstand me as calling for worse behavior online. We’ve got enough toxicity.
I’m just calling for moderation. Expect your opponents to not be abusive.
But don’t expect–or attempt to require–that they validate you, either.
I’m once again behind on my book reviews, so here’s a list of the books I’ve read recently, their descriptions, and accompanying videos.
Stephen Prothero, Religious Literacy: What Every American Needs to Know–And Doesn’t (HarperCollins, 2007): “The United States is one of the most religious places on earth, but it is also a nation of shocking religious illiteracy.
Only 10 percent of American teenagers can name all five major world religions and 15 percent cannot name any.
Nearly two-thirds of Americans believe that the Bible holds the answers to all or most of life’s basic questions, yet only half of American adults can name even one of the four gospels and most Americans cannot name the first book of the Bible.
Despite this lack of basic knowledge, politicians and pundits continue to root public policy arguments in religious rhetoric whose meanings are missed—or misinterpreted—by the vast majority of Americans. “We have a major civic problem on our hands,” says religion scholar Stephen Prothero. He makes the provocative case that to remedy this problem, we should return to teaching religion in the public schools. Alongside “reading, writing, and arithmetic,” religion ought to become the “Fourth R” of American education. Many believe that America’s descent into religious illiteracy was the doing of activist judges and secularists hell-bent on banishing religion from the public square. Prothero reveals that this is a profound misunderstanding. “In one of the great ironies of American religious history,” Prothero writes, “it was the nation’s most fervent people of faith who steered us down the road to religious illiteracy. Just how that happened is one of the stories this book has to tell.” Prothero avoids the trap of religious relativism by addressing both the core tenets of the world’s major religions and the real differences among them. Complete with a dictionary of the key beliefs, characters, and stories of Christianity, Islam, and other religions, Religious Literacy reveals what every American needs to know in order to confront the domestic and foreign challenges facing this country today” (Amazon).
Steven Reiss, The 16 Strivings for God: The New Psychology of Religious Experience (Mercer University Press, 2015): “This ground-breaking work will change the way we understand religion. Period. Previous scholars such as Freud, James, Durkheim, and Maslow did not successfully identify the essence of religion as fear of death, mysticism, sacredness, communal bonding, magic, or peak experiences because religion has no single essence. Religion is about the values motivated by the sixteen basic desires of human nature. It has mass appeal because it accommodates the values of people with opposite personality traits. This is the first comprehensive theory of the psychology of religion that can be scientifically verified. Reiss proposes a peer-reviewed, original theory of mysticism, asceticism, spiritual personality, and hundreds of religious beliefs and practices. Written for serious readers and anyone interested in psychology and religion (especially their own), this eminently readable book will revolutionize the psychology of religious experience by exploring the motivations and characteristics of the individual in their religious life” (Amazon).
Alfred R. Mele, Free: Why Science Hasn’t Disproved Free Will (Oxford University Press, 2014): “Does free will exist? The question has fueled heated debates spanning from philosophy to psychology and religion. The answer has major implications, and the stakes are high. To put it in the simple terms that have come to dominate these debates, if we are free to make our own decisions, we are accountable for what we do, and if we aren’t free, we’re off the hook. There are neuroscientists who claim that our decisions are made unconsciously and are therefore outside of our control and social psychologists who argue that myriad imperceptible factors influence even our minor decisions to the extent that there is no room for free will. According to philosopher Alfred R. Mele, what they point to as hard and fast evidence that free will cannot exist actually leaves much room for doubt. If we look more closely at the major experiments that free will deniers cite, we can see large gaps where the light of possibility shines through. In Free: Why Science Hasn’t Disproved Free Will, Mele lays out his opponents’ experiments simply and clearly, and proceeds to debunk their supposed findings, one by one, explaining how the experiments don’t provide the solid evidence for which they have been touted. There is powerful evidence that conscious decisions play an important role in our lives, and knowledge about situational influences can allow people to respond to those influences rationally rather than with blind obedience. Mele also explores the meaning and ramifications of free will. What, exactly, does it mean to have free will — is it a state of our soul, or an undefinable openness to alternative decisions? Is it something natural and practical that is closely tied to moral responsibility? Since evidence suggests that denying the existence of free will actually encourages bad behavior, we have a duty to give it a fair chance” (Amazon).
Brink Lindsey, Human Capitalism: How Economic Growth Has Made Us Smarter–and More Unequal (Princeton University Press, 2013): “What explains the growing class divide between the well educated and everybody else? Noted author Brink Lindsey, a senior scholar at the Kauffman Foundation, argues that it’s because economic expansion is creating an increasingly complex world in which only a minority with the right knowledge and skills–the right “human capital”–reap the majority of the economic rewards. The complexity of today’s economy is not only making these lucky elites richer–it is also making them smarter. As the economy makes ever-greater demands on their minds, the successful are making ever-greater investments in education and other ways of increasing their human capital, expanding their cognitive skills and leading them to still higher levels of success. But unfortunately, even as the rich are securely riding this virtuous cycle, the poor are trapped in a vicious one, as a lack of human capital leads to family breakdown, unemployment, dysfunction, and further erosion of knowledge and skills. In this brief, clear, and forthright eBook original, Lindsey shows how economic growth is creating unprecedented levels of human capital–and suggests how the huge benefits of this development can be spread beyond those who are already enjoying its rewards” (Amazon).
Gretchen Rubin, Better Than Before: What I Learned About Making and Breaking Habits–to Sleep More, Quit Sugar, Procrastinate Less, and Generally Build a Happier Life (Broadway Books, 2015): “How do we change? Gretchen Rubin’s answer: through habits. Habits are the invisible architecture of everyday life. It takes work to make a habit, but once that habit is set, we can harness the energy of habits to build happier, stronger, more productive lives. So if habits are a key to change, then what we really need to know is: How do we change our habits? Better than Before answers that question. It presents a practical, concrete framework to allow readers to understand their habits—and to change them for good. Infused with Rubin’s compelling voice, rigorous research, and easy humor, and packed with vivid stories of lives transformed, Better than Before explains the (sometimes counter-intuitive) core principles of habit formation. Along the way, Rubin uses herself as guinea pig, tests her theories on family and friends, and answers readers’ most pressing questions—oddly, questions that other writers and researchers tend to ignore:
• Why do I find it tough to create a habit for something I love to do?
• Sometimes I can change a habit overnight, and sometimes I can’t change a habit, no matter how hard I try. Why?
• How quickly can I change a habit?
• What can I do to make sure I stick to a new habit?
• How can I help someone else change a habit?
• Why can I keep habits that benefit others, but can’t make habits that are just for me?
Whether readers want to get more sleep, stop checking their devices, maintain a healthy weight, or finish an important project, habits make change possible. Reading just a few chapters of Better Than Before will make readers eager to start work on their own habits—even before they’ve finished the book” (Amazon).
Drew Magary, The Hike: A Novel (Penguin, 2016): “When Ben, a suburban family man, takes a business trip to rural Pennsylvania, he decides to spend the afternoon before his dinner meeting on a short hike. Once he sets out into the woods behind his hotel, he quickly comes to realize that the path he has chosen cannot be given up easily. With no choice but to move forward, Ben finds himself falling deeper and deeper into a world of man-eating giants, bizarre demons, and colossal insects. On a quest of epic, life-or-death proportions, Ben finds help comes in some of the most unexpected forms, including a profane crustacean and a variety of magical objects, tools, and potions. Desperate to return to his family, Ben is determined to track down the “Producer,” the creator of the world in which he is being held hostage and the only one who can free him from the path. At once bitingly funny and emotionally absorbing, Magary’s novel is a remarkably unique addition to the contemporary fantasy genre, one that draws as easily from the world of classic folk tales as it does from video games. In The Hike, Magary takes readers on a daring odyssey away from our day-to-day grind and transports them into an enthralling world propelled by heart, imagination, and survival” (Amazon).
I recently came across a 2012 paper by philosopher Michael Huemer titled “In Praise of Passivity.” Given our current political climate, I found the paper rather wise:
When it comes to political issues, we usually should not fight for what we believe in. Fighting for something, as I understand the term, involves fighting against someone. If one’s goal faces no (human) opposition, then one might be described as working for a cause (for instance, working to reduce tuberculosis, working to feed the poor) but not fighting for it. Thus, one normally fights for a cause only when what one is promoting is controversial. And most of the time, those who promote controversial causes do not actually know whether what they are promoting is correct, however much they may think they know…[T]hey are fighting in order to have the experience of fighting for a noble cause, rather than truly seeking the ideals they believe themselves to be seeking.
Fighting for a cause has significant costs. Typically, one expends a great deal of time and energy, while simultaneously imposing costs on others, particularly those who oppose one’s own political position. This time and energy is very likely to be wasted, since neither side knows the answer to the issue over which they contend. In many cases, the effort is expended in bringing about a policy that turns out to be harmful or unjust. It would be better to spend one’s time and energy on aims that one knows to be good.
Thus, suppose you are deciding between donating time or money to Moveon.org (a left-wing political advocacy group) and donating time or money to the Against Malaria Foundation (a charity that fights malaria in the developing world). For those concerned about human welfare, the choice should be clear. Donations to Moveon.org may or may not affect public policy, and if they do, the effect may be either good or bad–that is a matter for debate. But donations to Against Malaria definitely save lives. No one disputes that.
There are exceptions to the rule that one should not fight for causes. Sometimes, people find it necessary to fight for a cause, despite that the cause is obviously and uncontroversially good–as in the case of fighting to end human rights violations in a dictatorial regime. In this case, one’s opponents are simply corrupt or evil. Occasionally, a person knows some cause to be correct, even though it is controversial among the general public. This may occur because the individual possesses expertise that the public lacks, and the public has chosen to ignore the expert consensus. But these are a minority of the cases. Most individuals fighting for causes do not in fact know what they are doing.
Popular wisdom often praises those who get involved in politics, who vote in democratic elections, fight for a cause they believe in, and try to make the world a better place. We tend to assume that such individuals are moved by high ideals and that, when they change the world, it is usually for the better.
The clear evidence of human ignorance and irrationality in the political arena poses a serious challenge to the popular wisdom. Lacking awareness of basic facts of their political systems, to say nothing of the more sophisticated knowledge that would be needed to reliably resolve controversial political issues, most citizens can do no more than guess when they enter the voting booth. Far from being a civic duty, the attempt to influence public policy through such arbitrary guesses is unjust and socially irresponsible. Nor have we any good reason to think political activists or political leaders to be any more reliable in arriving at correct positions on controversial issues; those who are most politically active are often the most ideologically biased, and may therefore be even less reliable than the average person at identifying political truths. In most cases, therefore, political activists and leaders act irresponsibly and unjustly when they attempt to impose their solutions to social problems on the rest of society.
…Political leaders, voters, and activists are well-advised to follow the dictum, often applied to medicine, to “first, do no harm.” A plausible rule of thumb, to guard us against doing harm as a result of overconfident ideological beliefs, is that one should not forcibly impose requirements or restrictions on others unless the value of those requirements or restrictions is essentially uncontroversial among the community of experts in conditions of free and open debate. Of course, even an expert consensus may be wrong, but this rule of thumb may be the best that such fallible beings as ourselves can devise.
So, the next time you get the itch to raise awareness about some controversial political issue, Huemer suggests…