Anger is toxic, and it has no place in ordinary political disputes. I’m very reluctant to add to it.
And yet, it is less with anger and more with a sense of bone-deep bewilderment that I–reluctantly–read a few articles about Alfie Evans.
Alfie is a baby with a severe neurological affliction that–according to doctors–has left him in a vegetative state with no conceivable chance of recovery. This is tragic, and no one is to blame for Alfie’s condition.
The UK courts have decided that no further care should be given to Alfie because there’s no hope of his recovery. This is tragic, but also defensible. It’s not possible to expend unlimited resources on every tragic case, and hard calls have to be made.
But where things stop making sense to me is where the UK government has refused to allow Alfie to be transported to Italy for additional care. Alfie has been granted Italian citizenship, the Italian military sent a plane to the UK to fly him to a hospital in Italy, and all of this was done–one guesses–largely in response to the Pope’s public support for Alfie.
The UK government’s response is, essentially, that Alfie’s parents don’t know what they’re doing. The doctors know better. That may be true. Even the Italian hospital admits it can do no more than keep Alfie alive while doctors study his case. No one thinks there is a miracle cure.
But here’s the thing: why does the UK government, or any group of doctors, get to decide?
It gets more baffling still. Now Alfie’s parents, having given up on the Italian option, just want to take him home. But even that they cannot do unless the doctors say so. In what universe is that a morally defensible position to take? Quoting an anonymous British father:
When my son was born nearly 16 months ago, I found to my amazement that I could not take him home until a paediatrician had signed a small slip of paper, to be handed in at the exit, authorising his release. I joked to my wife that we were only parenting under licence from the State. It seems less of a joke now.
The last straw–and the cause of the anger I can’t deny I feel about this–is the insufferable arrogance of the UK politicians and medical experts. For example:
Lord Justice McFarlane said parents, like those of Alfie Evans, could be vulnerable to receiving bad medical advice, adding that there was evidence that the parents made decisions based on incorrect guidance.
Hospital officials at Alder Hey say they have received “unprecedented personal abuse” from the global backlash to Alfie’s case. The Liverpool hospital has faced several protests in recent weeks, organized by a group calling itself “Alfie’s Army.”
“Having to carry on our usual day-to-day work in a hospital that has required a significant police presence just to keep our patients, staff and visitors safe is completely unacceptable,” the hospital’s chairman, Sir David Henshaw, and chief executive Louise Shepherd said.
Oh, is it “completely unacceptable” for people to protest what is essentially government-sanctioned kidnapping? I’m so sorry! I come from this crazy moral universe where parents–and not the government–are the guardians of their own children.
Or here’s another one:
“Sometimes, the sad fact is that parents do not know what is best for their child,” Wilkinson said. “They are led by their grief and their sadness, their understandable desire to hold on to their child, to request treatment that will not and cannot help.”
The UK was, in many ways, the birthplace of our political heritage of individual liberty and rights. It’s mystifying–and tragic–to see the sorry state of decay it has fallen into today.
So tell me, folks, am I missing some really vital aspects to this story that make it something other than a micro-dystopia?
I recently had an interesting political exchange–as have basically all of us, these days–in which I was called out for not being nice enough. At least, that’s how I interpreted it. My interlocutor suggested that my argument was deficient because I hadn’t started out by finding something we could agree on before launching my critique. A critique that was, just for the record, entirely civil and on-point. At no point did I get personal and there was no allegation that I had. The problem wasn’t that I had been rude, uncivil, or anything like that. The problem was that I hadn’t been nice enough.
Now, OK, it never hurts to be nice, right? Speaking as a purely practical matter, shouldn’t we always try to express our beliefs in as non-abrasive a way as possible? You catch more flies with honey, and all that. So, what’s the harm in accepting as a new rule of debate the general principle that we should always find a point of common ground first and only then engage the issues directly? What kind of a person disagrees with this? Surely only a heartless and soulless person, and why would we want to listen to what someone like that has to say, anyway?
And that, my friends, is why I dislike the tyranny of kindness.
The problem with it is that it’s only a tiny jump from saying, “Why not be nice?” to then saying, “If you’re not nice, nothing you say matters.” And “nice” is an awfully subjective term. There is no logical reason why a general rule of thumb to look for common ground should lead to exiling some people from discussion for not following arbitrary rituals, but–given the incentives of political discourse–the outcome is inevitable.
I realize I’m swimming upstream here, so let me try a different tack and see if I can make some headway.
Requiring people to be nice enough in their debates is discriminatory against non-neurotypical people. The term “neurotypical” is one of those neologisms like “cissexual” that is invented to describe the category of people who didn’t need a description before because they’re just, well, normal baseline humans. A cissexual is someone who identifies as the gender that matches their birth sex. Neurotypical means “not displaying or characterized by autistic or other neurologically atypical patterns of thought or behavior.” So, people who aren’t on the autism spectrum are neurotypical.
Neurotypical people have no problem conforming with this new minimum requirement to engage in public discourse. They are, by definition, able to conform with expected social conventions. It is easy and natural for them to both interpret ordinary social cues and conform their own behavior–including written communication–to standard expectations. A neurotypical can easily come across as nice with minimal effort. Someone who is not neurotypical, well, they might have a harder time. For them, the requirement to “just be nice” is not actually something incidental. It’s something that requires an awful lot of conscious effort and attention, if it’s attainable at all.
So our seemingly benign call to emphasize niceness in discourse functions–whether we intend it or not–as a form of bigotry that excludes a certain class of people from discussion.
Which doesn’t sound very nice, does it?
I am not merely playing games here. This isn’t a theoretical problem, it’s a real one. Gender, as the saying goes, is performative. So is all human speech. And we’re not all equally good at it. Tying the validity of a person’s argument–the worth of their viewpoint–to their capability and/or willingness to perform well enough is not a benign requirement. It’s not just that it might lead to unfair applications; it is intrinsically exclusionary and debilitating. Which is exactly why it’s so increasingly popular. Calling on people to be nice isn’t neutral. It’s a power-play. Which is why–in other contexts–minorities have long rejected it as “tone policing.”
Look at that, I’m agreeing with an aspect of social justice ideology. Will wonders never cease?
I’ll be clear about what I’m saying here: refraining from personal attacks and incendiary language is a reasonable minimum standard for any discussion. You should be able to avoid meanness. Don’t insult people. Don’t troll. Don’t humiliate or mock people. These things we can expect, and should expect, because the toxicity ruins discourse.
But that’s it. That’s the extent of what it makes sense to require from people in a debate. The “thou shalt nots” are sufficient. There’s no reason–or excuse–to start adding “thou shalts” to the mix as well. Don’t expect people to proactively express their empathy. Don’t expect them to follow rules like, “always start every disagreement by first finding common ground.” Don’t get me wrong, these things can be great practices. I’m not saying people shouldn’t do them. They can be very powerful, practically speaking, and certainly can make debate more pleasant.
I’m just saying that they shouldn’t be transmuted from “nice-to-haves” into “minimum requirements” because when we do that we engage in the tyranny of kindness. We insinuate prejudice and bigotry into our discussions, and we make it inevitable for perverse incentives to lead to defining “nice” in such a way that a person cannot disagree without violating the norm. This is already commonplace. To have a different opinion on certain hot-button social issues–abortion, sexuality, transgenderism, gun-rights, etc.–is defined as being not-nice. After all, the best way to win a debate is to bar your opponent from showing up, and that’s what happens as soon as we start imposing any kind of ritualistic performance requirements.
I try very, very hard to be civil. I also try to be empathetic although, for me, that’s not easy. It does require a lot of effort. I have worked deliberately and conscientiously for many, many years to come across better in online communication (political or not) and I’m still a work in progress. I don’t want anyone to misunderstand me as calling for worse behavior online. We’ve got enough toxicity.
I’m just calling for moderation. Expect your opponents to not be abusive.
But don’t expect–or attempt to require–that they validate you, either.
I’m once again behind on my book reviews, so here’s a list of the books I’ve read recently, their descriptions, and accompanying videos.
Stephen Prothero, Religious Literacy: What Every American Needs to Know–And Doesn’t (HarperCollins, 2007): “The United States is one of the most religious places on earth, but it is also a nation of shocking religious illiteracy.
Only 10 percent of American teenagers can name all five major world religions and 15 percent cannot name any.
Nearly two-thirds of Americans believe that the Bible holds the answers to all or most of life’s basic questions, yet only half of American adults can name even one of the four gospels and most Americans cannot name the first book of the Bible.
Despite this lack of basic knowledge, politicians and pundits continue to root public policy arguments in religious rhetoric whose meanings are missed—or misinterpreted—by the vast majority of Americans. “We have a major civic problem on our hands,” says religion scholar Stephen Prothero. He makes the provocative case that to remedy this problem, we should return to teaching religion in the public schools. Alongside “reading, writing, and arithmetic,” religion ought to become the “Fourth R” of American education. Many believe that America’s descent into religious illiteracy was the doing of activist judges and secularists hell-bent on banishing religion from the public square. Prothero reveals that this is a profound misunderstanding. “In one of the great ironies of American religious history,” Prothero writes, “it was the nation’s most fervent people of faith who steered us down the road to religious illiteracy. Just how that happened is one of the stories this book has to tell.” Prothero avoids the trap of religious relativism by addressing both the core tenets of the world’s major religions and the real differences among them. Complete with a dictionary of the key beliefs, characters, and stories of Christianity, Islam, and other religions, Religious Literacy reveals what every American needs to know in order to confront the domestic and foreign challenges facing this country today” (Amazon).
Steven Reiss, The 16 Strivings for God: The New Psychology of Religious Experience (Mercer University Press, 2015): “This ground-breaking work will change the way we understand religion. Period. Previous scholars such as Freud, James, Durkheim, and Maslow did not successfully identify the essence of religion as fear of death, mysticism, sacredness, communal bonding, magic, or peak experiences because religion has no single essence. Religion is about the values motivated by the sixteen basic desires of human nature. It has mass appeal because it accommodates the values of people with opposite personality traits. This is the first comprehensive theory of the psychology of religion that can be scientifically verified. Reiss proposes a peer-reviewed, original theory of mysticism, asceticism, spiritual personality, and hundreds of religious beliefs and practices. Written for serious readers and anyone interested in psychology and religion (especially their own), this eminently readable book will revolutionize the psychology of religious experience by exploring the motivations and characteristics of the individual in their religious life” (Amazon).
Alfred R. Mele, Free: Why Science Hasn’t Disproved Free Will (Oxford University Press, 2014): “Does free will exist? The question has fueled heated debates spanning from philosophy to psychology and religion. The answer has major implications, and the stakes are high. To put it in the simple terms that have come to dominate these debates, if we are free to make our own decisions, we are accountable for what we do, and if we aren’t free, we’re off the hook. There are neuroscientists who claim that our decisions are made unconsciously and are therefore outside of our control and social psychologists who argue that myriad imperceptible factors influence even our minor decisions to the extent that there is no room for free will. According to philosopher Alfred R. Mele, what they point to as hard and fast evidence that free will cannot exist actually leaves much room for doubt. If we look more closely at the major experiments that free will deniers cite, we can see large gaps where the light of possibility shines through. In Free: Why Science Hasn’t Disproved Free Will, Mele lays out his opponents’ experiments simply and clearly, and proceeds to debunk their supposed findings, one by one, explaining how the experiments don’t provide the solid evidence for which they have been touted. There is powerful evidence that conscious decisions play an important role in our lives, and knowledge about situational influences can allow people to respond to those influences rationally rather than with blind obedience. Mele also explores the meaning and ramifications of free will. What, exactly, does it mean to have free will — is it a state of our soul, or an undefinable openness to alternative decisions? Is it something natural and practical that is closely tied to moral responsibility? Since evidence suggests that denying the existence of free will actually encourages bad behavior, we have a duty to give it a fair chance” (Amazon).
Brink Lindsey, Human Capitalism: How Economic Growth Has Made Us Smarter–and More Unequal (Princeton University Press, 2013): “What explains the growing class divide between the well educated and everybody else? Noted author Brink Lindsey, a senior scholar at the Kauffman Foundation, argues that it’s because economic expansion is creating an increasingly complex world in which only a minority with the right knowledge and skills–the right “human capital”–reap the majority of the economic rewards. The complexity of today’s economy is not only making these lucky elites richer–it is also making them smarter. As the economy makes ever-greater demands on their minds, the successful are making ever-greater investments in education and other ways of increasing their human capital, expanding their cognitive skills and leading them to still higher levels of success. But unfortunately, even as the rich are securely riding this virtuous cycle, the poor are trapped in a vicious one, as a lack of human capital leads to family breakdown, unemployment, dysfunction, and further erosion of knowledge and skills. In this brief, clear, and forthright eBook original, Lindsey shows how economic growth is creating unprecedented levels of human capital–and suggests how the huge benefits of this development can be spread beyond those who are already enjoying its rewards” (Amazon).
Gretchen Rubin, Better Than Before: What I Learned About Making and Breaking Habits–to Sleep More, Quit Sugar, Procrastinate Less, and Generally Build a Happier Life (Broadway Books, 2015): “How do we change? Gretchen Rubin’s answer: through habits. Habits are the invisible architecture of everyday life. It takes work to make a habit, but once that habit is set, we can harness the energy of habits to build happier, stronger, more productive lives. So if habits are a key to change, then what we really need to know is: How do we change our habits? Better than Before answers that question. It presents a practical, concrete framework to allow readers to understand their habits—and to change them for good. Infused with Rubin’s compelling voice, rigorous research, and easy humor, and packed with vivid stories of lives transformed, Better than Before explains the (sometimes counter-intuitive) core principles of habit formation. Along the way, Rubin uses herself as guinea pig, tests her theories on family and friends, and answers readers’ most pressing questions—oddly, questions that other writers and researchers tend to ignore:
• Why do I find it tough to create a habit for something I love to do?
• Sometimes I can change a habit overnight, and sometimes I can’t change a habit, no matter how hard I try. Why?
• How quickly can I change a habit?
• What can I do to make sure I stick to a new habit?
• How can I help someone else change a habit?
• Why can I keep habits that benefit others, but can’t make habits that are just for me?
Whether readers want to get more sleep, stop checking their devices, maintain a healthy weight, or finish an important project, habits make change possible. Reading just a few chapters of Better Than Before will make readers eager to start work on their own habits—even before they’ve finished the book” (Amazon).
Drew Magary, The Hike: A Novel (Penguin, 2016): “When Ben, a suburban family man, takes a business trip to rural Pennsylvania, he decides to spend the afternoon before his dinner meeting on a short hike. Once he sets out into the woods behind his hotel, he quickly comes to realize that the path he has chosen cannot be given up easily. With no choice but to move forward, Ben finds himself falling deeper and deeper into a world of man-eating giants, bizarre demons, and colossal insects. On a quest of epic, life-or-death proportions, Ben finds help comes in some of the most unexpected forms, including a profane crustacean and a variety of magical objects, tools, and potions. Desperate to return to his family, Ben is determined to track down the “Producer,” the creator of the world in which he is being held hostage and the only one who can free him from the path. At once bitingly funny and emotionally absorbing, Magary’s novel is a remarkably unique addition to the contemporary fantasy genre, one that draws as easily from the world of classic folk tales as it does from video games. In The Hike, Magary takes readers on a daring odyssey away from our day-to-day grind and transports them into an enthralling world propelled by heart, imagination, and survival” (Amazon).
I recently came across a 2012 paper by philosopher Michael Huemer titled “In Praise of Passivity.” Given our current political climate, I found the paper to be rather wise:
When it comes to political issues, we usually should not fight for what we believe in. Fighting for something, as I understand the term, involves fighting against someone. If one’s goal faces no (human) opposition, then one might be described as working for a cause (for instance, working to reduce tuberculosis, working to feed the poor) but not fighting for it. Thus, one normally fights for a cause only when what one is promoting is controversial. And most of the time, those who promote controversial causes do not actually know whether what they are promoting is correct, however much they may think they know…[T]hey are fighting in order to have the experience of fighting for a noble cause, rather than truly seeking the ideals they believe themselves to be seeking.
Fighting for a cause has significant costs. Typically, one expends a great deal of time and energy, while simultaneously imposing costs on others, particularly those who oppose one’s own political position. This time and energy is very likely to be wasted, since neither side knows the answer to the issue over which they contend. In many cases, the effort is expended in bringing about a policy that turns out to be harmful or unjust. It would be better to spend one’s time and energy on aims that one knows to be good.
Thus, suppose you are deciding between donating time or money to Moveon.org (a left-wing political advocacy group) and donating time or money to the Against Malaria Foundation (a charity that fights malaria in the developing world). For those concerned about human welfare, the choice should be clear. Donations to Moveon.org may or may not affect public policy, and if they do, the effect may be either good or bad–that is a matter for debate. But donations to Against Malaria definitely save lives. No one disputes that.
There are exceptions to the rule that one should not fight for causes. Sometimes, people find it necessary to fight for a cause, despite that the cause is obviously and uncontroversially good–as in the case of fighting to end human rights violations in a dictatorial regime. In this case, one’s opponents are simply corrupt or evil. Occasionally, a person knows some cause to be correct, even though it is controversial among the general public. This may occur because the individual possesses expertise that the public lacks, and the public has chosen to ignore the expert consensus. But these are a minority of the cases. Most individuals fighting for causes do not in fact know what they are doing.
Popular wisdom often praises those who get involved in politics, who vote in democratic elections, fight for a cause they believe in, and try to make the world a better place. We tend to assume that such individuals are moved by high ideals and that, when they change the world, it is usually for the better.
The clear evidence of human ignorance and irrationality in the political arena poses a serious challenge to the popular wisdom. Lacking awareness of basic facts of their political systems, to say nothing of the more sophisticated knowledge that would be needed to reliably resolve controversial political issues, most citizens can do no more than guess when they enter the voting booth. Far from being a civic duty, the attempt to influence public policy through such arbitrary guesses is unjust and socially irresponsible. Nor have we any good reason to think political activists or political leaders to be any more reliable in arriving at correct positions on controversial issues; those who are most politically active are often the most ideologically biased, and may therefore be even less reliable than the average person at identifying political truths. In most cases, therefore, political activists and leaders act irresponsibly and unjustly when they attempt to impose their solutions to social problems on the rest of society.
…Political leaders, voters, and activists are well-advised to follow the dictum, often applied to medicine, to “first, do no harm.” A plausible rule of thumb, to guard us against doing harm as a result of overconfident ideological beliefs, is that one should not forcibly impose requirements or restrictions on others unless the value of those requirements or restrictions is essentially uncontroversial among the community of experts in conditions of free and open debate. Of course, even an expert consensus may be wrong, but this rule of thumb may be the best that such fallible beings as ourselves can devise.
So, the next time you get the itch to raise awareness about some controversial political issue, Huemer suggests…
I have always found the ocean to be frightening and incredibly alien. The temperature and lack of oxygen in space are certainly scary, but add creatures that are weird and often predatory to the mix? No thank you. But this makes Godfrey-Smith’s exploration all the more absorbing. He weaves together philosophy, science, and personal anecdotes (he’s an avid scuba diver) in a way that causes the reader to reflect on the strangeness of life and especially the oddity of consciousness. He explains,
Cephalopods are an island of mental complexity in the sea of invertebrate animals. Because our most recent common ancestor was so simple and lies so far back, cephalopods are an independent experiment in the evolution of large brains and complex behavior. If we can make contact with cephalopods as sentient beings, it is not because of a shared history, not because of kinship, but because evolution built minds twice over. This is probably the closest we will come to meeting an intelligent alien (pg. 9).
Yet, the neurons of an octopus operate differently than those of vertebrates, spanning the creature’s entire body:
“Smart” is a contentious term to use, so let’s begin cautiously. First, these animals evolved large nervous systems, including large brains…A common octopus…has about 500 million neurons in its body…Humans have many more–something like 100 billion–but the octopus is in the same range as various smaller mammals, close to the range of dogs, and cephalopods have much larger nervous systems than all other invertebrates…When biologists look at a bird, a mammal, even a fish, they are able to map many parts of one animal’s brain onto another’s. Vertebrate brains all have a common architecture. When vertebrate brains are compared to octopus brains, all bets–or rather, all mappings–are off. There is no part-by-part correspondence between the parts of their brains and ours. Indeed, octopuses have not even collected the majority of their neurons inside their brains; most of the neurons are found in their arms (pg. 50-51).
And that’s just getting started. These scientific and philosophical reflections go back to some of the deepest questions that have been with humanity for thousands of years:
What is it to be alive?
What is it to be?
What is it to be conscious?
While I would have preferred a little more philosophy (even some speculation), the book is nonetheless an eye-opening read. You can see Godfrey-Smith speaking on the subject at Google below.
I read an interesting book called The Swerve: How the World Became Modern last week. It won a Pulitzer and National Book Award, but I wasn’t that impressed. There were some really interesting points, however, and a couple of them reinforced this lesson that I feel like I keep learning again and again and again but never fully internalize: the world isn’t the way you think it is. Let me give you two examples.
First, the book introduced me to Thomas Harriot. Who’s he? Well, you’ve never heard of him, but in a nutshell he came up with all of the ideas that Galileo and others are credited with before they did but–since he didn’t want to get vilified–he kept his ideas to himself. Here’s the passage from the book describing him:
Thomas Harriot…constructed the largest telescope in England, observed sunspots, sketched the lunar surface, observed the satellites of planets, proposed that planets moved not in perfect circles but in elliptical orbits, worked on mathematical cartography, discovered the sine law of refraction, and achieved major breakthroughs in algebra. Many of these discoveries anticipated ones for which Galileo, Descartes, and others became famous. But Harriot isn’t credited with any of them. They were found only recently in the mass of unpublished papers he left at his death. Among those papers was a careful list that Harriot, an atomist, kept of the attacks upon him as a purported atheist. He knew that the attacks would only intensify if he published any of his findings, and he preferred life to fame. Who can blame him?
I know this isn’t new, but it just reinforces this notion I have that if we ever got access to a giant library in the sky where we could see who came up with what when, we’d find that the list of famous people credited with major discoveries and the list of people who actually thought them up first would be almost entirely distinct. But it’s not as simple as just lazily saying, “everything’s been thought of before.” As far as I can tell there really are a few singular geniuses–Newton and Einstein come to mind–who made breakthroughs that are unambiguously their own. So there is such a thing as being the first person to discover something. It’s just that the record we have is really, really inaccurate.
Another example was the long, long list of ideas from Epicureanism that show modernity is a hoax. I talked about this in my review, and here’s what I said:
I was also utterly shocked–once again–at how many of the core tenets of modernity from evolution by natural selection to materialism are actually retreads on philosophy that’s thousands of years old. I don’t know if they still teach this way, but when I was in school we learned about progress. In order to make the progress narrative stick, they had to go out of their way to ridicule caricatures of Greek thought that–without the ridicule and the caricature–would be so similar to modern thought that the progress narrative would go out the window. So, while we believe in atoms today, of course that’s much different than the atomism of Democritus, right? Well, yes and then again no.
I transcribed a lot of the list of core principles from Epicureanism (in The Swerve) today, and on top of evolution by natural selection and materialism, we’ve got all the core tenets of New Atheism (e.g. “The universe has no creator or designer,” “The soul dies,” “All organized religions are superstitious delusions,” and “Religions are invariably cruel.”) and many more basic scientific tenets, including the idea that there is an underlying set of physical laws that governs the interactions of atoms to generate all material phenomena.
I think some of this is overblown. My biggest complaint about the book is that it’s too partisan in favor of New Atheism, and so it’s easy to suspect that Stephen Greenblatt read his own ideology back on top of the ancient Epicureans (intentionally or not). I completely lack the training to have a strong opinion on that. But it seems abundantly clear that–if not a carbon copy of New Atheism–quite a lot of the raw material for cutting-edge pop philosophy is literally thousands and thousands of years old. Which, again, is not the message that I got in school.
So–like I said–nothing is the way you think it is. The more you read and learn, the more you realize just how fragile and provisional all your beliefs truly are.
These readings have made me more interested in ways of knowing and–as the academic publishers above indicate–in our institutions of knowledge. What do those within these institutions (and thus those typically generating new knowledge) think and believe?
According to sociologist Elaine Howard Ecklund’s study, 47% of elite scientists in the U.S. have a religious tradition, while 34% of American scientists profess atheism, 30% profess agnosticism, and 36% profess at least some form of belief in a “higher power” (God or otherwise). Furthermore, she explains, “Nearly 60 percent of scientists I interviewed displayed a spirituality that scholars might call ‘thin.’”
Philosopher Helen De Cruz summarizes the sociological data on spirituality within academia:
Atheism and agnosticism are widespread among academics, especially among those working in elite institutions. A survey among National Academy of Sciences members (all senior academics, overwhelmingly from elite faculties) found that the majority disbelieved in God’s existence (72.2%), with 20.8% being agnostic, and only 7% theists (Larson and Witham 1998). Ecklund and Scheitle (2007) analyzed responses from scientists (working in the social and natural sciences) from 21 elite universities in the US. About 31.2% of their participants self-identified as atheists and a further 31% as agnostics. The remaining number believed in a higher power (7%), sometimes believed in God (5.4%), believed in God with some doubts (15.5%), or believed in God without any doubts (9.7%). In contrast to the general population, the older scientists in this sample did not show higher religiosity—in fact, they were more likely to say that they did not believe in God. On the other hand, Gross and Simmons (2009) examined a more heterogeneous sample of scientists from American colleges, including community colleges, elite doctoral-granting institutions, non-elite four-year state schools, and small liberal arts colleges. They found that the majority of university professors (full-time tenured or tenure-track faculty) had some theistic beliefs, believing either in God (34.9%), in God with some doubts (16.6%), in God some of the time (4.3%), or in a higher power (19.2%). Belief in God was influenced both by type of institution (lower theistic belief in more prestigious schools) and by discipline (lower theistic belief in the physical and biological sciences compared to the social sciences and humanities).
These latter findings indicate that academics are more religiously diverse than has been popularly assumed and that the majority are not opposed to religion. Even so, in the US the percentage of atheists and agnostics in academia is higher than in the general population, a discrepancy that requires an explanation. One reason might be a bias against theists in academia. For example, when sociologists were surveyed whether they would hire someone if they knew the candidate was an evangelical Christian, 39.1% said they would be less likely to hire that candidate—there were similar results with other religious groups, such as Mormons or Muslims (Yancey 2012). Another reason might be that theists internalize prevalent negative societal stereotypes, which leads them to underperform in scientific tasks and lose interest in pursuing a scientific career. Kimberly Rios et al. (2015) found that non-religious participants believe that theists, especially Christians, are less competent in and less trustful of science. When this stereotype was made salient, Christian participants performed worse in logical reasoning tasks (which were misleadingly presented as “scientific reasoning tests”) than when the stereotype was not mentioned.
It is unclear whether religious and scientific thinking are cognitively incompatible. Some studies suggest that religion draws more upon an intuitive style of thinking, distinct from the analytic reasoning style that characterizes science (Gervais and Norenzayan 2012). On the other hand, the acceptance of theological and scientific views both rely on a trust in testimony, and cognitive scientists have found similarities between the way children and adults understand testimony to invisible entities in religious and scientific domains (Harris et al. 2006). Moreover, theologians such as the Church Fathers and Scholastics were deeply analytic in their writings, indicating that the association between intuitive and religious thinking might be a recent western bias. More research is needed to examine whether religious and scientific thinking styles are inherently in tension.
How about philosophers? A 2014 study (Bourget and Chalmers’ PhilPapers survey of professional philosophers) came up with the following numbers (from pgs. 14-16 of the ungated version). A few stand out to me:
1. A priori knowledge: yes 71.1%; no 18.4%; other 10.5%.
2. Abstract objects: Platonism 39.3%; nominalism 37.7%; other 23.0%.
3. Aesthetic value: objective 41.0%; subjective 34.5%; other 24.5%.
4. Analytic-synthetic distinction: yes 64.9%; no 27.1%; other 8.1%.
5. Epistemic justification: externalism 42.7%; internalism 26.4%; other 30.8%.
6. External world: non-skeptical realism 81.6%; skepticism 4.8%; idealism 4.3%; other
7. Free will: compatibilism 59.1%; libertarianism 13.7%; no free will 12.2%; other
8. God: atheism 72.8%; theism 14.6%; other 12.6%.
9. Knowledge claims: contextualism 40.1%; invariantism 31.1%; relativism 2.9%;
10. Knowledge: empiricism 35.0%; rationalism 27.8%; other 37.2%.
11. Laws of nature: non-Humean 57.1%; Humean 24.7%; other 18.2%.
12. Logic: classical 51.6%; non-classical 15.4%; other 33.1%.
13. Mental content: externalism 51.1%; internalism 20.0%; other 28.9%.
14. Meta-ethics: moral realism 56.4%; moral anti-realism 27.7%; other 15.9%.
15. Metaphilosophy: naturalism 49.8%; non-naturalism 25.9%; other 24.3%.
16. Mind: physicalism 56.5%; non-physicalism 27.1%; other 16.4%.
17. Moral judgment: cognitivism 65.7%; non-cognitivism 17.0%; other 17.3%.
18. Moral motivation: internalism 34.9%; externalism 29.8%; other 35.3%.
19. Newcomb’s problem: two boxes 31.4%; one box 21.3%; other 47.4%.
20. Normative ethics: deontology 25.9%; consequentialism 23.6%; virtue ethics 18.2%;
21. Perceptual experience: representationalism 31.5%; qualia theory 12.2%; disjunctivism 11.0%; sense-datum theory 3.1%; other 42.2%.
22. Personal identity: psychological view 33.6%; biological view 16.9%; further-fact view 12.2%; other 37.3%.
23. Politics: egalitarianism 34.8%; communitarianism 14.3%; libertarianism 9.9%;
24. Proper names: Millian 34.5%; Fregean 28.7%; other 36.8%.
25. Science: scientific realism 75.1%; scientific anti-realism 11.6%; other 13.3%.
26. Teletransporter: survival 36.2%; death 31.1%; other 32.7%.
27. Time: B-theory 26.3%; A-theory 15.5%; other 58.2%.
28. Trolley problem: switch 68.2%; don’t switch 7.6%; other 24.2%.
29. Truth: correspondence 50.8%; deflationary 24.8%; epistemic 6.9%; other 17.5%.
30. Zombies: conceivable but not metaphysically possible 35.6%; metaphysically possible 23.3%; inconceivable 16.0%; other 25.1%.
What’s interesting is that while nearly 73% of philosophers are atheists, only about half are naturalists or physicalists when it comes to the mind. Furthermore, nearly 40% would consider themselves Platonists, indicating the possibility of a Platonic atheism. Yet, when you consider philosophers of religion, the numbers reverse: in the same survey, a large majority of philosophers of religion accepted or leaned toward theism.
As mentioned before, the newest issue of Dialogue was just released. The first article of the new issue is Robert Rees’ “Reimagining the Restoration: Why Liberalism is the Ultimate Flowering of Mormonism.” Rees attempts to redeem the word “liberalism” from its current negative connotations in American society, reviewing its meaning from the Middle Ages to the Enlightenment. He further connects it to Joseph Smith’s statement that God “is more liberal in His views, and boundless in His mercies and blessings, than we are ready to believe or receive” (pg. 4). Rees goes on to emphasize liberal commitments to earth stewardship, gender equality, the poor, peace, education, etc.
The article reminded me of a recent essay by economic historian Deirdre McCloskey titled “Manifesto for a New American Liberalism, or How to Be a Humane Libertarian.” As McCloskey notes, “Outside the United States libertarianism is still called plain ‘liberalism,’ as in the usage of the president of France, Emmanuel Macron, with no ‘neo-’ about it” (pg. 1). “Liberals 1.0 don’t like violence,” she continues. “They are friends of the voluntary market order, as against the policy-heavy feudal order or bureaucratic order or military-industrial order. They are, as Hayek declared, ‘the party of life, the party that favors free growth and spontaneous evolution,’ against the various parties of left and right which wish ‘to impose [by violence] upon the world a preconceived rational pattern.’” In McCloskey’s view, “humane liberals are very far from being against poor people. Nor are they ungenerous, or lacking in pity. Nor are they strictly pacifist, willing to surrender in the face of an invasion. But they believe that in achieving such goods as charity and security the polity should not turn carelessly to violence, at home or abroad, whether for leftish or rightish purposes, whether to help the poor or to police the world. We should depend chiefly on voluntary agreements, such as exchange-tested betterment, or treaties, or civil conversation, or the gift of grace, or a majority voting constrained by civil rights for the minority” (pg. 2). She explains,
Such a humane liberalism has for two centuries worked on the whole astonishingly well. For one thing it produced increasingly free people, which (we moderns think) is a great good in itself. Slaves, women, colonial people, gays, handicapped, and above all the poor, from which almost all of us come, have been increasingly allowed since 1776 to pursue their own projects consistent with not using physical violence to interfere with other people’s projects. As someone put it: In the eighteenth century kings had rights and women had none. Now it’s the other way around. And—quite surprisingly—the new liberalism, by inspiriting for the first time in history a great mass of ordinary people, produced a massive explosion of betterments.
…The Enrichment was, I say again in case you missed it, three thousand percent per person, near enough, utterly unprecedented. The goods and services available to even the poorest rose by that astounding figure, in a world in which mere doublings, increases of merely 100 percent, had been rare and temporary, as in the glory of fifth-century Greece or the vigor of the Song Dynasty. In every earlier case, the little industrial revolutions had reverted eventually to a real income per head in today’s prices of about $3 a day, which was the human condition since the caves. Consider trying to live on $3 a day, as many people worldwide still do (though during the past forty years their number has fallen like a stone). After 1800 there was no reversion. On the contrary, in every one of the forty or so recessions since 1800 the real income per head after a recession exceeded what it had been at the previous peak. Up, up, up. Even including the $3-a-day people in Chad and Zimbabwe, world real income per head has increased during the past two centuries by a factor of ten, and by a factor of thirty as I said, in the countries that were lucky, and liberally wise. Hong Kong. South Korea. Botswana. The material and cultural enrichment bids fair to spread now to the world.
And the enrichment has been equalizing. Nowadays in places like Japan and the United States the poorest make more, corrected for inflation, than did the top quarter or so two centuries ago (pgs. 4-5).
[W]e seek to draw useful parallels between Hasidic Judaism and Mormonism by presenting the former’s concept of “worship through corporeality” as a theologically rich source for understanding and describing Mormonism’s materialist merging of heaven and earth, sacred and mundane. If, as one scholar has stated, “an examination of other revival movements and their characteristics will also provide a new background against which that which is distinctive in Hasidism will stand out in clear relief,” the same holds true for the study of early Mormonism. In this paper, we will outline Hasidism’s concept of “worship through corporeality” and its roots in Enochian folklore. We will also briefly touch on the Mussar movement’s connection to these Enoch stories and how it shaped their ethics and worldview. Finally, we will explore multiple sources throughout early Mormonism that similarly demonstrate an overlap of the spiritual and temporal in the minds of many Saints, leading them to view their labors as sacred tasks in the building of Zion (pgs. 59-60).
It’s a relief to finally see this in print. The seeds of it were sown with a comment by Allen on a 2013 post at The Slow Hunch. The idea eventually became a two-part blog post at Worlds Without End, which evolved into a presentation at the 2014 conference for the Mormon Transhumanist Association and later the 2015 Faith & Knowledge conference. It sat at Dialogue for a long time due to management changes. We withdrew it and submitted it to BYU Studies Quarterly, which deemed it “too specialized and not right for a large enough segment of our target audience.” So we resubmitted a more focused version to Dialogue, with the enthusiastic support of the editor.