Some Sad Puppy Data Analysis

[Cover image from Flickr user Bill Lile.]

The list of fairly big-name outlets covering the 2015 Hugos / Sad Puppies controversy has gotten pretty long[ref]Slate, Salon, Entertainment Weekly, the Guardian, the Telegraph, the Daily Dot, io9, along with Breitbart (twice) and the National Review[/ref], but here’s how you know this is Officially a Big Deal: George R. R. Martin has been in a semi-polite back-and-forth blog argument with Larry Correia for days. That’s thousands and thousands of words that Mr. Martin has written about this that he could have spent, you know, finishing up the next Game of Thrones book. I think we can officially declare at this point that we have a national crisis.

Martin’s blog posts are a good place to start because his main point thus far has been to rebut the central claim that animates Sad Puppies. To wit: they claim that in recent years the Hugo awards have become increasingly dominated by an insular clique that puts ideological conformity and social back-scratching ahead of merit. While the more shrill voices within the targeted insular clique have responded that the Sad Puppies are a bunch of racist, sexist bigots, Martin’s more moderate reply has been: Where’s the beef? Show me some evidence of this cliquish behavior. Larry Correia has responded here.

As these heavyweights have been trading expert opinion, personal stories, and plain old anecdotes, it just so happens that I spent a good portion of the weekend digging into the data to see if I could find any objective evidence for or against the Sad Puppy assertions. It’s been an illuminating experience for me, and I want to share some of what I learned. Let me get in a major caveat up front, however. There’s some interesting data in this blog post, but not enough to conclusively prove the case for or against Sad Puppies. I’m running with it anyway because I hope it will help inform the debate, but this is a blog post, not a submission to Nature. Calibrate your expectations accordingly.

One additional note: unless otherwise stated, the Hugo categories that I looked into were the literary awards for best novel, novella, novelette, and short story. There are many more Hugo categories (for film, graphic novel, fan writer, etc.) but the literary awards are the most prestigious and also have the most reliable data (since a lot of the other categories come and go).

Finding 1: Sad Puppies vs. Rabid Puppies

I have been following Sad Puppies off and on since Sad Puppies 2. SP2 was led by Larry Correia, and his basic goal was to prove that if you got an openly conservative author on the Hugo ballot, then the reigning clique would be enraged. For the most part, he proved his case, although the issue was muddied somewhat by the inclusion of Vox Day on the SP2 slate. Vox Day tends to make everyone enraged (as far as I can tell), and so his presence distorted the results somewhat.

This year Brad Torgersen took over for Sad Puppies 3 with a different agenda. Instead of simply provoking the powers that be, his aim was to break the reigning clique’s dominance over the awards by appealing to the middle. For that reason, he went out of his way to include diverse writers on the SP3 slate, including not only conservatives and libertarians, but also liberals, communists, and apolitical writers. Even many leading critics of the Sad Puppies (for instance John Scalzi[ref]”I’m feeling increasingly sorry for the nominees on the Hugo award ballot who showed up on either Puppy slate but who aren’t card-carrying Puppies themselves, since they are having to deal with an immense amount of splashback not of their own making.” from Human Shields, Cabals and Poster Boys[/ref] and Teresa Nielsen Hayden[ref]”Indications are that a fair number of them [nominees on the Sad Puppy slate who got onto the ballot], maybe a majority, are respectable members of the SF community who, for one reason or another, are approved of by the SPs while not being ideologically Sad Puppies themselves.” from this comment on her post Distant thunder, and the smell of ozone.[/ref]) concede that several of the individuals on the Sad Puppies slate were not politically aligned with Sad Puppies. That fact was my favorite part about Sad Puppies: the attempt to reach outside their ideological borders demonstrated an authentic desire to depoliticize the Hugos instead of just claiming them for a new political in-group.

What I didn’t know until the finalists were announced just this month is that the notorious Vox Day had created his own slate: Rabid Puppies. Rather than angling toward the middle like Torgersen, Day took a combative and hostile approach that kept Rabid Puppies distinctly on the fringe. To give you a sense of the level of animosity here, several folks agreed to be on the Sad Puppies slate only on the condition that Vox Day was not. Despite this animosity and the very different tones, when it came time to pick a slate, Vox Day basically copied the SP3 suggestions and then added a few additional writers (mostly from his own publishing house) to get a full slate.[ref]There are 5 finalists per category. SP3 didn’t propose a full slate: they had fewer than 5 nominees for several categories. RP ran a full slate.[/ref]

Because Torgersen and Correia are more prominent, when I did learn about RP I assumed it was a minor act riding on the coattails of Sad Puppies 3 and little more. For this reason, I was frustrated when the critics of Sad Puppies tended to conflate Torgersen’s moderate-targeted SP3 with Vox Day’s fringe-based RP. But then I started looking at the numbers, and they tell a different story.

The Sad Puppies 3 campaign managed to get 14 of their 17 recommended nominees through to become finalists, for a success rate of 82.4%. Meanwhile, the Rabid Puppies managed to get 18 or 19 of their 20 recommendations through, for a success rate of 90-95%.[ref]Larry Correia made it onto the ballot but turned his nomination down. If the person who took his spot came from the non-RP authors, it means that the RP slate was initially 95% successful. If it was taken by another RP author who didn’t make the first cut, then their success rate was 90%.[/ref]
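Those rates are simple arithmetic, but for transparency here is a minimal sketch of the computation in Python. The slate counts are the ones cited above; the RP figure is a range because of the ambiguity over Correia’s declined spot:

```python
# Slate success rates for the 2015 Hugo literary-category finalists.
sp3_recommended, sp3_finalists = 17, 14
rp_recommended = 20
rp_finalists_low, rp_finalists_high = 18, 19  # range: see the footnote on Correia

print(f"SP3 success rate: {sp3_finalists / sp3_recommended:.1%}")  # 82.4%
print(f"RP success rate: {rp_finalists_low / rp_recommended:.0%}"
      f" to {rp_finalists_high / rp_recommended:.0%}")             # 90% to 95%
```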

There was, moreover, one category where the SP3 and RP slates conflicted: the Short Story category. Here’s how those results ended up:

Author         | Source | Result
---------------|--------|--------
Annie Bellet   | Both   | Success
Kary English   | Both   | Success
Steve Rzasa    | RP     | Success
John C. Wright | RP     | Success
Lou Antonelli  | RP     | Success
Megan Grey     | SP3    | Failure
Steve Diamond  | SP3    | Failure

In other words, when SP3 and RP actually went head-to-head, Rabid Puppies beat SP3. It appears as though, in terms of raw voting power, the Rabid Puppies voters outgunned the Sad Puppies 3 voters. I put together a simple Venn diagram that hammers that point home by showing where each of the 20 Hugo finalists came from:

[Figure: Venn diagram showing which slate(s) each of the 20 Hugo finalists came from.]

If you want to know where the finalists came from, Rabid Puppies can’t possibly be ignored. For someone like me who really supported the moderate, inclusive aims of Sad Puppies 3, this is a sobering realization.

Finding 2: Gender in Sci-Fi

I put together a table of all the Hugo nominees and winners with their gender. I know that gender isn’t the only diversity issue but it’s the easiest one to find data on. Here’s what I found:

[Chart: Percent of Hugo nominees who are male, 1960-2015.]

It is easy to see how a social justice advocate would interpret this chart. In the 1960s the patriarchy reigned supreme, and often 100% of Hugo nominees were male. As the sci-fi community grew more mature and progressive, however, the patriarchy’s grip weakened. More and more female nominees entered the scene. But now SP3 and RP have rolled back all that progress, and as a result the 2015 finalists are right back at the status quo: the dotted line representing about 80% male nominees on average over the entire 1960 – 2015 period. It’s a simple story: SP3 and RP are agents of the patriarchy sent to re-establish the status quo. If you want to know why so many social justice advocates are very, very angry about SP3 and RP, this is why.

But there are some serious complications to this narrative. First, the diversity of the early 2010s was not unprecedented. There was no long, slow, continuous growth in diversity: there were a lot of female nominees in the early 1990s, a fact that gets omitted from articles that act as though sci-fi had achieved some milestone of diversity for the first time. It’s true that the 2010s were the best yet, but the most important symbolic line was crossed way back in 1992, when 52% (more than half) of the nominees were women. Second, the rebound towards the overall average started last year, not with the 2015 finalists. In 2013 there was an all-time record percentage of female finalists (61%), but by 2014 the numbers had flipped and 62% of the finalists were male. Although Sad Puppies 2 did exist in 2014, it had very little impact, and so the rebound towards the status quo cannot reasonably be blamed entirely on SP3 / RP.
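For anyone who wants to reproduce the chart, the underlying computation is trivial. Here is a minimal sketch, assuming a list of (year, gender) records for every nominee; the sample records below are placeholders, not the real data set:

```python
from collections import defaultdict

# Placeholder (year, gender) records; the real data set covers every
# literary-category nominee from 1960-2015.
nominees = [(1992, "F"), (1992, "M"), (2013, "F"), (2014, "M"), (2014, "M")]

by_year = defaultdict(list)
for year, gender in nominees:
    by_year[year].append(gender)

# Percent of nominees who are male, per year.
pct_male = {y: 100 * g.count("M") / len(g) for y, g in sorted(by_year.items())}

# The dotted line on the chart: the average over the whole period.
overall_avg = sum(pct_male.values()) / len(pct_male)
print(pct_male, overall_avg)
```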

A third complication: neither SP3 nor RP was 100% male (as has been widely and erroneously reported[ref]Most notably by EW, although you really need to read the original version (prior to threats of libel and numerous corrections and edits) with the original headline of Hugo Award nominations fall victim to misogynistic, racist voting campaign to get the full effect.[/ref]). Those little green and red lines at the very end of the chart show what the gender ratio would have looked like if SP3 had won completely (82.4%, the green line) or if RP had won completely (90%, the red line).

But the fourth complication is by far the most important one. Back in 2013 a Tor UK editor actually divulged the gender breakdown of the submissions they receive by genre.

[Chart: Gender breakdown of Tor UK submissions, by genre.]

So, over the history of the Hugo awards from 1960 – 2015, 79% of the nominees have been male. In 2013, 78% of the folks submitting sci-fi to Tor UK were male.

There were a lot of very angry reactions to that post. For example, “I find this article disappointing, ignorant, and damaging,” begins one response, which I found via a recent Damien Walter blog post. It’s hard to see why an article that basically just presented factual information would be reviled, especially when the article concludes:

As a female editor it would be great to support female authors and get more of them on the list. BUT they will be judged exactly the same way as every script that comes into our in-boxes. Not by gender, but how well they write, how engaging the story is, how well-rounded the characters are, how much we love it.

This is an entirely moderate, reasonable position to take. Science fiction has been called “the literature of ideas” by sci-fi legend Pamela Sargent. And in a genre where ideas are paramount, so is diversity. Diversity is not an intrinsically liberal value. After all, conservatives are the ones who tend to believe in gender essentialism, which would only underscore the importance of having female viewpoints: if gender essentialism holds, then female viewpoints are inherently different from male viewpoints in at least some regards, and so you get more perspectives by including women as well as men. Thus: conservatives can be just as invested in welcoming women into the genre as writers and as fans.

But if you have a situation where men and women are equally talented writers, where men outnumber women 4 to 1, and where the Hugo awards do a good job of reflecting talent, then 80% of the awards going to men is not evidence that the awards are biased or oppressive. It is evidence that they are fair. In that scenario, 80% male nominees is not an outrage. It’s the expected outcome.
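To make the “expected outcome” claim concrete, here is a toy model as a hedged sketch: if the writer pool really is 80% male (an assumption borrowed loosely from the Tor UK figures) and nominations are gender-blind, then the number of male nominees in a 20-slot year follows a binomial distribution:

```python
from math import comb

p_male, n = 0.8, 20  # assumed pool split; roughly 20 literary nominees per year

def prob_k_male(k: int) -> float:
    """Probability of exactly k male nominees under a gender-blind draw."""
    return comb(n, k) * p_male**k * (1 - p_male)**(n - k)

print(f"Expected male nominees: {p_male * n:.0f} of {n}")
print(f"P(exactly 16 of 20 male): {prob_k_male(16):.1%}")  # ~21.8%
print(f"P(all 20 male):           {prob_k_male(20):.1%}")  # ~1.2%
```

On this toy model, 80% male years are the single most likely outcome, and even an all-male year happens by pure chance about once a century.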

Of course this just raises the next question: why is it that men outnumber women 4:1 in science fiction? For that matter, why do women outnumber men 2:1 in the YA category? Why is it that only the urban fantasy / paranormal romance category is anywhere close to parity? These are fascinating and important questions, and I believe we can only hope to address them in an open-ended conversation. This is my primary concern with social justice advocates. Because they are tied to a certain ideological version of feminism[ref]Christina Hoff Sommers calls it gender feminism as opposed to equity feminism, and Steven Pinker describes it as “an empirical doctrine committed to three claims about human nature. The first is that the differences between men and women have nothing to do with biology but are socially constructed in their entirety. The second is that humans possess a single social motive—power—and that social life can be understood only in terms of how it is exercised. The third is that human interactions arise not from the motives of people dealing with each other as individuals but from the motives of groups dealing with other groups—in this case, the male gender dominating the female gender.”[/ref] that views human society through a Marxist-infused lens, one that emphasizes power struggles between groups and sees gender as socially constructed, they are locked into a paradigm where the mere fact that 80% of sci-fi writers (let alone Hugo nominees) are male is conclusive evidence of patriarchal oppression. From within that paradigm, there’s nothing left to talk about. Anybody who wants to have a discussion (other than to decide which tactics to use to smash the patriarchy) seems like an apologist for male domination. The social justice paradigm is a hammer that makes every single gender difference look like an evil nail.

So the chart isn’t as clear as it first appears. What you take from it depends entirely on your ideological framework. If you’re a social justice advocate, it’s a smoking gun proving conclusively that sci-fi is struggling bitterly to break free from the grip of the patriarchy. If you’re not, it might be evidence of systemic sexism in the sci-fi community that leads to a greater ratio of male writers, or it might be evidence that more men than women like sci-fi. Or both. Or neither. It’s interesting, but it’s not conclusive.

Finding 3: Goodreads Scores vs. Hugo Nominations

[Chart: Average Goodreads scores for Hugo best novel winners (red) and nominees (blue), 1953-2015.]

If the last chart depicted clearly the reasons why social justice warriors are so opposed to SP / RP, this chart depicts clearly the reasons why SP came into being in the first place. What it shows is the average Goodreads rating for the Hugo best novel winners (in red) and nominees (in blue) for every year going back to the first Hugos, awarded in 1953.[ref]Actually, I don’t have the nominees for some of the earliest years, which is why there are red squares but no blue diamonds at the far left end of the chart.[/ref] The most interesting aspect of the chart, from the standpoint of understanding where SP is coming from, is the fairly extreme gap between the scores of the nominees and the winners in the last few years, with the nominees showing much higher scores than the winners. Here it is again, with the data points in question circled:

[Chart: The same Goodreads scores, with the recent winner-nominee gap circled.]

Let me be clear about what I think this shows. It does not show that the last few Hugo awards are flawed or that recent Hugo winners have been undeserving. There is no law written anywhere that says that average Goodreads score is the objective measure of quality. That is not my point. All those data points show is that there has been a significant difference of opinion between the Hugo voters who picked the winners and popular opinion. What’s more, they show that this gap is a relatively recent phenomenon. Go back 10 or 20 years and the winners tend to cluster near the top of the nominees, showing that the Hugo voting process and the Goodreads audience were more or less in tune. But starting a few years ago, a chasm suddenly opens up.
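The chasm is easy to quantify: for each year, compare the winner’s score to the best-scoring losing nominee. A minimal sketch; the 2014 winner (Ancillary Justice, 3.98) and the 4.41 / 4.59 nominees are figures discussed below, while the remaining nominee values are invented placeholders:

```python
# Goodreads scores by Hugo year: the winner plus the losing nominees.
# Only the 2014 winner and the two high nominees are real; the 3.86 and
# 3.74 values are hypothetical placeholders for the other nominees.
years = {2014: {"winner": 3.98, "losers": [4.41, 4.59, 3.86, 3.74]}}

for year, s in sorted(years.items()):
    gap = max(s["losers"]) - s["winner"]
    print(f"{year}: winner {s['winner']:.2f}, "
          f"best losing nominee {max(s['losers']):.2f}, gap {gap:+.2f}")
```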

Of course there have been plenty of years in the past where Goodreads ranked a losing finalist higher than the Hugo winner, but rarely have there been so many in a row, and particularly so many in a row with such wide gaps. To a Sad Puppy proponent, this chart is just as much a smoking gun as the previous one, because it shows that something has changed in just the last few years that has led to a significant divergence between the tastes reflected by the Hugo awards and the tastes of the sci-fi audience at large. Whether you chalk it up to a social clique, political ideology, a secret conspiracy, or just plain old herd mentality, it looks like the Hugo awards and popular taste have parted ways. Which is exactly the central accusation that Correia and Torgersen make when they talk about elitism and insularity.

Just as with the prior chart, however, closer inspection complicates the picture. First, a social justice advocate may very well reply to the chart by saying, “Gee… lots of women get nominated and win and then review scores go down for nominees and winners. Sexism, much?” Turns out that isn’t likely, however, because Goodreads readers tended to rate female authors higher than male authors (at least within the sample of Hugo nominees and winners).

[Chart: Average Goodreads rating of Hugo nominees and winners, by author gender.]

If anything, it suggests the possibility of mild sexism within the WorldCon community, since it could indicate that female writers have to achieve higher popularity in order to get nominated and win. I didn’t run any statistical tests to see if the differences were significant, however, so let’s set that aside for the time being. The point is, blaming the low scores of Hugo winners vs. nominees over the last few years on sexist Goodreads reviewers is a non-starter. It’s also worth pointing out that the winner scores haven’t suddenly gotten lower just over the last few years while the proportion of female nominees has gone up. They’ve actually been in a long-term slump (relative to Goodreads ratings) going back to the early 2000s, with an average of around 3.7 compared to the all-time average of 3.96. Meanwhile, a lot of the losing nominees have been off-the-charts popular, with scores of 4.2 and above. This is bound to lead to some hard feelings and bitterness.

When there are so few data points it pays to start looking at individual instances, and this is where the picture does start to get a little complicated. The most recent winner is Ann Leckie for Ancillary Justice. The rating of that book is 3.98 vs. the books with much higher ratings: Larry Correia’s Warbound (4.41 with 3.6k ratings) and Robert Jordan / Brandon Sanderson’s complete Wheel of Time series (4.59 with just 376 ratings). Wheel of Time is a special case because it was a nominee for an entire series of books. Only the most devoted fans are likely to leave a rating on the entire series, and that’s why there are so few ratings.[ref]Typical Hugo winners have 20,000 – 30,000 ratings.[/ref] It’s probably also why they are so high. A better approach would be to average the individual average ratings of the books in the series, but I haven’t taken the time to do that. In any case, Wheel of Time is suspect as a comparison for that year. That leaves us with Warbound, but it’s a special case, too. Larry Correia drew a lot of fire that year for SP2, and as a result he had no realistic chance of winning no matter how good his book was. Fair or unfair as that might be, it means we can’t really conclude anything by comparing his book with Leckie’s. Take those two out, and Leckie was the highest-rated nominee. With a score of 3.98, her book was also right in line with the long-run average and significantly higher than the short-run average. After digging deeper, it’s really hard to shoehorn the 2014 results into the narrative of divergence between the Hugo winners and the general sci-fi audience.

But there is still a trend worth considering. Going back to 2013 and earlier, a succession of fairly low-rated books won despite stiff competition from much more popular nominees. The 2013 and 2010 winners had some of the lowest reviews of the last half century, came last or second-to-last vs. the nominees for those years, and won out over nominees with significantly higher scores. Again: I am not making a judgment call on those particular books. I’m merely pointing out how wide the gap is.

Another shortcoming of this approach is that I’m only comparing Hugo nominees vs. winners, and the Sad Puppies have been claiming that conservative writers can’t get on the ballot at all, not that they keep losing once they get there. The only way to really evaluate that claim would be to contrast the Hugo nominees and winners on the one hand with high-rated, eligible sci-fi books that never even made it onto the ballot on the other. If most of the highest-rated, eligible books made it onto the ballot in the past but more recently are being ignored, that would be strong evidence in favor of the Sad Puppies’ fundamental grievance. That analysis is possible to do, but gathering the data is trickier. I hope to be able to tackle it in the coming months.
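For what it’s worth, here is the rough shape that future analysis would take, as a sketch. Everything in it is hypothetical; the hard part is assembling the eligibility and ratings data, not the computation:

```python
def ballot_coverage(eligible, ballot, top_k=10):
    """Fraction of a year's top_k highest-rated eligible books that
    actually made the Hugo ballot.

    eligible: list of (title, avg_rating) pairs for the year
    ballot:   set of titles nominated that year
    """
    top = sorted(eligible, key=lambda b: b[1], reverse=True)[:top_k]
    return sum(title in ballot for title, _ in top) / len(top)

# Hypothetical example. If this number were high in the past but has been
# falling in recent years, that would support the Sad Puppies' grievance.
eligible_2014 = [("Book A", 4.5), ("Book B", 4.3), ("Book C", 4.1)]
print(ballot_coverage(eligible_2014, {"Book A"}, top_k=3))  # 0.33...
```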

Closing Thoughts

I still think that Sad Puppies have a legitimate point. Their goal was to get a few new faces out there who otherwise wouldn’t have been considered. I think that’s an admirable goal, and I think that there are some folks on the ballot today who (1) deserve to be there and (2) wouldn’t ever have gotten there without Sad Puppies. And I know that even some of the critics of SP3 agree with that assessment (because they told me so).

The critics of Sad Puppies have a couple of important points too, however. First: concern over gender representation is legitimate. Second: it’s tricky for the Sad Puppies to make their case without appearing to disparage the Hugo winners of the last few years (much as the folks on the SP3 slate are being disparaged even before we know who has won). Combine that uncomfortable implication (even if unwarranted) with the fact that sweeping the ballot pushed a lot of deserving works out of consideration, and it’s justifiable for the critics to be, well, critical.

I hope that Sad Puppies continues, but I hope that they take steps to avoid hogging the whole ballot. They could recommend a lot more or a lot fewer folks per category. If they recommend 10 folks for best short story, for example, it forces possible voters to (1) read more sci-fi and (2) spread their votes around instead of voting en bloc. If they recommend 2 folks for best short story, any bloc voting will be confined to a narrow portion of the ballot. Either alternative is better than sweeping most or all of a ballot.[ref]It’s worth pointing out that I think nobody in SP had any clue that they would be this successful, and that their sweeping of the ballot was an accident this year.[/ref]

Finally, I’d like for some Sad Puppies folks to get together with some of their critics and see if they can hammer out their differences for the good of the awards and the community as a whole. I have to give props to Mary Robinette Kowal (very much not a Sad Puppy supporter) for being exemplary in this regard. She has called on folks on her side to knock it off with the death threats and the hate mail, and she has also started a drive to get more people to buy supporting WorldCon memberships so that they can vote as well. For his part, Larry Correia has stepped in to stop his supporters from attacking Tor as a publisher. These are all good signs, and I hope that more moderate voices can prevail, especially because the radicals on both sides are the ones threatening to nuke the entire award system. Social justice warriors are campaigning for Noah Ward (get it?) to shut down Sad Puppies definitively. Meanwhile, Vox Day has already pledged that he would retaliate by trying to shut down the entire award system next year with a No Award campaign of his own for Rabid Puppies 2. Given the first finding in this post, such a threat should be taken seriously.

Sad Puppies 3 was a good idea, but the execution was lacking this year. The best solution for everyone is for the voters to read each book and vote according to quality, including No Award if that’s what they genuinely feel is the right vote based strictly on the quality of the stories. And it is also for SP4 to get out ahead and take steps to avoid repeating the ballot sweep next year as well as to continue to shore up support among moderates, liberals, and apolitical folks to try and depoliticize the entire discussion a little bit.

After all the anger and vitriol over the past couple of weeks, there’s still a way for good to come of this. At the very least, I dearly hope that the legacy of the Hugo awards can be preserved.

154 thoughts on “Some Sad Puppy Data Analysis”

  1. An article at Big Think
    http://bigthink.com/ideafeed/the-current-hugo-awards-controversy-is-a-cultural-proxy-war

    declared “I’m not willing to take sides for myriad reasons, most of which don’t take into account whatever topic people are arguing over today. What can be said is that the progressives appear to be winning… for now.”

    This made me think of your claim about Gamergate
    http://www.firstthings.com/web-exclusives/2015/01/gamergate-at-the-beginning-of-2015
    “Video game journalists as a group aggressively took up the social justice advocates’ perspective on the problem and rapidly used their near absolute control over the media portrayal of the story to launch a coordinated and ruinous attack on Gamergate supporters. . . Gamergate is dead, and the truth is irrelevant. Regardless of what gamers did or didn’t do, they were successfully painted as vicious thugs, and now the term “Gamergate” is toxic. A few holdouts remain, but there is no hope of support from a sympathetic public because the PR war was lost, and it was lost decisively.”

    The media has learned that if they all say the same thing over and over, loud and long enough, they can drown out the alternative voices. Despite how the internet is supposed to break the chains of traditional media and allow more conservative voices to be heard, it isn’t working that way. The all too predictable flood of articles and blog posts all trumpeting the same (usually false) talking points, appearing all around the same time, shows the power of the SJW fringe to get its message out quicker and with more coverage.

    At this point, I think SP is in serious danger of going the Gamergate route.

  2. Ivan-

    At this point, I think SP is in serious danger of going the Gamergate route.

    Your concern is reasonable, but I do have hope. SP has so far managed to avoid some of the pitfalls of Gamergate. For starters, there have been no incidents of the most egregious kinds of harassment (death threats and harassment from both sides, yes, but no swatting or doxxing yet). That’s really important. Additionally, SP is distinct from RP. That’s also important. Finally, SP3 in particular worked hard to maintain itself as a diverse and moderate coalition.

    As a result of these things, SP has been able to keep some really big, respected names associated with it. Folks have gone to town vilifying Brad Torgersen and Larry Correia, but have been far more circumspect about going after, for example, Jim Butcher and other big names on the SP slate. (Butcher is the biggest.) You’ve also got George R. R. Martin engaging in dialogue directly with Larry Correia and Brad Torgersen.

    All of this really undermines the usual playbook. The whole point of the torrent of accusation and vilification is to make the targets so toxic no one will even touch them. That’s how you win. That’s what happened, in my view, to Gamergate. But if George R. R. Martin is having a direct conversation with Correia in front of an audience of thousands (or even tens or hundreds of thousands; I have no idea what their respective traffic is like), that really lessens the efficacy of the toxicity, simply because someone is engaging with Correia.

    Meanwhile, it’s hard to write off the Sad Puppies when they have successfully brought on board a diverse and prominent slate of authors.

    A lot really depends on what happens between now and August. If most of the SP slate stays in the SP camp, and especially if they get some victories at the Hugos, I think they keep their legitimacy. If they have any major defections or desertions, or if they get completely shut down in the Hugos, then that makes SP vulnerable.

    That’s why I’d really like to see SP get more proactive about building bridges, extending olive branches, and working hard to maintain a relevant and moderate voice.

  3. Yes – but you also have big names like David Gerrold dismissing it all outright. Gerrold just recently on fb called Brad “ignorant”, “squalid”, “whiny”, and directly compared him to the Confederacy in the Civil War. He called SP a “little turd in the punch bowl” and declared “there will be consequences.”

    Then you have the Haydens at Tor, who are really working overtime to smear the SPs. Scalzi posted what seemed a fairly reasonable stance on his blog, but his twitter behavior has been on par with the worst SJWs in this mess.

    George RR Martin is apparently the lone voice of reason on one side. Now, he’s huge and has more pull than all the others combined – but the media outlets are reporting on him like he’s slamming SP the way everyone else is. Sites like io9 and The Mary Sue are cherry-picking his quotes to make them seem more attacking than they are. More people will see those summaries than read Martin’s “not a blog” posts.

  4. Ivan-

    Yeah, it’s really hard for me to step away from all my research and try to see SP the way it’s being viewed by folks who aren’t studying it as intensively as I am. That’s my concern.

    But, in fairness:

    1. I have no idea who David Gerrold is.

    2. The Haydens come off as kind of crazy to me.

    3. SPs who are also in favor of moderate engagement include folks like Mary Robinette Kowal (a good friend of Scalzi’s, incidentally.)

    I also think that the overkill by EW and the resulting correction was embarrassing and important for revealing just how out-of-touch the mainstream press can be. I mean, that correction was EPIC.

    So yeah: from my perspective SP is doing well, but it’s so hard to know when you can safely generalize your own perspective and when you can’t.

  5. 1. David Gerrold is best known for writing “The Trouble With Tribbles” and parts of several other scripts for the original Star Trek TV series (and the animated series – he also wrote the series bible and several scripts for the TV series “Land of the Lost”). He was also a consultant/story editor for the first season of Star Trek: TNG (and thus wrote most of the series bible for that show). He also worked on Babylon 5 and several other SF shows. He’s written several novels, but his TV work is his biggest claim to fame.

    2. EW is the only major media outlet that I’ve seen to issue a correction. The “all male/white/straight” accusations are still out there on dozens of major (and many more minor) sites.

  6. One of the other complaints that the Sad Puppies have is that certain publishers (Tor, Orbit) seem to be nominated excessively whereas others (Baen) seem to be shunned.

    There is some evidence for that.

    I looked at the Hugo nominations and winners from 2001 to 2014 and found that (excluding editors – because I couldn’t figure out who belonged where) Tor had 37 nominations and 11 wins while Baen had 7 nominations and 0 wins.

    I collated my raw source data (from thehugoawards site) into http://sadpuppi.es/hugo.html; I would be very happy to have someone do more with it. And if asked nicely I could probably extend the time period back into the 90s.

  7. MRK is not SP, but is playing fairly.

    And for not knowing who Gerrold is, TNH will claim you’re “not a real fan.” Really.

    You are correct about overachieving. Brad asked if he could promote my works, I said, “Sure.” At no point did Vox ask me. But, as GRRM noted, no one should have to ask to promote a work they like, and it’s impossible to locate and stop people who want to promote your works.

    I find the other works in my category (Related Work) to be excellent overall, even if I don’t agree with some of the authors. But I would feel no shame losing to any of them.

  8. Nice to see someone putting some effort into actual numbers!
    I’d love to see (and I don’t know how practical this is) some handle on variance with Goodreads scores. As Eric once pointed out to me (correctly, I think), wide variance is a good measure of ‘love it or hate it’ books: two books might both score 3.5 off a hundred readers, but if the first gets there by 95% of the scores being 3.4-3.6 while the other ranges from 1-5, the latter at least has some folk who think it wonderful. Of course Goodreads is itself problematic, as it is a self-selected sample pool, probably skewed away from some demographics.
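    To make the variance point concrete, here is a quick sketch (all the numbers are invented): two books with identical averages but very different spreads.

```python
from statistics import mean, stdev

# Two invented books with the same average rating but different spreads.
consensus = [3.5] * 95 + [3.4, 3.6, 3.5, 3.4, 3.6]  # everyone lukewarm
divisive = [5] * 50 + [1] * 25 + [3] * 25            # love-it-or-hate-it

for name, scores in [("consensus", consensus), ("divisive", divisive)]:
    print(f"{name}: mean {mean(scores):.2f}, stdev {stdev(scores):.2f}")
```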

    The other point I’d like to make about the Tor submissions is that, unless I am much mistaken, that’s submissions via agents (a filter already, possibly introducing bias, since they don’t take un-agented work), not purchases, and it presumably makes no differentiation between cohorts. So, for example, you’ll have more old men submitting their tenth manuscript than old women in the same position, because historically the field was harder for women to enter. I did a brief breakdown a few years back of new vs. established authors released by a number of publishers in one month (painfully manually counting books, checking sex, and looking up prior publication history). The difference was quite substantial, skewing female for new entrants. The combined result, however, looked fairly balanced. All in all it’s very complicated and deserves thorough research, which would be of value to authors, publishers and readers. Thank you for your efforts.

  9. Overall a very well written piece. My biggest concern and disagreement is with the statements on Mary Robinette Kowal: I see her as making an outright attack and attempting to buy votes for No Award.

  10. Thanks for the analyses. Here are a couple of posts that you might (or might not) find interesting re the author-gender-balance issue:

    1. My long 2006 post about author-gender bias: http://www.kith.org/journals/jed/2006/08/12/3627.html (Part of the point of this was to argue against the fairly widespread idea that saying “well, only 20% of submissions are from women, so that’s just how things are” is a conclusive argument.)

    2. My author-gender-in-Hugo-fiction-categories page: http://www.kith.org/journals/jed/pages/hugo_stats_author_gender.html (Haven’t yet updated for this year. My graph is the inverse of yours; I graphed the percentage of works by women. I think a bar chart is a little better than a line chart for this sort of thing, though. And I pointed out that when you’re dealing with about 20 nominees per year, a difference of 1 nominee is about 5%.)

  11. A couple of other things I think may be worth noting:

    1. There are claims that RPs called for otherwise-uninterested Gamergaters to nominate (in order to stick it to the SJWs), and that that may be why the RP slate did so well. Since you don’t like the Nielsen Haydens, I won’t point you to the Making Light post where there was extensive discussion of those claims, but they sounded pretty solid to me. That may not be relevant to your analysis, but given that you’ve written extensively about Gamergate, I was surprised that you didn’t mention it.

    2. For a detailed look at the success rates of the SP and RP slates across all the categories, not just fiction, see (http://file770.com/?p=21708).

  12. Jed, I’m afraid the GG stuff is wishful thinking on the NHs’ part. Any effect the appeal for their participation had was trivial. See here for a good breakdown of the numbers – http://shetterly.blogspot.com.au/2015/04/two-more-essential-points-about-hugos.html

    This may however change, given the abuse from the media and the attitude of some of the vested interests who seem to center on the site you mention. Given the distinct possibility of the same vested interests orchestrating the EW hit piece (they remain one of the few groups with both the motive and the ability to organize that), they’re not exactly a source I’ll ever trust without a steam-shovel load of salt, if not the entire Dead Sea’s worth.

  13. Just checking to make sure that you know that Kary English is a woman. I saw another critique of the male/female ratio that included her as if she were male.

  14. This is a nice, balanced look at the issue in a way that is sorely needed. There’s a lot of talk about how people feel things are going, but not many attempts to back these feelings up with real data. Thanks.

    One critique I think should be pointed out is that the Goodreads data is itself possibly affected by the awards and nominations because there’s not a way to limit the ratings to those that existed prior to the awards. So the WOT series (for example) might have only had a couple ratings for the series as a whole prior to being nominated, but after it was announced several new ratings could have been added by people wanting to influence the perception of the series one way or another. Or the fact that a book won a Hugo could influence ratings made after it wins. All this might have no effect on the results, but there’s not an easy way to tell.

    Still, as you point out, this isn’t a submission to Nature. :) It’s certainly better than anything else I’ve seen and hopefully will encourage some thoughtful discussion on both sides.

  15. I think you’re overstating the case on the Goodreads scores. Any score above 4.2 is rare, and having so many new books highly rated makes it likely their scores will revert to the mean in the longer run. Similarly, it’s possible for some of the recent winners’ scores to rise a bit closer to the mean too.

  16. One thing that bothers me about “diversity” is the imprecision about the kind of diversity that is being discussed — and demanded. A simple case, male vs. female. Is a work “female” because of the author’s biology, the protagonist’s biology, or the POV of the teller of the story? Robert Heinlein’s Friday or John Scalzi’s Zoe’s Tale, for example: male author, female protagonist, female POV. (All three female kinds are woefully underrepresented, there is no question, and I know it is more complicated than mere plumbing.)

    More female authors, protagonists, POVs?

  17. There are a few problems with using Goodreads. First, this is self-selected data, so it does not reflect the whole population the way a randomized poll/survey would, even when it has larger numbers.
    Second, Goodreads has only been around since 2007, so some of the ratings of earlier books may be people entering what they remember of books they read many years ago, which could be colored by their knowledge of what won the Hugo (or seeing the Hugo banner on the cover).
    Third, Goodreads already has a best book of the year award, and the votes there don’t always match the ratings. For instance, in 2014 the #2 SF book had a lower rating than the #4 book, and none of the top 10 were nominated for Hugos. In fantasy in 2014, the #1 book had a lower rating than #2 and #3 (and only one in the top 10 was a Hugo nominee).
    Also, the Hugos are a head-to-head competition: read these five and say which you like best. With Goodreads, there is no way of seeing how those who read all five liked or disliked them.
    And the Hugo voters, by definition, are science fiction fans. While fantasy does occasionally win, it is less likely to than an sf novel, whereas in the real world fantasy far outsells SF.
    In the 2012 Hugo contest, Goodreads shows A Dance with Dragons with a 4.27 rating from 281,852 ratings. Among Others, the winner that year, had a 3.67 rating and only 12,510 readers. (Note that on Goodreads it has the fewest readers and the lowest rating of all the 2012 nominees.) However, in many ways Among Others was written directly to fandom and about fandom, as much of the book is about the science fiction/fantasy the character read and her discovery of a book group. So it is not surprising that fans would be more positive toward it than the general population.

  18. Sam-

    I agree with pretty much everything you wrote about the limitations of the Goodreads data. I still think it’s an interesting point of comparison, however. And I’ll add one more point: I actually think that Goodreads ratings are more relevant than Goodreads awards precisely because they (1) are not exclusively connected with any specific campaign and (2) tend to reflect popularity over time.

    Obviously there are limitations to #2. For one, Goodreads has only been around a few years, and there may be systematic differences between how readers rate new books and books that are decades old. For another, reviews may very well change over time. It’s possible that the average rating for a book will start high (because fans read it first) and then gradually lower over time (because eventually more people discover it.)

    But, on the other hand, if that’s true then it should be equally true of the Hugo nominees and the Hugo winners, and so it wouldn’t explain the gap in the chart. Additionally, I would expect older books to get higher ratings, because only fans would be disposed to remember / rank them (a selection effect), and there may be a nostalgia effect as well. But that’s not really borne out in the data: there’s no obvious or clear trend towards higher scores the farther back in time you go.

    In any case, I think there are a lot more questions to answer about Goodreads data, and I hope to do that in coming months.

  19. Let me start by saying how much I enjoyed reading this. It is, by far, the best attempt to inject some objectivity into this discussion which I have come across, and I applaud both the tone of the post and the delightfully positive character of the responses. A rarity, in this field of discussion!

    That said, I want to say that you need to rethink your interpretation of the goodreads data, which I think is fundamentally problematic. You cite the “fairly extreme gap” between the winning scores and other nominees which has opened up in recent years, and offer a couple of potential explanations. Allow me to suggest another, which I believe must be one of, if not the overwhelming, explanation for this trend in the data. Goodreads was founded in 2006. All Goodreads reviews of books published prior to 2006 are reviewing “Hugo-winners” subsequent to the presentation of the award. Given the generally non-expert character of Goodreads reviewers, it would be astonishing to me if Hugo winning works weren’t reviewed much more favorably than non-award winning works, on balance. (It should be fairly simple for someone more statistically-inclined than myself to confirm the strength of this bias by looking at the relative scoring trends for other comparably prestigious awards in other genres.)

  20. That’s a very plausible hypothesis, Colin, but the data doesn’t seem to bear it out. In fairness, I’m just basing this on how the graph looks. I can and will do some statistical tests on this in the future, but in the meantime there’s no abrupt change from pre/post 2006 and there’s no overall trend towards higher ratings for older Hugo noms. I’m also skeptical that a lot of the people reviewing these books are aware of which ones are Hugo winners (or what the Hugos are).

    But I absolutely can’t rule your hypothesis out without doing some rigorous testing. I would like to do that in the future, and I hope to do so, but I’ll freely grant that right now all I’ve done is collect data and look at pictures. :-)
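    For the curious, the test I have in mind is simple: split the nominee scores into pre-2006 and post-2006 publication groups and compare the means. A minimal sketch with placeholder ratings (a Welch two-sample t-test via scipy is one reasonable choice, though all the self-selection caveats discussed above still apply):

```python
from scipy import stats

# Placeholder rating lists; the real inputs would be the average Goodreads
# scores of nominees published before vs. after Goodreads launched in 2006.
pre_2006 = [3.9, 4.0, 3.8, 4.1, 3.95]
post_2006 = [3.7, 4.2, 3.6, 4.3, 3.75]

t, p = stats.ttest_ind(pre_2006, post_2006, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")  # a large p would mean no detectable 2006 break
```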

  21. Don’t worry, Sad Puppies won’t be linked to any egregious harassment. The media will be too busy blaming Gamergate for it.

    Although you still need to worry about the media claiming that Sad Puppies is Gamergate.

  22. The series average of Wheel of Time is 4.12 with individual books getting between 75k and 175k reviews.

  23. Awesome write up! As a sociology grad student, I really enjoyed your data driven look at this problem. Thanks for all your hard work compiling it and sharing it with the sf/f community!

  24. Two things jump out at me about the Goodreads scores on what I’m calling “Figure 5”, i.e. the one where you circled the nominees above the winners.

    Point 1) Goodreads as a site isn’t even 10 years old yet. So the ratings on books more than 10 years old are potentially tricky when comparing them to recent books. (Are people rating books they just read now? Or are they rating the books as they remember them? How could we tell?) I’m not yet sure this is an actual smoking gun (it might be), but it does bear watching.

    Point 2) We’re much closer in time to the highlighted area, and what we have yet to see is how the long term affects people’s opinions on such books. What you might be seeing (and we probably won’t know for another decade or two) is what opinion looks like closer to the release date of such books, compared to how books “age”. Which is to say, we might see opinions on the books change over time.

    I’d be tempted to draw a delineation in the data for works that existed pre Goodreads, versus works that came out after. I’d also be curious if you compared the data with, for example, Amazon numbers and LibraryThing numbers for the same works. Do we see the same trends? It would be very interesting to see such a comparison, to rule out any particular quirks of one data set. (Or, possibly confirm the trend, regardless of site of reporting.)

    Just as a quick test, I took the year 2012, where the book that won ranks lowest on your chart. When I ran that one against LibraryThing’s rankings, the winner shifts up to the middle of the pack, and the overall spread is much narrower (a low score of 3.89 and a high of 4.14, or a .25 spread, compared to the stats from GR, which has a low of 3.67, a high of 4.27, and a spread of .6). I would be curious to follow the trends for the next few years and see where they go. (Including adjusted-for-time data, i.e., do the perceptions stick or change over the years?)

  25. Just to add to my above comments:

    J. Michael Straczynski (Babylon 5) has called for the Hugos to just cancel their awards this year.

    Connie Willis (multiple Hugo and Nebula award winner, and very good author as well) has declared, solely on one Daily Kos article, that she will not present at the Hugos.

    A few more big names like that, and SP will lose, the way GG lost.

  26. You might be right, Ivan. If big names who are not known for being ardently political start shunning the SP folks, then it’s bad. I was particularly disappointed in Connie’s piece, where she clearly only has issues with Vox Day / Rabid Puppies, but then conflates him with Brad Torgersen and Rabid Puppies with Sad Puppies 3 as though they were the same thing.

    Then again, I’ve felt that Correia and Torgersen should have been distancing themselves from Vox Day since before Sad Puppies 2. I understand that Torgersen in particular has a principled opposition to shunning anybody, but (1) it’s tactical suicide to go to war in the public sphere with that kind of baggage and (2) I think he’s taking the principle too far. If he refuses to engage in the shunning / expulsion rituals on principle: great. But (for example), there was no good reason for Correia to ever put Vox Day on the SP2 slate and SP3 could have issued a public statement when RP first started simply explaining that the two groups are totally distinct. That’s not cowardice or ritualistic shunning. It’s just the truth.

    Still too early for me to give up hope, but these are definitely some negative signs.

  27. I stumbled on this blog while reading round the Hugo drama, and I just wanted to say thank you so much for providing such a thoughtful analysis.

    I’ve seen a lot of people say the same as you do here: ‘I was particularly disappointed in Connie’s piece, where she clearly only has issues with Vox Day / Rabid Puppies, but then conflates him with Brad Torgersen and Rabid Puppies with Sad Puppies 3 as though they were the same thing.’
    I’m not sure that I understand the reasoning behind this position. Both Puppy campaigns were started by the same people; these people are all friends; they were one campaign until just a few months ago; they have similar stated aims and similar names; they promote many of the same works; and they use the same tactics to get on the ballot. Isn’t it reasonable for people to conflate them? To an outsider it looks like they are the same thing: one group of people playing ‘good cop, bad cop’.

  28. Great analysis, Nathaniel. A fair breakdown that gives everybody a lot to think on. In spite of that, I fear for you a little bit. I would suggest you look into the recent work of Chris Von Csefalvay: his wonderful analytical pieces, and what happened to him when he published his findings regarding a similar social/cultural conflict that shall go nameless back in December. Take every possible precaution to avoid being sucked into an online or offline hate storm. Stay safe, stay sane, avoid social media at all cost.

  29. “there was no good reason for Correia to ever put Vox Day on the SP2 slate”

    He liked the story; there is no accounting for taste. There are apparently people who think “If You Were A Dinosaur My Love” was science fiction.

    I recently completed my own list of Best Novel nominee and winner ranks on Goodreads, which makes me glad to find someone who worked on the same task.

    Did you give the earlier much smaller version of Dune the same score as the published novel? That’s what I ended up doing.
    I hope you found a more efficient way to do it than lots of transcription and copy/pasting from hundreds of Goodreads searches.
    Did you look at the number-of-ratings data? That seems like it may work as a proxy for how popular / widely read the work is, especially given how hard reliable publishing data is to find, and its inherent lack of tracking of library and 2nd-hand readers.

    Simple proof that GamerGaters haven’t gotten involved yet, not a single Halo novel nominated.

  30. “J. Michael Straczynski (Babylon 5) has called for the Hugos to just cancel their awards this year.”

    That I find incredible. Where?

    As far as SP losing if too many more big names come out… don’t put the cart before the horse.

  31. I think you have a problem in your analysis of your “Finding 2.” You pull out the reported genders of those submitting science-fiction stories to the “Tor submissions inbox” and say “So, over the history of the Hugo awards from 1960 – 2015, 79% of the nominees have been male. In 2013, 78% of the folks submitting sci-fi to Tor UK were male.” However, all of the categories submitted to Tor UK would qualify for a Hugo award, so the more accurate correlation would be the 79% of nominees to the total 68% of submissions. And while you quote the article concluding that the stories are judged without gender, that doesn’t take into account whether or not the stories were evaluated without the author’s name attached. Studies have shown, for example, that scientific papers presented with a woman’s name are generally evaluated less favorably than if the same paper is presented with a man’s name. One should look, therefore, at the stories published by Tor UK to see if the percentages published by gender correspond to the submissions.

  32. Why didn’t the esteemed author ask the SJWs to recommend 10 folks for best short story, for example, a couple of years earlier?

  33. Pluviann-

    Well, it’s sort of two separate questions. First: how do Sad Puppies and Rabid Puppies appear to be linked? And second: how are Sad Puppies and Rabid Puppies actually linked?

    As for appearances: it’s going to depend on where you’re coming from. My introduction to Vox Day came via John Scalzi’s blog, which I’ve been reading regularly for nearly 10 years. Every now and then Scalzi would post these cryptic references to someone who was posting negative stuff about him, but I had no idea what he was talking about. I think several months went by before I finally found out who Vox Day was. I did my own research, particularly because at that point I knew my politics and Scalzi’s were pretty far apart, and everything I found was pretty negative. Vox Day reminds me of Ann Coulter. I think we end up on the same side of quite a few issues (not all of them!), but I can’t agree at all with either his reasoning for ending up there or with his tactics that emphasize confrontation and absolutism. This was probably in 2012 or so. I found out about Sad Puppies 2 (I was unaware of the first one at all) about a year ago in 2014. So from the very first that I started following it, I basically agreed with Correia’s views and was disappointed that Vox Day was included on the slate.

    Fast forward to Sad Puppies 3, and I followed that from the very first announcement. One of the first things I did, in fact, was scan the slate to see if Vox Day was on it. I was happy to see that he wasn’t. I was even happier to see lots of apolitical and liberal folks. This was a diverse slate, and it was clear that Torgersen was taking a more moderate, constructive approach. I was very pleased. I had no idea what Vox Day was up to at this point. I had no idea that Rabid Puppies even existed. I was pretty sure (and Torgersen has since confirmed this) that several of the writers on the SP3 slate would have refused to be on it if they had to share it with Vox Day.

    Sorry if this is long. My main point is just this: from the perspective of a long-time SP supporter, SP and RP don’t appear to be connected at all. They may have been with Sad Puppies 2, but with Sad Puppies 3 the SP folks divested themselves of Vox Day.

    As for the second question: there’s no coordination between SP3 and RP. They didn’t come up with a joint slate. SP3 came up with a slate all on their own, and then RP copied most of it (with a few changes). There’s definitely some overlap between the supporters, but honestly I don’t think it’s that much. I have met lots and lots of supporters of SP3 online. Some of them consider Vox Day and RP an ally, but most consider him a nuisance or a threat. I haven’t met a single RP supporter, as in someone who is comfortable with the goals and methods of Rabid Puppies. What’s more, Brad Torgersen (and Larry Correia, I believe) are both on record as stating that Vox Day is a bad guy, but they refuse to attack him. I disagree with that stance, but I can see where they are coming from. It’s a “hate what you say, defend your right to say it” stance. I think it’s misguided, but I respect their intentions. But the point is: there’s no operational coordination. There’s not much overlap in the foot soldiers, as it were. (Although there is definitely some.) And there’s not much love lost between the respective leaders, either.

    Lots of the folks who are blurring the line between RP and SP3 are doing it strategically. Vox Day says awful things, and so it’s convenient to use those as weapons against SP3, which is much more moderate. I think that’s reprehensible. Lots of other folks are blurring the line because they honestly think the two are interchangeable or closely related. And–given the folks intentionally promulgating that view–it’s understandable.

    So I’m not mad at someone just for confusing the two. I’m not mad at you for asking about it, and I’m not mad at Connie Willis for explicitly linking them. But appearances can be deceiving, and SP3 and RP really ought to be viewed as separate groups.

  34. “why do women outnumber men 2:1 in the YA category? Why is it that only the urban fantasy / paranormal romance category is anywhere close to parity”

    I would love to see what kind of money these genres are making. How does ‘sci-fi’ do compared to YA? To urban fantasy?

    Obviously there have been several female authors making an absolute MINT off YA in the last 5-10 years. Maybe this just shows that the Hugos skew sci-fi?

    Thanks for compiling the data as it’s interesting.

  35. Peter Hentges-

    I think you have a problem in your analysis of your “Finding 2.”

    Well, just so we’re on the same page, I have problems with all of the analysis. I just want to keep everyone’s expectations realistic. This is an interesting and useful conversation to have, and I hope it informs people who care about this issue, but I don’t for a second think that my analysis is conclusive, and the reason is that there are lots and lots of data issues. Someone has faulted me for not providing confidence intervals, for example, but the reason I didn’t do that is simply that this is all very informal. I would like to follow it up with more rigorous analysis, but that’s going to take time. For now, we’re just talking about some Excel charts, and that’s just the level we’re at.

    That being said, I don’t entirely agree with your particular concern here. While it’s true that all of the Tor submissions would have technically been eligible, the Hugo award is a sci-fi award. Yes: fantasy wins now and then. Harry Potter did. So did Paladin of Souls. Just two examples of books I love off the top of my head. But most winners are sci-fi despite the fact that these days fantasy outsells sci-fi by about a 2:1 margin. The fantasy winners tend to be either (1) special cases or (2) examples of where genres get blurry. After all: there’s no objective definition of “sci fi” and so there’s no way to set up strict eligibility requirements. But there’s definitely still a tradition and emphasis on rocketships and robots over swords and dragons.

    As for the studies you suggest: that’s an interesting point, but not something I can really evaluate. I didn’t categorize author names as male / female. I stuck with the actual authors themselves. So James Tiptree, Jr. is female in my list (despite the male name) and so is C. J. Cherryh (despite the ambiguous use of initials and the modified last name).

  36. MC DuQuesne-

    Did you give the earlier much smaller version of Dune the same score as the published novel? That’s what I ended up doing.

    Yup, that’s what I did, too.

    I hope you found a more efficient way to do it than lots of transcription and copy/pasting from hundreds of Goodreads searches.

    For this version: nope. Just lots and lots of manual data entry (not even copy-paste). So risk of transcription error is another factor to keep in mind when looking at the data, although I did double-check all the specific scores I pointed out so I don’t think there are any major errors.

    For the next version of this analysis, however, I will be taking a very, very different approach. I actually have permission from Goodreads to scrape their database, and I’ve been gathering data for months now. That was for a separate project (still in progress), but with some tweaks I’ll be able to adjust that data set for use here. That will give me much, much better data and also a lot more of it. At that point it might make sense to start formalizing some models and running statistical tests.
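
    If you’re curious what the collection step looks like, here’s a minimal sketch of that kind of collector. The URL pattern and the CSS selector are placeholders I made up for illustration, not Goodreads’ actual markup:

    ```python
    import time

    import requests
    from bs4 import BeautifulSoup

    BOOK_IDS = [123, 456]  # hypothetical Goodreads book IDs

    def fetch_avg_rating(book_id):
        # The URL pattern and selector below are illustrative placeholders;
        # the real pages would need to be inspected first (and scraped only
        # with permission, as in my case).
        url = "https://www.goodreads.com/book/show/%d" % book_id
        resp = requests.get(url)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        node = soup.select_one(".averageRating")  # placeholder CSS selector
        return float(node.get_text(strip=True))

    ratings = {}
    for book_id in BOOK_IDS:
        ratings[book_id] = fetch_avg_rating(book_id)
        time.sleep(1)  # throttle: be polite to the server
    print(ratings)
    ```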

    Did you look at the number of ratings data?

    Yup, I did track it and for the same reason, but it ended up not being as interesting (so far) as the ratings gap that I ended up putting in the post.

  37. In regards to the “SadPuppies going the gamergate route” claim: it should be noted that gamergate, though it lost the PR war in the public eye, has nearly achieved all its goals and has won the war it set for itself.

    Namely that the press has been forced to change their ethics policies and/or has lost its influence over what games get made and what games get purchased.

    In the context of comparing it to gamergate, you should not confuse the war the Sad Puppies set out to fight and win with the war their critics are fighting.

  38. Sean:

    Here (on his fan page on fb – he runs it and writes the posts)
    https://www.facebook.com/permalink.php?story_fbid=963692290332301&id=139652459402959

    Here’s an excerpt:

    “Like most people, I’ve known women who have been in abusive relationships. Their boyfriends, lovers or husbands knew how to play the game to keep them close, knew how to work the system to keep from being penalized for their behavior, and when asked “well, why don’t you just leave him?” they often don’t have a better answer than “I’m stuck with him.”

    Until the day comes when they realize they aren’t stuck with those guys, that they can simply…walk out the door. They leave the relationship unless and until they can be sure that the situation has changed. Or they simply don’t come back . . .

    Leave the relationship.

    Cancel the Hugos.”

  39. Hi Nathaniel,

    First off, kudos to you for taking a stab at some data analysis here. Actual numbers are truly hard to find.

    I have to echo the concerns of Sam, Colin, and Ed vis-à-vis Goodreads, for largely the same reasons. Rather than rehash the point: are there any secondary sources for popularity that would not be so self-selecting? e.g. contemporary ratings, annual sales figures? I realize that these introduce their own sources of bias, but more data is rarely a bad thing.

    I realize that some of these figures might be difficult to get, which is why I feel badly about asking for even more diversity information: would there be any chance of you expanding the nominations vs. winners to account for race or political orientation? One of the main accusations I’ve seen levelled by the xPuppies is that the diversity of the Hugos does not reflect a diversity of opinion, only of biology. I’m not nearly well-read enough to have an opinion on that front, but I think it would be fascinating. (I know GRRM has posted something to that effect; perhaps that could be a starting point?)

    Thanks again for putting some numbers into this debate, and all the best in future analysis.

  40. But most winners are sci-fi despite the fact that these days fantasy outsells sci-fi by about a 2:1 margin. The fantasy winners tend to be either (1) special cases or (2) examples of where genres get blurry. After all: there’s no objective definition of “sci fi” and so there’s no way to set up strict eligibility requirements. But there’s definitely still a tradition and emphasis on rocketships and robots over swords and dragons.

    This actually answers a few of my questions! I think the ‘gender bias’ in the Hugos seems to be mostly this scifi love, as opposed to fantasy/romance etc. Now whether those genres are considered lesser on their own terms, or because they are mostly written and read by women, is sort of an open question.

    But I think the women are being smart here. They may not get Hugos, but they get PAID writing YA and fantasy and romance. Why would they switch over to sci-fi?

  41. “Gamergate is dead, and the truth is irrelevant. Regardless of what gamers did or didn’t do, they were successfully painted as vicious thugs, and now the term “Gamergate” is toxic.”

    Who sees them as “vicious thugs”? Non-gamers? Non-gamers are irrelevant because they don’t purchase games. They are old and always saw games as violent.

    Gamers are overwhelmingly pro-gamergate, and in the end that’s what drives change. Our money keeps them in business.

    The only reason I came to post here is because someone linked your article in KotakuInAction. Some dead people, huh.

  42. Anonymous Scholar-

    I don’t mind folks pointing out the concerns with Goodreads. They are perfectly valid. I think that any other popular voting site will be subject to the same concerns / criticisms, but I would like to take a look at a few more to see if they exhibit any major differences. However, I think that Goodreads is the biggest (approx 30m members), and that makes the selection concerns less severe than they would be at a smaller, more focused site. So my plan is to dig deeper into the Goodreads data and see if I can address some of the concerns about selection and also about timing. And my plan is to also look into other data sources. I’m happy to take suggestions!

    I will note, however, that sales data is basically impossible to get a hold of (as far as I can tell) in any systematic way. One other data set I have created myself, however, is to go out and find a bunch of lists of “the best sci fi novels” from a variety of outlets (NPR did one, for example) and then just check to see how often a given book shows up on the different lists. I’ve already gathered that data, but it was a year or more ago. So I think the next step for me will be to compare this analysis (the aggregated best of lists) with the Goodreads data (both rating score and also number of ratings) to see if I can find any systematic differences between the aggregate expert opinion and the Goodreads audience.
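
    Mechanically, the cross-list comparison is nothing fancy: just a tally of how many lists each title appears on. A toy version, with invented list contents standing in for the real “best of” lists:

    ```python
    from collections import Counter

    # Invented stand-ins for the real "best of" lists (NPR's, etc.).
    best_of_lists = [
        ["Dune", "Ender's Game", "Hyperion"],
        ["Dune", "Neuromancer", "Hyperion"],
        ["Dune", "Ender's Game", "Foundation"],
    ]

    # Count how many lists each title shows up on.
    appearances = Counter(title for lst in best_of_lists for title in lst)
    for title, count in appearances.most_common():
        print("%s: on %d of %d lists" % (title, count, len(best_of_lists)))
    ```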

    would there be any chance of you expanding the nominations vs. winners to account for race or political orientation?

    That’s really, really hard to do. What I did already was do a variety of Google searches to try and find this kind of data for all of the folks on the SP3 slate. And it was impossible for me to do. Lots of authors don’t have much of an online presence (that I can find), and my biggest concern there is that there’d be a massive selection effect in terms of me being much more capable of identifying info on authors who are in or near my social circle than otherwise.

    It might be a good candidate for crowd-sourcing, but then you run into other issues. I mean, there are probably a lot of authors who don’t talk about politics because they don’t want to talk about politics, and publicizing their views would be problematic even if I could identify them. In the course of research I did for another article I’m writing, I did get a hold of a couple of big names in sci-fi who have so far said nothing in public. Some were willing to talk to me, but only off the record. They had a lot to say and very strong opinions, but also an emphatic instruction that I could not publish their names. So any effort to start gathering a database with politics, race, religion, etc. would run into some serious privacy issues, is what I’m basically saying.

  43. Nathaniel – thank you for the response! It’s very interesting to hear that not many Sad Puppies support Rabid Puppies at all – this was news to me.

    Regarding a secondary source for popularity data: Nicholas Whyte uses LibraryThing and Goodreads when he’s analysing book popularity, but that wouldn’t solve the ‘self-selecting’ problem.

  44. GG Supporter-

    I know there are Gamergate supporters still alive and kicking because they like to come in droves and contradict anyone who says otherwise. :-) That’s what I found out in response to my article for First Things (which is where the quote in your comment comes from). I also know my article was posted in KotakuInAction at Reddit because I see the incoming traffic. More evidence that Gamergate isn’t literally dead.

    I’ll admit that I’m curious. All the gamers I talked to (and I know quite a few) agreed that Gamergate was dead, even the ones who were sympathetic to it. Either your perspective is biased because of where you are, or mine is biased because of where I am. (Or both.) I’d like to dig deeper into the issue, but until then there’s not much more for me to say about it.

  45. “All the gamers I talked to (and I know quite a few) agreed that Gamergate was dead, even the ones who were sympathetic to it.”

    Is this really the way to gather info on anything, asking a handful of people? Check the hashtag, KiA, or 8chan. They are running OPs and campaigns on Twitter all the time. Check the hashtag stats and quantity of posts per day. Check the movement in forums dedicated to it.

    Right about now, as in happening in this exact moment, Gamergate supporters are tweeting the hashtag #StopWebH8 with screencaps of anti-GG people saying hateful things.

  46. Thanks for writing this!

    I do want to say that I don’t think the Tor UK submissions are the best baseline for the gender breakdown of the field, for a few reasons: 1) as Jed pointed out above, submissions are not the same as publications; 2) it is a UK-only measure.

    A possibly better measure is the Locus books received, which Strange Horizons use in this article http://www.strangehorizons.com/2014/20140428/2sfcount-a.shtml about gender distribution in SFF reviewing.

    This of course has a different set of problems, as pointed out in the article. However, their methodology leads to an estimate that 43% of English-language SFF books published in 2013 were written by women.

    Of course both of these methodologies only look at novels, so it is possible the numbers for the short fiction categories differ.

  47. “Either your perspective is biased because of where you are, or mine is biased because of where I am. (Or both.) I’d like to dig deeper into the issue, but until then there’s not much more for me to say about it.”

    If it is dead then why are bloggers and journalists covering SadPuppies talking about GamerGate?

    It should also be pointed out that the main critics of GamerGate (namely press sites and other current/former taste makers) still write, tweet, and blog almost daily about it. If its critics don’t think it is dead, it is probably safe to assume it is not dead.

    You should also look at this graph:

    http://topsy.com/analytics?q1=%23gamergate&q2=%23sadpuppies&via=Topsy

    7.5 months in, GamerGate is less dead now than SadPuppies was ever alive.

  48. A-bob-omb-

    Sure. I read the statement, and it makes three basic points:

    1. There are data issues.
    2. The analysis is simplistic.
    3. The problem is very complex.

    I agree with all three points.

    What I’ve done is take the best and the most info I could, and then do the appropriate analysis. I stand by my decision not to use statistical tests and generate a bunch of meaningless p-values. Using that kind of precision and sophistication when (1) the data is pretty rough and (2) you haven’t formalized your statistical models and specified your hypothesis tests would be irrelevant showing off. And I fully recognize that no amount of realistically available stats is ever going to resolve this issue with objective certainty or even a very high degree of confidence.

    My attitude is simply that we have to start somewhere, and here is my start. I hope to follow up with more data and better data. When I do, I will ratchet up the sophistication of the analysis to be commensurate with the data. In the meantime, I tried to set expectations realistically and be perfectly transparent that this is just a quick take based on the data available. The fact that this is all very far from perfect doesn’t stop it from being a positive contribution, I hope.

  49. Hooc Ott-

    If it is dead then why are bloggers and journalists covering SadPuppies talking about GamerGate?

    Think about your own question for a minute. Why would people out to trash Sad Puppies link it to Gamergate if they thought that Gamergate was successful and vibrant? The fact that the comparison is being made by folks who dislike Sad Puppies based on tenuous evidence should make you realize that–at least in their minds–they already dragged Gamergate down and would love to shackle Sad Puppies to it.

    Look, I understand you folks are very passionate. I’ve stated that I don’t share your perspective, but also that I’m interested in learning more about the movement and possibly changing my mind, but not in this particular thread, OK? Gamergate is a little OT relative to this particular post.

  50. Gamergate supporters remind me of Ron Paul supporters. I have had a handful of Ron Paul-supporting students who usually wanted to write essays about how Ron Paul, based mostly on internet activity, was really the most popular, best-liked, and most successful politician, like, ever.

    Except he couldn’t get very many votes. He had a small but very dedicated group of followers who basically inflated the appearance of his support due to lots of online presence. However, that support didn’t translate into actual victories in the primaries.

    I think there’s a similar dynamic with GG. The media has made GG toxic, hence the attempts to link it to SP, actual timelines be damned.

  51. “not in this particular thread, OK? Gamergate is a little OT relative to this particular post.”

    OK

    I think that is unfair but OK.

  52. “Gamergate is a little OT relative to this particular post.”

    You might strongly want to reconsider that, because your post has proved one very important thing which no one has yet admitted: despite the sad puppies getting all the glory/shame and the rabid puppies being seen as a sideshow, it’s the rabid puppies that are leading the sad puppies, not the other way round. And the gamergate support which Vox Day has secretly drawn is playing a much bigger part in the Hugo dominance than people realize. There’s plenty of gamers who read sci-fi/fantasy too, and they have been wanting to grind their axe against the cliques that control Worldcon and the Hugos. What you are seeing with sad puppies/rabid puppies is the spillover of the gamers’ anger into other genres, and that anger is going to get worse before peace is possible.

  53. Hooc Ott-

    I’m not trying to squelch you or anyone else from Gamergate. If you read my stuff, you’ll see that I’m sympathetic, for one thing, and furthermore I don’t do a whole lot of comment moderation here on my blog. For the most part I ask people to be constructive, they are, and so I don’t have to implement a bunch of rules or do a lot of policing. Which is how I like it. So I’m not telling you to be quiet or to stop commenting. But I am simply saying that I can’t reply to every comment, and in particular I, personally, can’t get into the “What’s the current status of Gamergate?” conversation at this time or in this thread. I would also be unhappy if such a conversation ended up drowning out the actual point of my post. I hope you don’t think that is unfair.

    Random Person-

    There are a couple of problems with the allegations of Gamergate secretly being behind SP3 via Vox Day and RP. The first is that, to my knowledge, Vox Day doesn’t do anything secretly. His whole strategy is in-your-face. The second is that Sad Puppies started before Gamergate by at least a couple of years. The third is that I just stay away from conspiracy theories in general because they are not helpful.

    The conspiracy theories that the SP / RP folks like (and we do have our conspiracy theorists) are all about secret collusion among the Hugo voters to game the system. Is this possible? Sure (see, for example, “The Latest Hugo Conspiracy Nonsense Involving Me”). Since it’s (1) unlikely and (2) impossible to prove, it’s kind of a distraction at best. You’ll notice I didn’t bother to even mention it in my post.

    Same thing goes for the conspiracy theories on the other side of the fence. Is it possible that Vox Day is secretly getting Gamergate folks to vote in the Hugos? Of course. But it’s not very likely. First: he’s got plenty of supporters of his own, as far as I can tell, and it would only take hundreds of people to make an impact. Why outsource that to Gamergate and risk springing a leak when he could stick with his own fanbase that he presumably knows and trusts more?

    These two conspiracy theories are mirror images of each other, and I’m not interested in either one of them.

  54. Ivan-

    Gamergate supporters remind me of Ron Paul supporters

    That’s actually a very good comparison on a whole number of levels. Including the fact that I actually had a lot of sympathy for Ron Paul too. Although he was never my favorite candidate, there was a lot to like about him.

  55. Interesting analysis, but your chart of Hugo nominees and winners versus Goodreads ranking is misleading. You circled the recent nominees ranked higher than the winners but ignored the nominees that were ranked lower, an easy-to-miss bias. Also, does Goodreads skew the same as the general population? Would Amazon rankings be better?

  56. You also praise the Puppy slates for getting new people on the ballot without considering the good nominees they crowded off the ballot.
    I do think you are correct that this was more a Rabid Puppies win; they took the Sad Puppies 3 slate and expanded it to crowd others off the ballot, and theirs is a much worse slate. With this being a Rabid Puppies-dominated ballot, Connie Willis is right to focus on Vox Day.

  57. Interesting analysis.

    I’m a social scientist and long time fan (started in the late 1960s in grade school with Heinlein juveniles). I’m not going to engage with your overall argument, but I do have a methodological point: there’s quite possibly significant selection bias in the Goodreads data.

    I only do a Goodreads review on “old” books (e.g., read in the years before I joined Goodreads) if the book made a deep positive impression. So my ratings of “old” books definitely have a higher average rating than my ratings of books I’ve read more recently. Doesn’t mean my current reading list is lower quality, just means I haven’t taken the time to enter low scores for alllllllllllll the many books I’ve read.

    So if you looked at my data, you’d conclude that I gave higher scores to past Hugo winners than to last year’s Hugo winner, even though I thought it was a terrific innovative book that I really enjoyed.

  58. Social Scientist-

    there’s quite possibly significant selection bias in the Goodreads data.

    Yup. That’s a definite possibility, and it’s come up in the comments so far. The thing that’s interesting is that–if that theory were true–I’d expect to see significantly higher ratings for older sci-fi, and that’s not really the case. Some of the oldest works are the lowest-rated, for one thing, and there’s no evidence of any correlation between age and higher ratings that I can see.

    Just so I’m clear: I’m not saying that your theory is wrong! I haven’t even tested it in any way. I’m just saying that there are so many potential problems with the data that I didn’t even try to enumerate all of them. The two most frequent comments I’m getting are (1) why didn’t you use more statistical methods? and (2) are you aware of the problems with the data? The answer to the second is “yes,” and then that ends up being the answer to the first as well. :-)

    I’ll roll out the stats when I’ve got a data set that is worthy of it. In the meantime, this is all very informal and inconclusive. But I hope it still moves the discussion forward in a constructive way.
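
    For what it’s worth, that age-vs-rating check is a one-liner once the scores are in arrays. A sketch with invented numbers (the real data would be the winners’ publication years and Goodreads averages):

    ```python
    import numpy as np

    # Invented numbers: publication year and Goodreads average for five winners.
    years = np.array([1965, 1975, 1986, 2001, 2014])
    scores = np.array([4.21, 3.92, 4.05, 3.81, 3.62])

    # Pearson correlation between age and rating.
    r = np.corrcoef(years, scores)[0, 1]
    print("correlation between year and rating: %.2f" % r)
    ```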

  59. Two objections:

    The SF/Fantasy submissions run 70/30 M/F, not 80/20, throughout the field. I believe most slush readers will tell you that the female-written subs are, on average, higher quality than the male-written ones. (This may reflect lower self-esteem among female writers, so that they don’t send out or resubmit their weakest stuff, rather than inherently superior writing ability. I don’t know why it is.) Whatever the reason, straight submission ratios don’t provide a meaningful mapping onto what ought to be the best work in the field.

    There is also a confirmation bias problem with the goodreads ratings, potentially. Readers are more likely to rate highly a book that has already won an award (winners from ten to twenty years back) than a book that has not already been given such a stamp of approval (winners after goodreads was launched).

  60. Jonathan-

    Do you have any sources for your submission rates? I’d love to get more data on that.

    Whatever the reason, straight submission ratios don’t provide a meaningful mapping onto what ought to be the best work in the field.

    That’s certainly true, but they can help build a picture.

    There is also a confirmation bias problem with the goodreads ratings…

    Yup, that’s another issue. The data set I’m currently collecting has individual ratings with dates attached, so I’ll be able to test that one by looking at how the ratings for a work change before and after it is awarded or nominated (and controlling with some that aren’t awarded at all). Really, there’s just no way to do this properly without individual-level, dated ratings. And I’m getting that data. I just don’t have as much of it as I want yet, and I also don’t have the time to dig into it yet.

    Worth noting: it could go the other way, too. People might rate works lower once they’ve won a Hugo (or whatever) because it artificially raises their expectations. It’s all just theories until I can dig deeper. :-)
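
    To sketch the before/after test I have in mind: the individual ratings below are invented, but the work and the award date are real (Ancillary Justice won at the 2014 Hugo ceremony, August 17, 2014):

    ```python
    import pandas as pd

    # Invented individual, dated ratings for one real winner.
    ratings = pd.DataFrame({
        "date": pd.to_datetime(["2013-11-01", "2014-02-10", "2014-06-01",
                                "2014-09-15", "2015-01-20", "2015-03-05"]),
        "stars": [5, 4, 4, 3, 3, 4],
    })
    award_date = pd.Timestamp("2014-08-17")  # 2014 Hugo ceremony

    # Compare the mean rating before the award with the mean after it.
    before = ratings.loc[ratings["date"] < award_date, "stars"].mean()
    after = ratings.loc[ratings["date"] >= award_date, "stars"].mean()
    print("mean before: %.2f / mean after: %.2f" % (before, after))
    ```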

  61. Geoffrey-

    The category “neither”, by definition, does not overlap the other two categories.

    If a book came from any source other than SP3 or RP, then it came from a source that was neither SP3 nor RP. So, it’s not actually definitional that there would be overlap.

    Simple example, suppose there were three other slates, call them A, B, and C. Suppose at least one of them nominated Skin Game by Jim Butcher. Then you could actually say Skin Game was nominated by SP3, nominated by RP, and also nominated by at least one group that was neither SP3 nor RP. And so you’d fill in the center space.

    If I get some time, I can rename it to “Other” and re-upload, but my point was that there are three possible origins for a book: SP3, RP, and the collection of all possible sources that are neither SP3 nor RP. I think most people get that.
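
    In set terms, the diagram’s regions are just membership tests. A quick sketch (the slate contents here are abbreviated and partly illustrative, not the full lists):

    ```python
    # Titles are partial / illustrative, not the complete slates.
    sp3 = {"Skin Game", "Monster Hunter Nemesis", "Trial by Fire"}
    rp = {"Skin Game", "One Bright Star to Guide Them"}
    other = {"Skin Game"}  # hypothetical third, independent slate

    print(sp3 & rp)          # on both slates
    print(sp3 & rp & other)  # the center region: non-empty only if some
                             # genuinely independent source also picked it
    ```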

  62. Ivan. Thank you for posting out of context. :)

    JMS did not say “cancel the Hugos”. What he said (at the end of his posting you linked to) was:

    “It’s a binary decision, nothing more, nothing less.

    If the Hugos this year are legitimate, however much gamed, then hand them out and stop yelling.

    If the Hugos this year are not legitimate, then don’t hand them out.”

    IOW, either shut up about it OR cancel them.

  63. William K-

    uhm, you’re welcome?

    Frankly, your response is – well, kinda dumb and beside the point. Any partial quote is always “out of context,” and I provided a link to the whole piece.

    JMS has walked it back a bit, later posting that he didn’t really mean that – he just meant “if you think this, do this,” as you state. However, it’s pretty clear which side he comes down on, and no matter how much “context” you try to give it, the fact that the first analogy he comes up with compares the Sad Puppies to abusive partners doesn’t really speak well of his open-mindedness and objectivity.

  64. I can’t edit comments here, apparently, so I shall just post this:

    I’ll walk back the “kinda dumb” part of my previous comment. It was uncalled for and beside the point itself.

    I stand by the rest of my comment – JMS used weasel words to avoid going full-on anti-SP, but it was pretty clear where he stood, and his analogy, no matter how much context you give it, shows he’s not being neutral and objective.

  65. This analysis is consonant with my own. My reading is that voting was driven by the Rabid slate (which in turn reads very strongly as self-promotion on the part of VD), and that the Sad slate had little influence on the outcome except where there wasn’t a Rabid slate.

    I suspect, likewise, that the outliers in the last two years of Goodreads rankings represent more or less the same slates.

  66. Sorry to pile on about the issues with Goodreads data, but since I haven’t seen it mentioned, I’ll point out that, observationally, later books in a series on Goodreads almost always have higher ratings than earlier books, presumably in large part because the people who don’t like the earlier books self-select out of reading and rating the later books. It wouldn’t surprise me if the same thing happened with newly published authors vs. authors who have been writing for a while, although the data on that is probably harder to compare.

  67. What I find interesting is the parallel between your analysis and the complaints regarding the Oscars.

  68. Ivan, Thanks.
    LOL I was off and running elsewhere IRL shortly after I posted that. I came back hours later, and someone had posted the entire thing on their wall so it showed up on my feed. And then I finally remembered to come see what’s transpired here just now, and found you had provided it as well. Truly “ask and ye shall receive.” LOL

  69. Alan, in no way is Kowal buying No Award votes. She is serious about not telling people how to vote, but judging from the dim view she takes of Vox Day’s promise to make No Award win next year, she does not support any scorched-earth strategies.

  70. It’s untrue to say that Sad Puppies 2 had little effect last year. It had little effect on the winners, but quite a large effect on the nominees. It’s a bit tricky to count the ratio for categories like Dramatic Presentations and Related Book where there can be a lot of named individuals on a single nomination, so let’s stick to the four fiction categories and two editor categories. Those categories in 2014 were 53% male. But if we subtract the six nominees who were on the Sad Puppies 2 slate (five male and one female), that becomes 46% male. And if the people who were displaced by slate candidates had been on the ballot, it would have been 43% male. The Sad Puppies had quite a large effect on the gender balance of last year’s nominations, which is the main reason for the uptick in male nominees that you noted in 2014.
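
    Those percentages are easy to sanity-check. A back-of-the-envelope version (the finalist counts are inferred from the percentages, not taken from official data):

    ```python
    # Back-of-the-envelope check; 32 finalists (17 male) across the six
    # categories is inferred from the percentages, not an official count.
    total, male = 32, 17
    print("all finalists:   %.0f%% male" % (100.0 * male / total))
    # Drop the six SP2 slate nominees (five male, one female):
    print("minus SP2 slate: %.0f%% male" % (100.0 * (male - 5) / (total - 6)))
    ```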

  71. This was an interesting article and well worth reading, even though your own anti-leftist and anti-feminist agenda is pretty evident and I disagree with you 100% there. There are certainly no communists on either slate, and you seem to object to the views of (gender) feminists, while taking most claims by the Sad Puppy side at face value.

    You assert that Sad Puppies have legitimate points (because bringing new people on the ballot is good) but don’t address the problem with their successful attempt to force a “diverse” slate of their own choosing on the ballot. There may be liberal/female/whatever writers on the SP slate, but why should Brad Torgersen, Larry Correia, and their allies have the right to decide which liberal/female/whatever writers get there? For me, the annoying thing about this is that the only liberal/female/whatever writers there are ones that were approved by a small (more or less conservative-minded) clique.

    I do appreciate your wishes for more constructive discussion and less confrontation.

    The problems with the data have been pointed out several times, and you seem to agree for the most part:

    1) Goodreads ratings are problematic, because the service hasn’t been around for very long. If we could see the Goodreads score that Dune got 1-5 years after publication, then we could compare it with something published 1-5 years ago.

    2) The submission ratios only tell us what is submitted to publishing houses, while the important male/female ratio here would be that of the works that the publishers actually publish.

    Despite the problems, it was interesting to see the data and what you made of it. You’ve obviously put much effort into this.

  72. spacefaringkitten-

    Thanks for the feedback and kind words. Just wanted to respond to a couple of points:

    your own anti-leftist and anti-feminist agenda is pretty evident and I disagree with you 100% there.

    Well, never let it be said that I had a hidden agenda! I believe in letting people know where I stand, but also trying to be fair and honest and as objective as I can.

    You assert that Sad Puppies have legitimate points (because bringing new people on the ballot is good) but don’t address the problem with their successful attempt to force a “diverse” slate of their own choosing on the ballot.

    I think it was absolutely a mistake for the Sad Puppies slate to recommend 4-5 individuals for any category, because that led to them sweeping categories, and that is a very bad idea. They should be trying to add new faces, not take over the entire process. But what you have to keep in mind is that this was accidental. They really had no idea that they would get so many works on, and I believe that they would do things differently if they could. The first part (that it was accidental) has been stated by Correia and Torgersen publicly. The second, that they would do it differently if they could, is my own assessment. So I really do agree with you that sweeping entire categories of finalists is a bad, bad idea. And I have been pushing (in public comments at Correia’s blog in particular) for them to proactively take steps to avoid that in SP4 (which is already planned and will be led by Kate Paulk).

    Despite the problems, it was interesting to see the data and what you made of it. You’ve obviously put much effort into this.

    Thanks, I did. And I’m hoping to keep going and add some more data and analysis down the road.

  73. As a complicating factor, I think Megan Grey’s story “Tuesdays with Molakesh the Destroyer” was ruled ineligible due to having been published in early January 2015 rather than late December 2014.

  74. A point on your analysis of the gender divide in the awards versus the numbers submitted to Tor UK to be published – the awards are not just science fiction, but would also include the urban fantasy/paranormal romance/epic fantasy category.

    Annoyingly, since I think horror would seldom get on the ballot, the final breakdown of 32% to 68% might or might not reflect the total pool for Hugo nomination purposes.

    Secondly, I think your figures include the short fiction categories as well, while I expect the Tor figures would include mostly novels.

  75. But what you have to keep in mind is that this was accidental. They really had no idea that they would get so many works on, and I believe that they would do things differently if they could.

    That’s true.

    And I have been pushing (in public comments at Correia’s blog in particular) for them to proactively take steps to avoid that in SP4 (which is already planned and will be led by Kate Paulk).

    As far as I can see, Paulk has been very adamant that there was nothing wrong with how things rolled this year. We’ll see what happens.

  76. You won’t find nominee data for years before 1959, because that was the first year of the two-stage selection. In previous years (1953, 1955-58) people voted for their (one?) favorite, and the winner was the one with a plurality of votes.

  77. Going back to the discussion of gender numbers of writers and winners, what seems to be missed, and I don’t understand why, is the reading population itself. What percentage of readers are male or female depends to a large degree on the genre itself. Romance, many more women than men. Mystery, more women than men, but not by as large a percentage. Non-fiction, about equal, but leaning slightly to men. General fiction (think best-seller list, non-genre), more women than men. SFF, many more men than women readers. If more men read science fiction, is it any surprise more men write it? No. Writers tend to write what they enjoy reading. As more women read SFF, more will write it, and more of what they write will be good and recognized as such. It’s those submission stats that tell the story.

    There’s really no need for “sides,” political agendas, or labels in any of this, and no place for them in a discussion about the best books published in a given year. Want to talk about good plotting, strong characters, beautiful writing, sense of place, thrilling storytelling, memorable endings? Sure, I’m for that. But sides and labels? Nope.

  78. I have many statistical objections to your analysis, but before I can even get to those my biggest difficulty is that you conflate samples taken from different populations. The Hugos belong to Worldcon; there is little reason to expect that Hugo nominations should track GoodReads ratings since Goodreads users are not representative of Worldcon voting members, nor should they be. Worldcon voting is for hard-core fans and they have *always* been non-representative of the market.

    Also, there is considerable reason to believe that Goodreads raters of Hugo winners of five or ten years ago are different than Goodreads raters of recent Hugo winners — reflecting the cumulative effect a Hugo award has, over several years, on people seeking books to read.

    Likewise I seriously doubt that Tor UK submissions are particularly representative of Hugo material. Publishers favor safe manuscripts over risky ones which is why there is so much “me too” crap on the shelves whether that’s rock-ribbed military space opera or sparkly vampire romance. Awards on the other hand tend to go to unusual works. Using your reasoning sparkly vampire romances were grossly underrepresented in the 2008-2012 Hugo awards — which on a purely numeric basis they were.

    Unless you can show you are sampling between comparable populations, none of the comparisons you make have any validity at all.

    But even if we skip over the sampling shortcomings in the juxtaposition you’ve proposed, and the almost certain lack of statistical significance of the 2008-2012 “trend”, I have an epistemological problem with your way of interpreting the data. The most numbers can do is give you a probability that a certain set of events is an anomalous sample. But even solid evidence of an anomaly doesn’t favor any one explanation for that anomaly over any other possible explanation. To choose one explanation over another you need a corroborating independent line of evidence.

    The SPs have taken what they see as a statistically anomalous run of events and they’ve jumped to the conclusion that it’s the result of electioneering by a cabal of SJW activists. Fine — that’s *possible*, but where is the corroborating evidence for the operation of such a cabal? Had such an SJW campaign been in operation, surely we’d see evidence for it, in fact it’d have left the same kind of public footprint the SP and RP campaigns have.

    Give me evidence — even weak evidence — and I’d say the belief at least isn’t paranoid. But patterns and anomalies happen all the time, often purely by chance. There’s no question that Sagittarius looks like a teapot, but it’s not because God is encouraging us to drink tea.

    Don’t get me wrong — I think tastes *are* changing, even in part because of the actions of social justice activists. But I have seen zero evidence of actual Hugo electioneering by social justice advocates.

  79. I’m afraid that I, like Geoffrey, stopped reading at the Venn Diagram.

    If you’re always going to get a 0 at the central point of the Venn Diagram in any possible circumstance, then a Venn Diagram is not a good way of analysing your data.

  80. >I have seen zero evidence of actual Hugo electioneering by social justice advocates

    I don’t know if this is what you are asking for, but for what it’s worth, here are some social justice activists saying that “people are openly talking about Affirmative Action, about deliberately trying to read and nominate books by POC, with the ‘by POC’ coming first before anything else”. Author N K Jemisin agrees that this is the case and justifies it because of “the favoritism shown to white male authors, and the exclusion of non-white-male authors”

    http://nkjemisin.com/2015/04/not-the-affirmative-action-you-meant-not-the-history-youre-making/

  81. And one question: supposing that the SP agreed that sweeping the nominations is a bad thing, what should their SP4 strategy be: a recommendations list with more or with fewer than 5 candidates per category (maybe 2?)?

    Having fewer than 5 would ensure that there was no sweep, but both strategies have a problem: VD can become the protagonist again. If the SP have a list with fewer than 5 candidates, he can use those and add some more up to 5, and publish it as his RP list. If it has more than 5, VD can choose a subset of 5 and publish it as his RP list. Either way, there may be a sweep again and we’d be back to a similar situation.

  82. The complaints about the Venn diagram are bizarre and smell more of confirmation bias (looking for any reason, no matter how small or irrelevant, to reject the whole argument) than any actual, substantive critique.

  83. Hampus, the tweets from Larry Correia you link are him reaching out to a journalist in a sympathetic media outlet who ran a news story. God knows he needed it, after all the character assassination being published.

    The one from VD is talking about assertions made about SP and GG being in league. Shouldn’t the alleged conspiracy to attract the attention of GG have happened before the accusations?

  84. AG: By reaching out to Yiannopoulos, he was also de facto reaching out to gamergate. He even said that in his tweet. The reason he thought the media outlet was sympathetic was that it was sympathetic to (actually more like lobbying for) gamergate.

    The tweet from Beale is just one of many. Beale was and is an active gamergater and is continuously tweeting about his rabies campaign to the gamergaters.

    So there has been contact between the puppies and the gamergaters since before the slates were published. That can’t be denied. However, if you read the gamergate forums, you will find that many gators aren’t sympathetic. Which might be why the climate is still rather civil (compared to GG, that is).

  85. Also, I think it is important to note that the person Correia reached out to was not a science fiction fan. Quite the opposite. This is the stuff he used to write:

    “It isn’t hard to imagine why Elliot Rodger struggled with girls. He was a pretty boy but self-conscious, theatrical and obviously disturbed. Despite his own protestations, he was an archetypal beta male: insecure, socially awkward and obsessed with the fantasy worlds of video games and science fiction movies.”

    http://www.breitbart.com/Breitbart-London/2014/05/27/virgin-killer-was-not-a-misogynist-but-a-madman

    Beta males.

    And it is him Correia reached out to, specifically saying he wanted outsiders to vote on the Hugos. He went for a culture war, ignoring whether someone cared about SF or not. Yiannopoulos was known for hating nerd culture, games, and science fiction. But being an opportunist, he made himself part of the gamergate crowd.

    So yes, he wanted to get the gamergaters to help. But they went to Beale instead. Not very surprising.

  86. Thanks for all the work you did on this; it was an interesting read. I have some concerns with the Goodreads data, but other posters have already mentioned them, so I won’t repeat them. I do wonder, when you do your deeper dive into Goodreads, could you look to see if there are any systematic biases for subgroups, e.g. do paranormal romances generally get rated higher than space opera? That kind of thing.

  87. A comment on “finding 2” — gender breakdown. You are right that this is merely unexplained data, and more evidence is needed to conclude anything interesting from it. But please note that it is *not* evidence for fairness any more than it is evidence for bias. If the 80%/20% split is men and women self-sorting according to their interests, then sure, that would be a natural interpretation. But, to the extent that the lack of women in the field is a consequence of social pressure, the women whose interest is strong enough to overcome that pressure are likely to be of a higher calibre than the men. Certainly I have found that to be true of women who are successful in male-dominated scientific and technical fields. They are far more likely to be brilliant innovators, at the top of their field, and capable and creative enough to carve out interdisciplinary niches for themselves even in organizations that are structured almost exclusively along disciplinary boundaries.

  88. bookworm1398-

    I will certainly try, but one of the tricky things is actually trying to categorize by genre. There’s no objective basis for that, and so it gets messy when you try to do cross-genre comparisons.

  89. Hi,

    Biology background here (and biology = statistics more so than many other branches of science), so I’m probably biased, but I consider your post to be far more valuable to this discussion than pretty much everything else written by both sides put together.

    Point of information: as another measure of a book’s popular appeal, have you considered using their average rating on Amazon as a metric? It would have some of the same problems as GoodReads (as well as a couple of novel Amazon-only problems: for example, some of the older books may not be listed on Amazon at all), but it could serve as a check on the GoodReads figures for at least the more recent books.

    Matt

  90. Ashley-

    I was just being honest that I did not know who Gerrold was at the time that he was brought up. I know who he is now. I learn new things all the time. :-)

  91. I am on this ballot for Best Related Work. Thus, I’ve got a dog in this fight. Vox Day is my editor and publisher. I pretty much have no points of philosophical agreement with him – I’m in the “Don’t like what you say, but you have the right to say it…” school.

    For this year’s Hugos, my statement is thus:

    Read the works. Vote your conscience. In that order.

    For next years, I have a project that I need some assistance with.

    I want to build a website that tabulates every publication in SF/F (novels, digest-size, web magazines, anything else that’s eligible) by data-scraping publishers’ websites.

    I never want to be in a situation where there isn’t a good generalized list to work from come January 20th for Hugo nomination times again.

    If anyone’s good at automated data scraping, please contact me via Nathaniel and I’ll help scope out the project.
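
    To make the ask concrete, here is the kind of skeleton I have in mind. The publisher page and the CSS selectors are invented; each real site would need its own parsing rules (and some will need manual submission instead):

    ```python
    import requests
    from bs4 import BeautifulSoup

    # Invented example entry; every real publisher needs its own URL and rules.
    PUBLISHER_PAGES = {
        "Example Press": "https://press.example.com/2015-releases",
    }

    def scrape_titles(publisher, url):
        # The .release/.title/.author selectors are placeholders; some sites
        # will have no scrapable listing at all.
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        for entry in soup.select(".release"):
            yield (publisher,
                   entry.select_one(".title").get_text(strip=True),
                   entry.select_one(".author").get_text(strip=True))

    for publisher, url in PUBLISHER_PAGES.items():
        for record in scrape_titles(publisher, url):
            print(record)
    ```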

  92. Ken Burnside, I think that’s a great idea. And I know at least one well-known sf writer has been wondering lately if any such list could be made. I think that it’s inevitable that any such scraper would overlook the output of some new and small publishers; one solution is to allow the list to be updated by anyone with a submission form, so that small publishers could go and manually enter any qualifying works that the scraper misses.

    Nathaniel:

    “Actually, I don’t have the nominees for some of the earliest years….”

    Apologies if someone’s already mentioned this, but there were no nominees in the earliest years; early Hugos used a one-stage voting process, rather than a nominee stage and a final vote stage.

    I think Matthew Leo’s objections are extremely worthwhile, and show that there is no hope of getting relevant data out of Goodreads.

    In addition, you mention briefly that the Goodreads data isn’t a measurement of quality, just of popularity (you then assume, without support, that popularity among people who write Goodreads reviews is a good measurement of popularity in general). But once you admit that Goodreads is not a measure of quality, then all the Goodreads data becomes irrelevant. As far as I know, no one of significance is claiming that the Hugos are, or should be, given out to books based on popularity without regard to quality, or that it would be wrong for a higher-quality book (in the eyes of Hugo voters) to win over a best-seller.

    You also argue that the Goodreads chart “depicts clearly the reasons why SP came into being in the first place.”

    But even assuming the Goodreads data is meaningful, it still only shows that in recent years, Hugo winners have been less popular than some Hugo nominees. Have either the SPs or the RPs been arguing that the problem is that the winners of recent Hugo Awards have been less popular than the nominees they won against? I haven’t seen that argument made (and I’ve read a lot of SP arguments, mostly by the organizers). It seems wrong to suggest that this is “the reasons” for SP in the first place, when they’ve made other arguments more often and more prominently.

    Your earlier summary of SP arguments (“in recent years the Hugo awards have become increasingly dominated by an insular clique that puts ideological conformity and social back-scratching ahead of merit”) is a fair characterization of what I’ve seen many Puppies claim, but it’s also clear that the Puppies are discussing the entire nomination process, and which books get nominated, rather than saying that the problem is that the nominations process is working well but less popular nominees are winning.

    In fact, accepting Goodreads data for argument’s sake, your graph shows that in recent years the nominations process has done much better at including highly popular books than is typical for the Hugos. I don’t have the numbers, so I’m just eyeballing, but it appears from your graph that in recent years the average popularity of Hugo nominees has been significantly higher than during any other period, apart from a few years in the late 1990s.

    Far from supporting the SP case, your chart strongly undermines it, by showing that the books nominated in recent years are more popular than is typical for Hugo nominees.

  93. Barry-

    Some of your concerns are legitimate, but others misunderstand the analysis. The most legitimate one is conflating Goodreads popularity with general popularity. That remains to be examined in more detail. However, the sheer volume of ratings suggests that it is quite likely that Goodreads data is fairly representative. Again, this needs to be examined further, but when you’re dealing with tens of thousands, hundreds of thousands, or even millions of ratings it would be rather difficult to have those ratings differ in very significant ways from the popular perception, because a sizeable portion of the reading public (not all, obviously) appears to be rating these works. But you’re right: it’s still just an unproven assumption.

    But once you admit that Goodreads is not a measure of quality, then all the Goodreads data becomes irrelevant.

    That’s not at all true, as we’ll get into in a moment. But for now, let’s just say that Goodreads data is an indication of general taste.

    But even assuming the Goodreads data is meaningful, it still only shows that in recent years, Hugo winners have been less popular than some Hugo nominees.

    In other words, it shows that in just the last few years, the tastes of Goodreads reviewers and the Hugo selection committee have diverged sharply. This is strong circumstantial (not conclusive, just circumstantial) evidence of the SP’s main complaint: that an elite clique has taken over the process and changed the kinds of books that win.

    but it’s also clear that the Puppies are discussing the entire nomination process, and which books get nominated, rather than saying that the problem is that the nominations process is working well but less popular nominees are winning.

    This is another one of your legitimate points, and I believe I addressed it in the OP. The only way to address it, however, is to look at what books aren’t being nominated. More on that in the next line:

    In fact, accepting Goodreads data for argument’s sake, your graph shows that in recent years the nominations process has done much better at including highly popular books than is typical for the Hugos.

    Actually, it does no such thing. And the reason is the point I mentioned earlier: we don’t know anything about the eligible books that are not being nominated. I understand where your intuition is coming from: the scores of nominated books (even if they don’t win) are higher. So… nominees are better, right? This is not a reasonable conclusion because we don’t know if the non-nominated works are even higher. Without making that comparison, we can draw absolutely no conclusions, one way or the other, about the accuracy of the nomination process with respect to Goodreads taste.

    You have to have info about winners and losers to say anything about the process. We have nominees (losers) and winners, so we can say something meaningful about that process relative to overall tastes, and in that case we find a large divide in the last few years. But since we don’t have data (yet) on non-nominated works, we can’t draw a conclusion about the actual process of non-nominated works (losers) vs. nominees (winners, in that scenario). Make sense?

    So your conclusion that the chart undermines the SP case is based on an understandable but unwarranted assumption. It may still be correct. If we find that the non-nominated works stay relatively constant in quality while the nominated works increase (we know that part already), then your hypothesis will be borne out. But it’s too quick to make that guess yet.

  94. It kills me that even when this article begins to talk about diversity, it goes into men and women. Which is code for white men and women, not actually everyone else. When you break it down by how underrepresented people of color are relative to those “diverse men and women,” the numbers, the statistics, get worse…

  95. Nathaniel –

    Thanks for your response. I appreciate your civility, and I’ll try to return it.

    First of all, in your response, you constantly conflate popular with better, even when you’re summing up my arguments (e.g., “So… nominees are better, right?”). I wish you’d stop doing that; quality is not synonymous with popularity.

    but when you’re dealing with tens of thousands, hundreds of thousands, or even millions of ratings it would be rather difficult to have those ratings differ in very significant ways from the popular perception

    A non-random sample is not generalizable to a general population, and it doesn’t magically become generalizable just because it’s large. Beyond a minimum threshold, sample size is much less important than how the sampling was done. For example, a random sample of 1000 would provide more reliable results than a non-random, non-representative sample of 1,000,000. [*]

    [*] If the non-random sample is very close to being the entire population, that could change things; but that’s not the case here.
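
    A toy simulation (all numbers invented) makes the point: a self-selected sample of hundreds of thousands can miss the true mean badly, while a random sample of 1,000 lands close to it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # One million readers whose true mean rating is 3.5 stars.
    population = rng.normal(3.5, 1.0, size=1_000_000)

    # Self-selected sample: the more a reader liked the book, the likelier
    # they are to bother rating it (logistic response curve).
    p_rate = 1.0 / (1.0 + np.exp(-(population - 3.5)))
    biased = population[rng.random(population.size) < p_rate]

    # Plain random sample of 1,000.
    random_sample = rng.choice(population, size=1_000, replace=False)

    print("true mean:            %.3f" % population.mean())
    print("self-selected (n=%d): %.3f" % (biased.size, biased.mean()))
    print("random (n=1,000):     %.3f" % random_sample.mean())
    ```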

    In other words, it shows that in just the last few years, the tastes of Goodreads reviewers and the Hugo selection committee have diverged sharply.

    There’s no such thing as “the Hugo selection committee.” There’s just the Hugo voters. This is not a minor distinction.

    And given how many problems there are with this data, it’s not true that your graph “shows” anything. Now, you can get around that by saying “My attitude is simply that we have to start somewhere, and here is my start” and “this is a blog post, not a submission to Nature.”

    That’s completely fair. Let’s put a pin in that for a moment.

    Actually, it does no such thing. And the reason is the point I mentioned earlier: we don’t know anything about the eligible books that are not being nominated. […] This is not a reasonable conclusion because we don’t know if the non-nominated works are even higher.

    Either you’re saying that we can draw tentative conclusions from inadequate data because it’s “this is a blog post,” or you’re saying that we can’t do that. But what you’re doing now is trying to have it both ways. When inadequate data supports your preferences, then you make tentative conclusions, but when inadequate data cuts against your preferences, suddenly we can’t make tentative conclusions because the data is inadequate.

    (Didn’t I do the same thing? No, I didn’t – I specified that I was only “accepting Goodreads data for argument’s sake.” I wouldn’t actually claim that this data has any value at all. But if we are treating inadequate data as if it has value because this is just a blog – and that’s totally fair – then you can’t dismiss conclusions that cut against your bias merely because the data is inadequate.)

    The data you have implies that nominees are more popular with Goodreads readers in recent years than in prior years (apart from the late 1990s). Yes, further data might undermine that finding – but that’s just as true of the tentative conclusions that you favor.

    Finally, to repeat a point that (with all due respect) I don’t think you adequately addressed – the Puppies have been claiming that either the entire process, or the nomination process, is corrupt and controlled by (to use their insulting term) SJWs.

    I haven’t seen a single Puppy arguing that nominations are as valid as ever, but winners have recently ceased being valid. In fact, the example Puppy supporters have constantly used to argue the Hugos are corrupt is “If You Were A Dinosaur, My Love” – a nominee, not a winner.

    However, your graph does not indicate that nominees are any less popular in recent years than they’ve ever been.

    (If you don’t believe me, try remaking your chart, turning all of the red squares into blue diamonds, so that it will be a chart only of Hugo nominations. There would be no sign of any decline in popularity at all.)

    Since we know the Puppies primary complaint is about which works are nominated, and since your graph fails to show any reduction in popularity of Hugo nominees, your chart cannot be said to support the Puppy argument.

    (This remains true even if you choose to ignore the evidence that recent Hugo nominees are in fact more popular than they have been in the past; so refuting that point does not refute my argument.)

    Thanks again for the discussion.

  96. Randa-

    I think your concern is 100% valid. Unfortunately, as I wrote in the original article itself, data on gender is much, much easier to acquire than data on race or religion or sexuality. That’s the only reason that I focused on it in this piece. If you know of any good sources of data on other attributes, please let me know and I’ll be happy to include them in future analysis.

    It’s not just ease of access to data, I might add. Because the data on sexuality, religion, race, etc. is not easily available, I am concerned that attempts to collect it by (for example) crowd-sourcing may result in infringements of privacy. So it’s an important issue to raise, but also a very tricky one to address.

  97. Barry-

    A non-random sample is not generalizable to a general population, and it doesn’t magically become generalizable just because it’s large…

    Actually, it does. The bigger a sample becomes, the less the sampling technique matters. At the extreme end, you sample the entire population, and then the sampling technique doesn’t matter at all, because you’ve sampled everybody.

    The data I collected included not just the average score but also the number of ratings. Typical Hugo winners had 20,000+ ratings. Depending on the book, that’s a pretty significant fraction of the people who read it.

    This doesn’t make the problem go away entirely, but it is important to keep in mind. In the future I’ll compare Goodreads scores with themselves (e.g. how a Hugo winner rates before and after the nomination and award) and also with other data sources (e.g. Amazon). That still won’t answer the question conclusively, but it will help.
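
    To make the before/after idea concrete, here is a minimal sketch of the comparison I have in mind. The file and columns (ratings.csv with title, stars, date) are hypothetical stand-ins for per-rating data that would have to be collected book by book:

    ```python
    # Sketch only: compare a single winner's mean rating before and after its
    # award date. All names here (file, columns, title, date) are placeholders.
    import pandas as pd

    ratings = pd.read_csv("ratings.csv", parse_dates=["date"])
    award_date = pd.Timestamp("2014-08-17")  # placeholder announcement date

    book = ratings[ratings["title"] == "Some Hugo Winner"]
    before = book.loc[book["date"] < award_date, "stars"].mean()
    after = book.loc[book["date"] >= award_date, "stars"].mean()
    print(f"mean rating before award: {before:.2f}, after: {after:.2f}")
    ```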

    The data you have implies that nominees are more popular with Goodreads readers in recent years than in prior years (apart from the late 1990s). Yes, further data might undermine that finding – but that’s just as true of the tentative conclusions that you favor.

    I’m really not sure that it does. Aside from a couple of outliers, the nominees seem fairly consistent. This actually still goes toward your point, however, which I will discuss below. I think that point (below) is by far the most important criticism you levy.

    Since we know the Puppies’ primary complaint is about which works are nominated, and since your graph fails to show any reduction in popularity of Hugo nominees, your chart cannot be said to support the Puppy argument.

    I think your dismissal is a little too strong, but it does have merit. There are really two processes in place.

    Process 1: Who gets nominated?

    Process 2: Given the nominations, who wins?

    If there’s bias, I think it should be expected to operate at BOTH levels. So we should expect to see bias in the nominee phase and in the winners phase. My chart shows evidence of bias in the winners phase. It does not show bias in the nomination phase.

    I still believe this is suggestive, but the only logical thing to do is to try to evaluate the nomination phase. I tried to run that analysis for this post, but ran into data problems. (I’m working through those.) When I do, I will post the results. My hypothesis is that we’re going to see an uptick in the scores and/or the number of eligible works that didn’t get nominated. I believe there will be evidence of bias in the nomination phase. The chart doesn’t rule that out.
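
    In code terms, the planned check looks something like the following sketch. Both input files (nominees.csv and eligible.csv, each with year, title, and avg_score) are hypothetical; building the eligible-works list is exactly the data problem I’m still working through:

    ```python
    # Sketch of the nomination-phase comparison: per year, mean Goodreads score
    # of nominated works vs. eligible works that were never nominated.
    import pandas as pd

    nominees = pd.read_csv("nominees.csv")   # year, title, avg_score
    eligible = pd.read_csv("eligible.csv")   # year, title, avg_score

    merged = eligible.merge(nominees[["year", "title"]], on=["year", "title"],
                            how="left", indicator=True)
    non_nominated = merged[merged["_merge"] == "left_only"]

    by_year = pd.DataFrame({
        "nominated": nominees.groupby("year")["avg_score"].mean(),
        "not_nominated": non_nominated.groupby("year")["avg_score"].mean(),
    })
    print(by_year)  # bias would show up as the gap shifting in recent years
    ```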

    But you have my word that I will post the results no matter which way they go.

  98. I just came across this, but it’s another possible explanation for why recent Hugo winners have been rated lower than recent nominees among Goodreads users. (Apologies if someone here has already linked to this.)

    The researchers use reader ratings on the user-generated book review website, Goodreads, to evaluate readers’ opinions of books before and after they win awards. Sharkey and Kovács analyze thousands of reader reviews of 32 pairs of books. One book in each pair had won a prestigious award, including the Booker Prize, National Book Award, or PEN/Faulkner Award, while the other book had been nominated but hadn’t won. The research reveals a trend: “Winning a prestigious prize in the literary world seems to go hand-in-hand with a particularly sharp reduction in ratings of perceived quality,” write the coauthors. […]

    They find that before an award is announced, the predicted ratings of a book about to win are equivalent to the ratings of a book about to lose. But after the award is announced, that changes: award-winning books have lower predicted ratings than books that don’t win. “This is direct evidence that prizewinning books tend to attract new readers who wouldn’t normally read and like this particular type of book,” says Sharkey.

    Another reason that people may be more likely to negatively review an award-winning book is that popularity sparks a backlash. When books become trendy quickly, the researchers argue, the reader may feel less special, so may rate those books less favorably. The researchers created a statistical model of this situation, and according to Sharkey, “the negative effect of winning a prize vanishes after we’d accounted for the effects of shifting reader tastes and the rise in popularity.”

    There remains the question of why this “awards equal less popular on Goodreads” effect only applies in recent years. However, I can think of plausible explanations for this based on the different sorts of readers who choose to rate current award winners, versus those who rate older favorites (“older” in this case means before Goodreads began).

  99. Barry-

    Yup, someone else linked to that article earlier. FWIW, I agree 100% with your conclusion, including the plausibility of this only applying to recent awards. In fact, it’s kind of obvious. Goodreads started in 2007, I believe, so any book awarded prior to that point can’t show a relative downward spike because no one could review those books in their unawarded state.

    We’ll just have to see what I can find.

  100. I hadn’t yet read your most recent comment, when I posted the link to the info about Sharkey and Kovács. If I had, I would have realized that you already knew about Sharkey and Kovács. :-)

    If there’s bias, I think it should be expected to operate at BOTH levels.

    I’d question the term “bias,” because not all bias is objectionable. If Hugo voters genuinely like gender-bending novels better than military sci-fi (which certainly seems to be the case, although there are of course novels that are examples of both), that is a form of bias, but not a form of corruption. And it’s only corruption that’s reasonably objectionable, I think.

    But anyone gaming the system in a corrupt manner would probably be more detectable at the nominations level, because of the greater number of voters in the final stage. (It’s about 2000 versus about 3500, if my memory is correct.)

    So we should expect to see bias in the nominee phase and in the winners phase. My chart shows evidence of bias in the winners phase. It does not show bias in the nomination phase.

    Thank you very much for acknowledging this.

    I can’t imagine how you’re going to gather a consistent sample of non-nominated books for all those years. It’s an interesting problem, and I’ll be curious to see how you approach it.

    Thanks as well for promising to share your results.

  101. Nathaniel

    Kudos to you for your analyses.

    I used to do stats for a living. I heard you when, up front, you said these were back-of-the-envelope figures drawn from available sources, and I read the results in that light. These results will tell you where to look in the next round. If there is one.

    As for the Goodreads data, back-in-the-day I discovered that collecting the right data consumed 85% of my time. That came after I found the source of the right data. And I was getting paid.

    All these nitpickers throwing rocks at you because you used the Goodreads data . . . yeah, well, I saw that you knew those problems going in. I also understood what they did not: having ANY independent data source was a boon. BTW I don’t think they saw all the work that went into collecting the data for and producing that chart, because it came to them as a fait accompli. But I did.

    You done good.

  102. Perhaps it is not well known, but J. Michael Straczynski (Babylon 5) is extremely political. When the diagnosis of President Reagan’s Alzheimer’s disease was made public, he posted on rec.arts.sf.tv.babylon5.moderated that he was delighted to hear it and hoped that it would be a very painful time for the president and Nancy. When given a couple of chances to soften his stance, he doubled down on his hatred.

  103. Thought experiment: How do the Sad Puppies nominations stack up in terms of Goodreads scores? I know they have a smaller sample size, and I haven’t done all of them, but it looks like they fall into a range pretty similar to the one the Hugos fall into. So are they really better at judging what is good than regular Hugo voters?

  104. Thought experiment for you, Andrew. Are they substantively worse? No. Are they substantively different from the usual array? Yes. They aren’t overwhelmingly prior nominees, whereas last year’s ballot, for example, shows a startling number drawn from the same crowd. Are the SP-suggested nominees themselves diverse? Yes. Sex, sexual orientation, skin color, religion, politics – taken over those 5 metrics they’re far more diverse and more representative of the English-first-language world than last year’s. Does the evidence suggest that there are major problems in ‘business as usual’? Yes. (PNH knew the makeup of the novel category on 25/3 – long, long before the names were un-embargoed – something he could simply not know by any honorable or honest means. He basically had to know who was expected to make the ballot, and those nominees had to have told him they did (a breach of the rules) or did not, or he had to have been informed by the Hugo administrators (a breach of the rules again).)

    So why are so many people so eager not to see change? And why are the loudest voices those either implicated (being part of the clique that had foreknowledge of the results, and therefore ‘knew’ whom they expected to win, and were willing to breach the rules and inform others of that win or non-win) or the historical major beneficiaries of the status quo?

  105. Now Popular Science and NPR’s “On The Media” have joined in the chorus of “Sad Puppies are all white males who hate diversity” and are repeating the same lies as the EW article (and adding more). No one seems to have noticed or cared about EW’s “epic” correction.

    I’ve had friends on fb who aren’t science fiction people link to and angrily rant about the evilness of the SPs. I think it’s pretty clear the SPs are losing, and that’s a shame. There seems to be a rather well-coordinated media blitz from the SJW wing, and unless there’s some serious pushback, SP is in serious trouble.

  106. Ivan-

    So, the Popular Science article was disappointing but not surprising. (Link here, if anyone is curious.) Then I went to check out the NPR On the Media story and saw that their one and only guest was Arthur Chu. Seriously? (Again, here’s a link.)

    You might be right, Ivan. This is disappointing.

    Then again, when I noticed that Arthur Chu had written at least two of the articles (I think he did Salon and the Daily Beast), I realized how incestuous the relationship was between anti-Sad Puppy critics and the media. They’re basically the same group of people.

  107. Ivan, it’s not true that they are “repeating the same lies as the EW article.” Neither one of those stories repeats EW’s retracted claim (which was that the entire Puppy slate consisted of white male nominees).

    I think it’s pretty clear the SPs are losing, and that’s a shame.

    I think the puppies are losing and that’s delightful.

    There seems to be a rather well-coordinated media blitz from the SJW wing, and unless there’s some serious pushback, SP is in serious trouble.

    Is there any evidence of “a well-coordinated media blitz,” as opposed to a bunch of reporters who have read earlier media reports and this influences their reporting, without any coordination required?

    In the end, whatever the media says, winning or losing the Hugo Award fight is going to depend on what Worldcon members do, not on how the media frames this story. For that reason, I think it’s been a strategic error for the Puppies to spend so much time sneering at and denigrating Worldcon members, and alleging conspiracies that they can’t prove.

    Nathaniel:

    They’re basically the same group of people.

    Nonsense. The vast majority of anti-Puppy critics are not in the news media. GRRM, by far the most prominent Puppy critic within the SF community, isn’t in the news media. Neither are the “Making Light” folks, neither is Connie Willis, neither is Kevin Standlee, etc., etc.

  108. “Neither one of those stories repeats EW’s retracted claim (which was that the entire Puppy slate consisted of white male nominees).”

    Those weren’t the only lies in the EW article.

    “Is there any evidence of “a well-coordinated media blitz,” as opposed to a bunch of reporters who have read earlier media reports and this influences their reporting, without any coordination required?”

    Yes, but you’d explain it all away somehow. At this point, you’re in the territory where confirmation bias has taken over, so there’s little point in trying to explain it to you.

    But even if there’s no specific person or persons coordinating it all behind the scenes, consider the facts: Arthur Chu has two articles and gets the NPR interview (while somehow no pro-SP voices are interviewed); the author of one of the original Guardian articles in the initial anti-SP blitz (where several too-similar articles all appeared around the same time) was also published by Tor (and connected to the Haydens through that); several of the article authors have connections to Scalzi; etc., etc.

    Yeah – the people pushing the anti-SP narrative in the media are basically the same people, and they’re getting the media to keep pushing the same narrative and to talk only to them. It’s coordinated and incestuous, but it’s working. Which you don’t mind, so of course you don’t care, really. Your protests are weak attempts to make your side look better than it really is, but as with the media mongering, they’re designed to influence the low-information types who don’t have the time or energy to find out the truth.

  109. Barry Deutsch – you say “and alleging conspiracies that they can’t prove.”

    Patrick Nielsen Hayden, March 26, 2015, 8:30 AM, quoted from “Making Light”:

    “* Regarding Best Novel: I’ve heard that three of the five finalists are SP-endorsed. (Which, see above, doesn’t in itself guarantee that any of them are unworthy of a Hugo.) I don’t know what any of those three books are. I do know the identity of the other two, and I don’t think anyone in this conversation will regard them as unworthy candidates. (Disclaimer: Neither of them are books Teresa or I worked on in any way.)”

    He was correct (although originally 4 of 5 were SP-endorsed; Correia withdrew to gainsay accusations that he did this for his own benefit). This was some nine days before the embargo was lifted.

    So: Barry Deutsch. How did PNH know? How did he know the identity of the other two? How did he know the SP slate had 3? The rules are clear and unequivocal about the embargo, and there is no evidence that those involved in the Sad Puppies slate breached them. (Provable fact: as one of the nominees, they screwed up contacting me, so I couldn’t possibly have informed anyone. Nor did anyone ask me.) So, Barry, how did he know?

  110. My speculation? He knew through the usual channels – ordinary word-of-mouth gossip. (Although I should admit straight off that I haven’t looked into this, so I’m just going by the information you’ve just given me). But hearing gossip about who is nominated AFTER the nominations are set – even if it’s not proper – is NOT the same thing as having any ability to control the outcome.

    I know from personal experience (I’ve been nominated for an Andre Norton award, which is given out at the Nebulas), and also from several people I know who are either writers or in the industry, that SF award nominees often tell – or at least drop hints – to their friends and editors and agents well before the embargo is lifted.

    Editors and publishers and agents, in turn, gossip with each other. (To some extent it’s their job to know, because they have to be ready to capitalize on nominations.) Because those folks know the most authors, they’re the most likely people to have put a complete picture of the “best novel” nominations together just from word-of-mouth.

    Frankly, in the two weeks before the Hugos are announced, “who will be nominated” is usually the biggest topic of industry gossip. Often the gossip gets it right, and sometimes it doesn’t.

    From what you say, PNH’s info was incomplete and/or wrong. If I’m following what you’re saying correctly, there were four Puppy nominees at the time he made his statement, and unless you think PNH and Correia were in cahoots,[*] there was no way for PNH to know in advance that Correia would drop out. And he didn’t know which books were the Puppy ones. So if his info wasn’t quite right, that fits with my theory that he was going by word-of-mouth gossip, not certain knowledge.

    But in any case, what this “proves” is nothing more than that people – including people who have been told they are on the list – gossip and share info earlier than they should. That’s hardly enough to prove the claims of a shadowy cabal of eeevvviiiil SJWs controlling who does and doesn’t get nominated.

    And by the way, if there is such a conspiracy, how come conservatives have been nominated for Hugos without any puppy help? And how come some of the most popular left-wing authors have never been nominated?

    [*] If they had been in cahoots, this would all be a much more interesting story, don’t you think?

  111. Barry, you seem to be missing the point with rare skill. It’s not that publishers and their cliques habitually break the rules and gossip and you’re fine with that (but furious with people who stick to the rules. Because… ungenteel). It’s: HOW DID HE KNOW THE OTHER 3 WERE SAD PUPPIES? If he’d said “I know who two are, but have no idea who the other three are” – that’s gossip. Unfair, cliquish, but not necessarily indicative of wrongdoing. As soon as he states that he knows the other 3 are SP-endorsed, you have a question that can only be answered by either the Hugo administrators leaking to a clique with a commercial interest in the award, or by his clique having block-voted in secret and therefore knowing whom to expect. The certainty he shows about this kind of thing only comes from knowing the normal outcome. I believe it can be tracked back to the Nebulas, where the logrolling was open (it’s a matter of record) and many of the same people were involved. Why would they do this in one area and not the other? (Both of which are problems to me, and I would hope to you.)

    Let me clarify something. This statement of yours is incorrect – “From what you say, PNH’s info was incomplete and/or wrong. If I’m following what you’re saying correctly, there were four Puppy nominees at the time he made his statement,”

    No, at the kindest you’re looking for an escape clause. There isn’t one. At the time he made that statement there were 3, as Correia had withdrawn immediately – well before this. What this is, is an indication that the Hugo administrators were not the source of the information. Neither were the Puppies.

    Which leaves only ONE possible way he knew. He knew because he had an expected slate, two of whom had – in breach of the rules – told him they had got theirs. The other 3 told him they hadn’t. (It’s not actually breaking the rules to tell someone you haven’t. But why would they ask? Who would they ask? And why would you say anything one way or the other?) Therefore: 1) He knew the names of the non-SP noms. 2) The only way he could know the remaining nominees were SP… was that no other candidate had a hope in hell. Knowing by elimination can’t work if another party could get in, or if the list of possible candidates is large.

    I’m sorry, but this is the smoking gun. The Hugos were – and probably have been for many years – gamed by a small clique. I don’t care who that clique is, or whether you or I love or hate them. It’s bad for the award and bad for SF.

  112. “Thought experiment: How do the Sad Puppies nominations stack up in terms of Goodreads scores?”

    The Sad Puppies score is higher than any year’s Hugo nominee ballot except 2000, which was a very good year.
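
    (For anyone who wants to check or extend that, the comparison is simple to sketch. The two files – hugo_ballots.csv and sp_slate.csv, each with an avg_score column – are hypothetical stand-ins for hand-collected Goodreads data:)

    ```python
    # Sketch: mean Goodreads score of the SP slate vs. each year's Hugo ballot.
    import pandas as pd

    ballots = pd.read_csv("hugo_ballots.csv")  # year, title, avg_score
    sp_slate = pd.read_csv("sp_slate.csv")     # title, avg_score

    yearly = ballots.groupby("year")["avg_score"].mean()
    sp_mean = sp_slate["avg_score"].mean()

    print(f"SP slate mean: {sp_mean:.2f}")
    print(yearly[yearly > sp_mean])  # ballot years that out-scored the slate
    ```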

  113. NG,
    for what it’s worth, I’m a Rabid Puppies supporter, I guess you could say, and I most certainly support Correia’s and Torgersen’s refusal to perform the ritual denunciation and separation that everyone is demanding of them. It’s unmanly, unseemly, and plays right into the Left’s standard playbook of getting the right to shift itself leftward by shaming the right into doing the Left’s divide-and-conquer for it.

  114. Thank you for doing these analyses.

    I wondered if you had thought of comparing the ratings of nominees vs. winners for the Nebula and Locus awards in addition to the Hugos. It seems this would give valuable information and something like a baseline for how well the different SF award voting franchises “succeed” in picking the highest-rated books: popular (Locus) vs. professional (Nebula) vs. “fandom” (Hugo).

  115. Larry Correia wrote on “Monster Hunter Nation” on April 24, 2014:

    “I’ve said for a long time that the awards are biased against authors because of their personal beliefs. . . . Message or identity politics has become far more important than entertainment or quality. . . . So I decided to prove this bias and launched a campaign I called Sad Puppies (because boring message fiction is the leading cause of Puppy Related Sadness).”

    Step back from the race and gender data points, Nathaniel. Those are red herrings raised by Anti-Sad Puppies, trying to claim victim status, to gain the moral high ground. Sad Puppies never said a White Man can’t win, but that a Conservative can’t win. You’ve done a masterful analysis of the defense’s claims: what of the plaintiff’s?

    Conservatives come in all colors and sexes – you must know an author’s political opinions to judge that claim. As a proxy, you might read their stories or their websites. I predict you’ll find Sad Puppies is entirely correct: the Hugos are awarded not to the Best SF Story, but to the Best Politically Correct SF Story.

  116. Sad Puppies never said a White Man can’t win, but that a Conservative can’t win.

    A quibble (that you may not even disagree with me about) – it’s clear that Puppies are talking about nominations, not just who wins. As I said earlier, the example Puppy supporters most frequently use to argue the Hugos are corrupt is “If You Were A Dinosaur, My Love” – a nominee, not a winner.

    Secondly, what about Mike Resnick? He’s conservative, and he’s been nominated more than any other author ever (36!) and won five times. And he’s hardly the only conservative to be nominated. Other conservatives have been nominated as well – I’m a cartoonist, so I know the comics category people best, and Bill Willingham’s been nominated several times.

  117. I like the idea of comparing to Goodreads data.

    When the SP/RP Hugo dominated list came out, I went to the Goodreads Choice Awards for 2014 to compare. What it told me was that the SP/RP slate was not representative of popular taste. That was the point of the SP/RP slates, yes? You can do the same comparison. Did I misread the data?

  118. I have one comment about your data. Goodreads was founded in 2006/7. Hugo Awards were founded in 1953.

    If given two novels, one of which had the label HUGO AWARD WINNER 1997 and one of which had the label HUGO NOMINEE 1997, I believe that most people would suggest that the winner was superior. This bias would likely inform my reading of both books – the nominee I might consider better than an un-nominated book, but the winner I would likely consider better than either. Note that I have not actually read either book in this scenario (nor seen their titles or covers or ANYTHING), and I’ve still formed assumptions about them.

    My thinking is merely that by the time Goodreads was founded in 2007, most of the books that had Hugos already had them. People were therefore influenced by the award and it’s hard to say how much this affects their rating. Books that were nominated and/or won more recently would presumably show less of this bias, because there has been less time to accumulate reviews biased by their award status. More reviews would have been given pre-Hugo nominations or winnings, and therefore they would be less biased by this.

    I don’t know whether this would contribute a lot or a little to the ratings – given that some Hugo winners are rated quite low, it is perhaps negligible – nor how you could factor out the bias, but it is worth considering.
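
    One rough way to gauge how much room this label bias has to operate is, for each winner, to compute the share of its ratings posted after the award was announced. A minimal sketch, assuming hypothetical per-rating data with timestamps (ratings.csv) and a hand-built awards.csv of announcement dates:

    ```python
    # Sketch: fraction of each winner's ratings posted after its award date.
    # Pre-2007 winners will sit near 100% by construction; recent winners lower.
    import pandas as pd

    ratings = pd.read_csv("ratings.csv", parse_dates=["date"])      # title, stars, date
    awards = pd.read_csv("awards.csv", parse_dates=["award_date"])  # title, award_date

    df = ratings.merge(awards, on="title")
    share = (df.assign(post=df["date"] >= df["award_date"])
               .groupby("title")["post"].mean())
    print(share.sort_values())  # higher share = more award-influenced ratings
    ```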
