
Mid-Term Polling Firm and Forecaster Scorecard: 2016 Democratic Primary

Based on twenty-two primaries or caucuses with two or more polls and final results through Wisconsin on April 5th. GGAB/CounterPunch is GoodGawdAnotherBlog, i.e., yours truly. HC = Hillary Clinton; BS = Bernie Sanders. *Survey USA has issued no polls outside the South and could not be fully scored.

Largely on the strength of stunning statistical accuracy in the South, FiveThirtyEight scores highest among pollsters and forecasters in a continually hard-to-call Democratic primary season. Still, the difficulty of accurately polling or forecasting state contests between former Secretary of State Hillary Clinton and Senator Bernie Sanders is evidenced by the fact that, as good as they are, FiveThirtyEight scores only seventh best out of fifteen in contests outside the South. In those contests, FiveThirtyEight has a statistically measurable bias in favor of Secretary Clinton of nearly six percentage points per state. That bias outside the South leaves FiveThirtyEight in eighth place for overall candidate bias, worse than American Research Group (ARG), a polling firm that receives a C- in FiveThirtyEight’s own Pollster Ratings.

More starkly, across the 148 polls included in this scoring tally, the average raw difference between poll or forecast and final outcome is 8.8243%. This is slightly better than the roughly 11-point raw (absolute) error for all polls, including many firms that fielded only one poll, noted by Harry Enten of FiveThirtyEight.

The results and scores themselves point unmistakably to a now extremely recognizable feature of this polling cycle. In the South, polling firms and forecasters had a statistically definitive bias toward Senator Sanders; in states outside the South that have been polled and have voted, there is an equally statistically definitive bias against Senator Sanders. In other words, while generally calling Hillary Clinton the winner correctly in the South, polling firms and forecasters expected her margin of victory to be substantially smaller than it turned out to be, a very important factor in a race where delegates are awarded on a proportional basis. The opposite has proven true time and again for elections in the Northeast, West, and Midwest. At times, most starkly in Michigan, this has meant getting the winner wrong, but nearly always it has meant landing well wide of the mark on the margin of Senator Sanders’ victories or losses outside the South.

I’ve mathematically scored every polling firm that has released at least two state-level polls on the Democratic side in 2016, along with RealClearPolitics, FiveThirtyEight, and myself (GGAB/CounterPunch). As I long suspected, in-state or in-region university pollsters are doing much better on average than national, commercial polling firms. For example, where everyone else, including FiveThirtyEight, missed South Carolina by ten percentage points or more, Clemson University’s poll accurately saw a nearly fifty point win for Secretary Clinton. Michigan State University did not call the right winner in Michigan, as I was able to, but it was the only pollster whose final poll, a five point win for Clinton, fell within its margin of error against Senator Sanders’ eventual 1.5% victory.

To get a grasp on this phenomenon, I have scored in-state or in-region university polling as if the work came from a single polling firm or forecaster. Where more than one university issued a final poll in a state, I have averaged them. This averaging is not strictly statistically rigorous, however. I’ve excluded Loras College altogether, as its two polls in Iowa and Wisconsin were off by 24.25% on average. Taken together, the hypothetical University Polling firm scores second best with a total score of 17.6596 to FiveThirtyEight’s 14.7271, where lower scores are better. My forecasts earned me third spot at 20.3522. Quinnipiac (22.5000) has polled in just three states that have voted so far but secured fourth spot, and best individual pollster overall, ahead of number five, RealClearPolitics (23.4771).

Survey USA, a firm that consistently receives high marks in FiveThirtyEight’s pollster ratings, has actually outscored FiveThirtyEight on average in the four states it has polled, but it is excluded from top honors overall because it has not attempted to predict a single state outside the South. This gave me no way to rate the firm in one of the four key categories scored.* Polling firms and forecasters have been scored by: (1) average raw error (subtracting the final poll margin for a state from the actual result, then averaging the absolute differences across the states polled or forecast), (2) average candidate bias in the South (11 states to date), (3) average candidate bias outside the South (11 states to date), and (4) average candidate bias for all states.

Remarkably, Survey USA’s tiny 1.15% Clinton bias in the South makes it the only firm or forecaster scored with a bias toward Clinton in Southern contests. More impressively, FiveThirtyEight showed just a 1.04% candidate bias toward Senator Sanders on average across the eleven Southern states. Compare this to RealClearPolitics, which showed a bias toward Bernie Sanders of 8.38% on average in those same eleven states.

For the final score, I’ve added the four scores together with the lowest being best. In other words, if a firm or forecaster had called every outcome they predicted exactly correctly, their score for all four categories would be zero, as would be their overall final score.
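The scoring arithmetic described above can be sketched in a few lines of Python. This is a minimal illustration only: the state picks and margins below are made-up stand-ins, not the article’s actual poll data. Positive margins denote a Clinton lead, negative a Sanders lead.

```python
# Sketch of the four-category scorecard: (1) average raw error,
# (2) bias in the South, (3) bias outside the South, (4) overall bias.
# All margins below are hypothetical illustrations.

SOUTH = {"SC", "TX", "NC"}  # illustrative subset of Southern states

# state -> (final poll/forecast margin, actual result margin), in points
data = {
    "SC": (30.0, 47.5),   # big Clinton win, badly underestimated
    "TX": (32.2, 32.0),
    "NC": (10.0, 13.8),
    "MI": (5.0, -1.5),    # wrong winner called
    "WI": (-2.0, -13.5),
}

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# (1) average raw (absolute) error over all states scored
raw_error = mean(abs(poll - actual) for poll, actual in data.values())

# Signed error: positive means the poll overestimated Clinton's margin
# (a pro-Clinton bias); negative means a pro-Sanders bias.
def bias(states):
    return mean(data[s][0] - data[s][1] for s in states)

south_bias   = bias([s for s in data if s in SOUTH])       # (2)
outside_bias = bias([s for s in data if s not in SOUTH])   # (3)
overall_bias = bias(list(data))                            # (4)

# Final score: sum of the four category magnitudes. A forecaster who
# called every margin exactly would score zero; lower is better.
final_score = (raw_error + abs(south_bias)
               + abs(outside_bias) + abs(overall_bias))
```

Note how the sign convention makes the article’s regional pattern visible: the hypothetical Southern states come out with a negative (pro-Sanders) bias, the non-Southern ones with a positive (pro-Clinton) bias.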

FiveThirtyEight’s best forecast in the South was in Texas, where its Polls Plus feature suggested a 32.2% win for Hillary Clinton; the final result, just 0.2% off, was a 32.0% Clinton victory. The worst Southern prediction for both Survey USA and FiveThirtyEight came in North Carolina, where Survey USA missed by 11.2% and FiveThirtyEight by 10.8%. I called North Carolina exactly, at 13.8%, but as noted elsewhere, I was wide of the mark, consistently in favor of Senator Sanders, in the other four contests held on March 15. Those four misses significantly affected all four categories for me.

Taking first place in each of the four scorecard areas are: (1) Survey USA, with a raw average error of 4.4500%; (2) FiveThirtyEight, with a Sanders bias in the South of just 1.0363%; (3) CNN, with an extremely tiny average Sanders bias outside the South of just 0.0250% in four states polled; and (4) Monmouth, with an overall bias of just 0.4555% in favor of Secretary Clinton. Monmouth and CNN fared worse in other categories, bumping them to number six and number nine overall. Monmouth’s overall bias was low because it lowballed Clinton in the South (by 8.3000% on average) and Sanders outside the South (by 7.4600% on average) about equally, so the two errors largely cancel. CNN’s Southern score was hurt by calling just two states in the South: Florida reasonably well, with a 4.1% Sanders bias, but South Carolina quite badly, with a 29.5% miss.

CBS/YouGov and NBC/WSJ/Marist were closely competitive with each other, show small overall statistical biases in favor of Senator Sanders, and take the seventh and eighth spots overall, respectively. CBS has a slightly lower average raw error, 8.3% versus NBC’s 8.55%, and is better by nearly four points outside the South with just a 3.05% Clinton bias there. NBC, however, has a lower overall Sanders bias by nearly a point and a half and did slightly better in the South, if one can speak of missing by an average of over ten points as “better” for a pollster.

The bottom six overall spots are rounded out by (10) Emerson Polling, which misses by an atrocious ten and a half points on average everywhere; (11) PPP, which nevertheless fares well outside the South with just a 2.7625% Clinton bias, good enough for second place in that category; (12) ARG, with a whopping 12.5% average raw error, but which still outperformed FiveThirtyEight in one category with just a 1.9571% average overall candidate bias against FiveThirtyEight’s 2.3272% overall Clinton bias; (13) Fox News (National), which is basically bad everywhere but would have done even worse had I added in local Fox polls in Michigan and Georgia; (14) Gravis, which has given us three really awful polls in Iowa, Florida, and South Carolina, one ho-hum poll in New Hampshire, and one nearly dead-on poll in Nevada; and finally (15) Bloomberg, which has the unfortunate scoring disadvantage of having the fewest polls at just two, one of them in South Carolina, where virtually everyone missed and missed badly.

I will say why I think the polls are so bad on the Democratic side this year in a separate post that also analyzes how terrible the polling in New York appears to me to be at this juncture.

But that is a more speculative and partisan task.

The purpose of this article was to mathematically score just how bad the polling has been overall.

As a closing thought, the top scorer overall is FiveThirtyEight, but they’ve been wrong outside the South by an average of 5.7%. Six of the eleven contests outside the South have been decided by less than that margin. Luckily for their reputation, and for RealClearPolitics’, this has only meant getting two of eleven non-Southern contests wrong (Oklahoma and Michigan). A very small margin of voters in Missouri, Iowa, Illinois, and Massachusetts, however, is all that kept the best of the worst from doing about as well as you’d expect from eleven coin flips.

*To get an idea of where Survey USA would land with even just one good poll outside the South, I averaged the “outside the South” scores for the top five overall scorers and plugged that average into Survey USA’s scorecard; they would end up in the top spot with a low score of 12.0650.
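The footnote’s imputation can be sketched the same way. Below, Survey USA’s 4.45% raw error and 1.15% Southern bias come from the article; its overall-bias figure and the top five scorers’ “outside the South” numbers are hypothetical stand-ins, so the resulting total is illustrative only, not the article’s 12.0650.

```python
# Impute the missing "outside the South" category for a firm with no
# non-Southern polls, using the average of that category across the
# top five overall scorers (hypothetical values below).

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

top5_outside = [5.7, 4.2, 6.1, 3.9, 5.5]  # hypothetical category scores

# Survey USA's known categories: raw error and Southern bias are the
# article's figures; overall bias here is a hypothetical stand-in.
survey_usa_known = [4.45, 1.15, 2.0]

imputed = mean(top5_outside)                   # fills the missing category
final_score = sum(survey_usa_known) + imputed  # lower is better
```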

Doug Johnson Hatlem is best known for his work as a street pastor and advocate with Toronto’s homeless population from 2005-2013. He is now a film producer and free-lance writer based in Chicago.
