Since beginning to adjust or unskew polls for house effect during the 2014 United States Senate elections, Nate Silver and FiveThirtyEight (538) have shown a measurable 3.5% bias toward Democrats. This bias is calculated by taking 538’s final projected gap between the Democratic and the Republican candidate in 125 races over the 2014 and 2016 election cycles and comparing that projected gap to the actual final results. (The raw numbers for comparison can be found here. I am grateful to a professor in Britain, who works with statistics but wishes to remain anonymous, for reviewing the data for errors.)
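For clarity on how a bias figure like this is computed: it is the signed average of projected minus actual (D − R) margins across races, where positive values mean the forecast over-stated Democrats. A minimal sketch in Python, using hypothetical margins rather than the actual 125-race dataset:

```python
# Signed bias = average of (projected D-R gap) - (actual D-R gap) across races.
# Positive values mean the forecast over-stated Democrats.
# The margins below are hypothetical, for illustration only.

def signed_bias(projected_gaps, actual_gaps):
    """Average signed error of projected (D - R) margins, in points."""
    errors = [p - a for p, a in zip(projected_gaps, actual_gaps)]
    return sum(errors) / len(errors)

projected = [4.0, -2.0, 10.0, 1.5]   # hypothetical projected D-R gaps
actual    = [1.0, -4.5, 8.0, -1.0]   # hypothetical final D-R gaps

print(signed_bias(projected, actual))  # 2.5 — every race here missed toward D
```

Because the errors are signed, misses toward Democrats and misses toward Republicans cancel; only a systematic lean in one direction survives the averaging.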
A portion of the 3.5% 538 bias in the most recent elections can be attributed to normal polling errors that have over-projected Democrats, generally within the margin of error, during those two cycles. RealClearPolitics (RCP), for example, had a 1.7% statistical bias in favor of Democrats for the 2016 cycle in the Senate and Presidential races it projected. This compares to 538’s 3.3% Democratic bias for the 2016 cycle in Senate and Presidential results.* For the battle to control the 2018 U.S. House of Representatives, 538’s projection is firmly in the #BlueWave camp, regularly projecting a “Democratic seats average gain” of well over 30, sometimes nearing 40. A thirty-seat gain puts Democrats into #BlueWave territory with 225 or more seats, giving the party comfortable control of the House while leaving Republicans with 210 or fewer seats.
Our modeling disagrees with 538 and has since March when we published “Numbers Suggest Democrats Are Not Currently Set to Take Back the House of Representatives.” Democratic chances of taking back the House continue to hover just under 50% with the most likely outcomes seeing a very small seat gap between whichever party wins control of the House and the runner-up. If things go somewhat badly for Democrats over the final three and a half weeks or if polling is over-projecting their outcome as badly as it did in 2006 (3.6% per the RCP Generic Congressional Ballot or GCB average on the gap), they could gain as few as ten or eleven seats. If things go somewhat badly for Republicans over the final three and a half weeks or if polling is over-projecting their outcome a bit worse than it did in 2010 (2.6% per RCP), Democrats could wind up gaining twenty-five to twenty-nine seats for a maximum 224-211 advantage.
#10at10 modeling (updated with the strict average of GCB polling over the last ten days around 10pm each night in this Twitter thread) assumes a toss-up for any race projected to have an advantage of 3.4% or less for one party or the other. Outside of this range, there are inevitably seats the model will get wrong, even if it is generally accurate, as 435 contests happen simultaneously. Provided, however, that neither side gobbles up more than a few seats that tilt (3.5-6% advantage), lean (6.1-9.9%), or are likely (10.0-14.9%) or safely (15.0%+) in the other party’s column in our projection, the winner of the House on November 6 will control 224 seats or fewer. Our current projection, updated daily and explained in methodological detail here, shows the expected gap between Democrats and Republicans in each of the 100 most competitive races. The chart runs from the most likely Democratic victories at the top in dark blue in the “CURRENT CALL” column to the most likely Republican victories in dark red near the bottom. A lavender row marks the magic 218 seats required by either party for control of the House. Republicans currently enjoy a slim 210-205 advantage in the model, with 20 seats considered toss-ups.
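The rating bands described above (toss-up, tilt, lean, likely, safe) amount to a simple lookup on the absolute projected gap. A sketch, with the thresholds taken directly from the text:

```python
def rate_race(gap):
    """Classify a race by the absolute projected D-R gap in points,
    using the #10at10 bands described in the text."""
    margin = abs(gap)
    if margin <= 3.4:
        return "toss-up"
    elif margin <= 6.0:
        return "tilt"
    elif margin <= 9.9:
        return "lean"
    elif margin <= 14.9:
        return "likely"
    else:
        return "safe"

print(rate_race(2.1))   # toss-up
print(rate_race(-7.5))  # lean (negative sign = Republican advantage)
```

Using the absolute value means the same bands apply whichever party holds the projected advantage; the sign of the gap determines only whose column the seat lands in.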
Per the discussion in the first few paragraphs of this article, the purpose here is not to compare and contrast 538 and RCP, both of which have seriously advanced political horse-race discussions beyond simply cherry-picking a poll or two to bolster whatever political story one wants to tell. Our model aims to blend the best of 538’s and RCP’s insights (and others’) by, among other things, averaging the latest results from all scientific polling firms, generally over the previous ten-day period, without adjustments of any kind.
The major problem for this simple model in a contest like the 2018 race for control of the United States House of Representatives, as for almost any model, is how to translate the substantial polling data we do have into projections for dozens (if not hundreds) of House races where we have either no recent polling at all or not enough to average in any meaningful way.
There are many things our model has learned about how to make such predictions from 538 and RCP, as well as from making projections in parliamentary systems in the United Kingdom and in Canada. Parliamentary systems much more closely approximate the race for control of the House of Representatives because a forecaster is required to take what national, provincial, and regional data we have, combine it with scant data at the individual race level, and translate it into predictions for dozens or hundreds of individual seats that, taken together, will determine which party and leader will have control of parliament. Or, in this case, the House of Representatives.
All of this could be discussed in much more detail, but let’s focus for a bit on two features from 538. The first (adjusting or unskewing polls by house effect) is a glaring negative, in my view, and likely (though not necessarily) means that 538 is over-stating Democratic strength heading into the House midterms. The second (CANTOR) is a new and likely very positive, helpful 538 feature that approximates YouGov’s MRP model, a model that was deadly accurate in the 2017 UK General Election for parliament and Prime Minister. Deadly accurate, in fact, where most other polling firms and forecasters were radically wrong. (Our model was nearly as accurate as YouGov’s on a seat-by-seat level and more accurate in terms of national popular vote.)
First, a discussion of why 538 has had a 3.5% Democratic bias in recent elections:
There are three basic reasons why 538 modeling might over-project one party or the other in a given election cycle or set of election cycles. The first two are rather harmless and to be expected. Scientific polling, even when polls are averaged, has a margin of error, which means it will not always get the winner right, or even the gap between candidates within 1.5% or so. If the polling or poll aggregating is unbiased, however, over time it should show a vanishingly small preference for one party or another when results are averaged. RCP, for instance, has missed in its polling average for the Generic Congressional Ballot over the 8 cycles it has averaged such polls, beginning in 2002, by a grand average of just 1.05% in favor of Democrats.
Pretty darn good.
Three times it has basically had things correct, missing by 1.7% or less in favor of one party or the other. Four times it has underestimated the Republican performance on the gap by 2.6%-3.6%. In one year, when Republicans surged in 2010, RCP underestimated Democratic performance on the gap by 2.6%. Unfortunately, we cannot precisely compare how 538 managed in cycles where polling over-projected Republicans because, prior to 2014, it did not release its forecasts in terms of precise percentages.
The second basic reason 538 modeling might have a normal bias against one party or another is a disagreement among statisticians as to whether strict averaging or a weighted average of polls is superior. I take no position on this debate because I don’t have adequate statistical training to make such a judgment. I suspect 538 is right, so long as its weighting methodologies are free of bias toward one party or another. In any event, analysis shows that, overall, in terms of states in Presidential elections, there has been very little variation in accuracy rate among five major poll aggregators and forecasters, including 538 and RCP. From what I can tell given my limited data competency, 538’s methods of weighting polls (generally by sample size, date, and pollster rating) are not biased toward one party or another. It should be noted, however, that how often they update their pollster ratings seems to make some difference. Monmouth polling, for instance, would show a ton more bias toward Democrats and likely lose its “A+” rating if 538’s pollster ratings were regularly updated and weighted toward recent elections.
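To illustrate the strict-versus-weighted debate: a strict average treats every poll equally, while a weighted average discounts older or smaller-sample polls. The polls and the weighting formula below are hypothetical, purely to show how the two approaches can diverge (538’s actual weighting scheme is more elaborate and also folds in pollster ratings):

```python
# Compare a strict average of poll margins with a simple weighted average.
# The polls, their ages, and the weighting scheme are all hypothetical.

polls = [
    # (D-R margin, days old, sample size)
    (8.0, 1, 1000),
    (6.0, 4, 600),
    (4.0, 9, 1500),
]

# Strict average: every poll counts equally.
strict = sum(margin for margin, _, _ in polls) / len(polls)

# Toy weighting: newer polls decay less, bigger samples count more.
weights = [(0.9 ** age) * (n ** 0.5) for _, age, n in polls]
weighted = sum(m * w for (m, _, _), w in zip(polls, weights)) / sum(weights)

print(round(strict, 2))    # 6.0
print(round(weighted, 2))  # 6.45 — newest poll pulls the average up
```

With unbiased weights the two approaches converge over many polls; the risk the text alludes to is a weighting scheme that systematically favors pollsters leaning toward one party.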
The third reason 538 modeling could (and does) show bias, however, seems far more problematic. This is the House Effect adjustment used to shift the final results of polls in one direction or the other, almost always in favor of Democrats. The fairest interpretation of why this happens is that the House Effect adjustment was put in place by 538 in early 2014, based on 2008, 2010, and 2012 elections that underestimated Democratic performance, and has not been significantly updated after the 2014 and 2016 elections, in which pollsters tended, on average, to underestimate Republican performance. Take, for instance, the last 50 polls used in terms of date (not weight) for 538’s Generic Congressional Ballot average. As of Thursday, October 11 at 5pm, the previous 50 such results (all in the field for at least one day in September or later) saw 22 adjustments that moved in a pro-Democratic direction versus just 5 results that were massaged in a Republican direction. The remaining 23 results saw no adjustment. Furthermore, all five results moved in the Republican direction shifted by just one or two percentage points, while Democratic-favorable unskewing changed results by up to 5% on the gap.
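For readers unfamiliar with the mechanics, a house-effect adjustment shifts each pollster’s raw margin by some fixed amount before the poll enters the average. A toy sketch, with entirely hypothetical pollsters and adjustment values (538’s actual adjustments are estimated from each firm’s deviation from the polling consensus):

```python
# Illustration of a house-effect adjustment: a pollster judged to lean
# toward one party has its margins shifted the other way before averaging.
# The pollsters and adjustment values here are entirely hypothetical.

house_effects = {
    "Pollster A": +2.0,  # judged to lean Republican, so shift toward D
    "Pollster B": -1.0,  # judged to lean Democratic, so shift toward R
}

def adjust(pollster, raw_margin):
    """Apply a house-effect adjustment to a raw D-R margin."""
    return raw_margin + house_effects.get(pollster, 0.0)

print(adjust("Pollster A", 3.0))  # 5.0 — moved in a pro-Democratic direction
print(adjust("Pollster B", 3.0))  # 2.0 — moved in a pro-Republican direction
```

The critique in the text is not that such a table exists, but that if it is calibrated on older cycles and rarely refreshed, the adjustments keep pushing the average in one direction even after the underlying polling errors have flipped sign.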
This is particularly baffling since 538 began adjusting poll results in the election cycle after 2012. In 2012, 538 and others rightly mocked conservatives for the widespread practice of unskewing poll results to make things look better for Mitt Romney (when, in fact, polls somewhat under-estimated Obama’s numbers).
But 538 is, after all, very, very good at data, which brings us to the very positive development in the way it is forecasting the race for the House in 2018. When it launched this year’s House model, 538 introduced a new feature called CANTOR without much fanfare. As described by 538:
Our district similarity scores are based on demographic, geographic and political characteristics; if two districts have a score of 100, it means they are perfectly identical. These scores inform a system we use — CANTOR, or Congressional Algorithm using Neighboring Typologies to Optimize Regression — to infer what polling would say in unpolled or lightly polled districts, given what it says in similar districts.
While not strictly analogous to YouGov’s MRP model, since 538 isn’t stratifying and regressing data originally obtained in the field on its own, the basic premise is solid: districts that are similar demographically, geographically, and politically tend to move similarly in polling and election results. It was 538’s insistence that normal polling errors in one set of states or counties would likely show up in similar places that allowed it to be far less embarrassingly wrong in 2016 than many other major prediction outfits.
This development in 538’s methodology is welcome enough that we are using it prominently in our model. Where we have no polling data in the last three weeks for a particular congressional district (about 1/3rd of the seats currently in the 100 Congressional Districts to Watch graphic above), 538’s CANTOR score makes up a full 60% of the forecast. Where there is just one poll, the CANTOR weight drops to 28% of the forecast. At two or three polls in the last twenty-one days, CANTOR still makes up 17% of the prediction of the gap between Democrats and Republicans. Where there are three polls in the last ten days or four polls in the last twenty-one days (just a few seats), the strict average of individual polling makes up 100% of the projection.
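The blending weights described above can be sketched as a simple function of the recent poll count. This is a simplification: the remaining weight with zero polls comes from other model inputs, represented below as a single hypothetical value, and the “three polls in the last ten days” trigger for a pure polling average is folded into the poll count.

```python
# Blend a CANTOR-style similarity estimate with the strict poll average,
# using the weights described in the text. The "other_estimate" input is a
# hypothetical stand-in for the non-CANTOR, non-polling model inputs.

def cantor_weight(n_recent_polls):
    """CANTOR's share of the forecast, by recent poll count (per the text)."""
    if n_recent_polls == 0:
        return 0.60
    elif n_recent_polls == 1:
        return 0.28
    elif n_recent_polls in (2, 3):
        return 0.17
    else:
        return 0.0   # enough polls: strict polling average takes over entirely

def blended_gap(cantor_estimate, poll_average, other_estimate, n_recent_polls):
    """Projected D-R gap as a weighted blend of CANTOR and other inputs."""
    w = cantor_weight(n_recent_polls)
    base = poll_average if n_recent_polls > 0 else other_estimate
    return w * cantor_estimate + (1 - w) * base

print(round(blended_gap(5.0, 2.0, 0.0, 1), 2))  # 2.84 = 0.28*5 + 0.72*2
```

The design intuition is straightforward: the scarcer the direct polling in a district, the more the forecast leans on what similar districts are saying.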
It should be noted that, strictly using the CANTOR scores as of Thursday afternoon, October 11, we would expect Democrats to win 213 seats and Republicans to win a majority with 220 seats, while two districts (IA-3 and IL-6) come in exactly tied at 0.0% on the gap according to CANTOR. All of this squares well with a criticism of a feature of 538’s forecast that the New York Times’ Nate Cohn noted on Twitter last weekend. When it comes to individual seat projections, 538’s model shows a much closer race, approximately 50-50, than its main feature, which currently gives Democrats a 7 in 9 chance of taking back the House.
Maybe the easiest way to summarize this fact: the FiveThirtyEight forecast, last I looked, had Democrats favored to win +33 seats, well over the 23 they need.
But FiveThirtyEight only had them favored in 218 individual contests, precisely the number needed for a majority
— Nate Cohn (@Nate_Cohn) October 7, 2018
Ending on a necessary note of caution: polling has under-forecast how well Democrats would do on the gap by one and a half points or more three times in the last eight House elections. If that happens again, Nate Silver, FiveThirtyEight, and others forecasting a #BlueWave could well be right. Democrats could win 225 seats or more, earning commanding control of the House at a level that would let them bleed a few votes from their membership on key legislation and other matters, such as articles of impeachment, and still claim victory. The most straightforward and defensible reading of the data we have now, a reading that does not adjust or unskew polling in favor of Democrats and does not project that they will somewhat magically win a dozen or so individual seats in which the FiveThirtyEight model currently has them behind, suggests a much fiercer and more closely contested battle for control of 218 seats, or perhaps a few more, by one party or the other.
*It could be pointed out that the data is not strictly comparable, since RCP made projections in fewer Presidential and Senate races than 538 in 2016. While true, this may recommend RCP’s caution of not projecting in races where there is inadequate polling data, particularly in an election like the 2018 House, where there is scant data for most races. Furthermore, even comparing only the races where both RCP and 538 made predictions in 2016, 538 was 6/10ths of a point worse, on average, than RCP in terms of Democratic bias. The best explanation for at least this 6/10ths of a point is 538’s adjustments to polls, usually in Democrats’ favor.