Why Was The National Polling Environment So Off In 2020?

February 23, 2021
Heading into the 2020 election, Democrats were favored to not only capture the presidency but also win back the Senate and retain their sizable majority in the House. Much of that came down to the overall national environment, which appeared to be pretty favorable to Democrats, as they held a 7.3-point lead in FiveThirtyEight’s final polling average of congressional polls.
Yet even though Democrats did win the presidency — and eventually the Senate — their grip on the House actually slipped, as Republican gains meant the Democrats’ majority fell from a 32-seat advantage1 to just a 9-seat edge after the election.
So what happened? Were the polls just terribly off in 2020? Not dramatically, no. Yes, polls once again underestimated Donald Trump’s performance, but the magnitude of that error (about 4 percentage points) wasn’t all that different from past presidential contests, such as in 2012 when polls underestimated Barack Obama’s margin of victory by almost 4 points. And there have, of course, been much larger polling errors, too.
But one reason the polling in 2020 has received so much attention is that down-ballot polling, namely the generic ballot — which asks respondents whether they plan to vote for a Democrat or Republican in their local race for the U.S. House of Representatives — was also off by a similarly large margin in 2020. In fact, as the table below shows, the House popular vote was 4.2 points more Republican-leaning than the polls anticipated, making it the largest generic ballot polling miss in a presidential or midterm cycle since 2006.2
**National House vote margin**

| Year | Poll Avg. | Actual result | Error |
|------|-----------|---------------|-------|
| 2020 | D+7.3 | D+3.1 | R+4.2 |
The fact that congressional polls performed so well just two years prior meant the error in 2020 felt particularly large, too. In 2018, the polls nailed the House popular vote right on the head, besting the 2010 midterms for the most accurate cycle in generic ballot polling dating back to 1996.
That said, comparing the performance of generic ballot polls in 2020 to 2018 is a bit misleading, as generic ballot polls are often a better predictor of midterm results than presidential ones: The average error in presidential years was 3.1 points versus 2.5 points in midterms, according to FiveThirtyEight’s polling averages. Still, if we look at an average of generic ballot polls from 1996 to 2020 — covering seven presidential elections and six midterms — the polls were off by only 2.9 points. As such, the error in 2020 ranks on the higher end of the spectrum, though it was not the worst-performing year for generic ballot polling in the past two and a half decades for which we have data: The polls missed by more in 1996, 2002 and 2006.
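To make those error figures concrete, here is a minimal sketch of how a cycle’s signed polling error and a multi-cycle average absolute error can be computed. Only the 2020 numbers come from the article (a D+7.3 final polling average against a D+3.1 actual House margin, i.e., the 4.2-point miss toward Republicans); the function names and data structure are illustrative, not FiveThirtyEight’s actual methodology.

```python
# Sketch: a cycle's signed generic-ballot error and a multi-cycle
# average absolute error. Margins are Democratic minus Republican,
# in percentage points. Only the 2020 figures are from the article;
# the structure here is illustrative.

def signed_error(poll_margin: float, actual_margin: float) -> float:
    """Positive = polls overstated the Democratic margin (in points)."""
    return poll_margin - actual_margin

def average_abs_error(cycles: dict[int, tuple[float, float]]) -> float:
    """Mean absolute error across cycles of (poll, actual) margins."""
    errors = [abs(signed_error(p, a)) for p, a in cycles.values()]
    return sum(errors) / len(errors)

# 2020 generic ballot: final average D+7.3, actual result D+3.1
cycles = {2020: (7.3, 3.1)}

print(f"2020 error: {signed_error(7.3, 3.1):+.1f} points")          # +4.2
print(f"average absolute error: {average_abs_error(cycles):.1f}")   # 4.2
```

With more cycles in the dictionary, `average_abs_error` would reproduce figures like the 2.9-point average the article cites for 1996–2020.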
It’s hard to pinpoint why exactly the generic ballot polls were so off in 2020, but here are some possible explanations.
First, it’s possible the 2020 generic ballot polling error was actually pretty normal — that is, generic ballot polls often underestimate GOP support. In the past 13 election cycles, only once has the House vote been clearly more Democratic than the final polling average (and in three cases the polling error was within 1 point in either direction). Otherwise, the GOP has consistently performed better than the polls expected, to varying degrees, of course.
But there are some conditions specific to 2020 that could have affected generic ballot polling, too. Notably, presidential polls also underestimated the Republican candidate, so it’s plausible that some of the same issues that influenced polling in the 2020 presidential race, such as not enough Republicans responding to surveys, may have also altered generic ballot polling.
The 2020 election was also the first modern-day election held during a pandemic, which could have affected polling — not to mention the actual results. In fact, COVID-19 may actually have caused Democrats to be overrepresented in some polls, as the combination of being stuck at home and anti-Trump energy may have made them more likely to answer pollsters than Republicans. This, in turn, may have exacerbated issues of non-response, which can translate into pollsters missing GOP support. What’s more, people who feel more alienated and alone are often the ones who are less likely to respond to polls, and in 2020, at least, they were also more likely to have backed Trump and other Republican candidates.
Patrick Murray, director of the Monmouth University Polling Institute, told me he’s still in the process of understanding which voters aren’t responding to polls, but he thinks issues of non-response bias may be the likeliest culprit for polling error in 2020. Nailing down just how much of a role non-response bias played in that error won’t be easy, though. “If the error is due to non-response bias caused by a portion of the electorate that came out only to support Donald Trump, then the polls may be basically fine in 2022 and we really won’t know why,” said Murray. Remember, too, that the size and direction of polling error has historically been unpredictable, so we can’t just bank on Democratic bias being the new normal for pollsters to adjust to. That said, Murray did say this could all be a much bigger issue if the polls are “about 3 to 4 points more left-leaning in their responses because a skewed cohort of folks [have] tuned out from participating in traditional venues of political discourse.”
At this point, we don’t know how widespread an issue non-response in polling is. But one thing we do know is that what happened down-ballot in 2020 was at least partially tied up in the outcome of the presidential race, as Trump also outperformed his polls and most voters voted for the same party for president and the House. In fact, presidential and generic ballot polls have largely moved in the same direction: In five of the seven presidential elections since 1996, the polling error in the presidential race has gone in the same direction as the error in the House contest (which was true in 2020 as well).
Robert Erikson, a political scientist at Columbia University who studies election polling, told me it makes sense that we often see polling error in the same direction. “There’s a last-minute trend that the polls obviously can’t capture,” Erikson told me, and as 2020 showed, that usually means movement in the direction of the same party. “If Trump gained at the end, it only stands to reason that Republican candidates [down the ballot] would, too.” And it does seem as if there may have been some last-minute movement in the polls toward Trump in 2020. The national exit poll found, for instance, that Trump won more late-deciding voters than Biden (although we should be careful with that data considering so many voters cast early or absentee votes in 2020).
The difference in the error between presidential and generic ballot polls has typically been fairly small, too. In only two presidential elections in the last 25 years did the errors go in different directions — 2000 and 2012 — and, as such, those were the only two times the difference between them was particularly large. This is in large part because the errors are often related: Sometimes unforeseen environmental factors, like a stronger-than-expected performance at the top of the ticket by one party, affect both races; and often the same pollsters who conduct national presidential polls are also the ones testing the generic ballot, meaning any biases or methodological choices in polling one race can carry over to the other.
“Any non-response bias due to missing Trump support also impacted down-ballot measures — whether specific races or generic ballot,” Murray told me. He did caution, though, that our expectations of what polling can — and can’t — do probably need a reality check. That is, the value of the generic ballot is that it identifies trends, Murray stressed, not that it should be used to determine how an individual district might vote.
“Being off by 3 to 4 points on a typical public opinion question is generally not consequential if the responses are in the ballpark,” said Murray. “Of course, being off by 3 to 4 points in an election poll is the end of the world.” But that, Murray told me, says less about the practice of polling than it does about the unrealistic expectations of what polling can do.
And that serves as a good reminder that polls in general — including the generic ballot — are not perfect predictors, as FiveThirtyEight will tell you until the end of time. That’s why our forecasts build in plenty of room for uncertainty. The generic ballot is helpful for getting a general sense of the electoral environment — does it lean toward one party or does it seem fairly competitive? But expecting it to routinely turn out to be as accurate as it was in, say, the 2018 midterms, is simply unrealistic.