
POLITICAL COMMENTARY

Inconclusive Studies of 2020’s Pre-Election Polling Problems Could Be Good for the Industry

A Commentary By Natalie Jackson

KEY POINTS FROM THIS ARTICLE

— Following another presidential election in which pre-election polls often understated support for Donald Trump, the polling industry is once again trying to figure out what went wrong.

— An American Association for Public Opinion Research task force pointed to a lack of education weighting as a key source of error in its post-2016 assessment, but correcting that did not prevent problems with the 2020 polls.

— That the AAPOR task force has not identified a specific cause of the 2020 polling errors may actually be a good thing for pollsters.

Evaluating pre-election polling following 2020

At the 2021 virtual conference of the American Association for Public Opinion Research (AAPOR), a task force presented the findings from its official assessment of 2020 pre-election polling.[1] The findings confirmed what general suspicions and early analysis had shown: that 2020 polls collectively overstated support for Democrats in every contest and generated the highest polling errors in “at least 20 years.”[2] However, the task force was unable to determine what caused the error with the available data, only that it was “consistent with systemic non-response.”

The conclusions, or lack thereof, from the task force are disappointing on one dimension: an all-star group of hard-working researchers in the industry did not provide concrete answers to what went wrong. At the same time, however, that could be good for the industry overall in two ways: it could help reset expectations for pre-election polls, because there is no single identifiable “fix” to be applied, and it is likely to spur innovation across diverse methodologies to identify and address underlying problems.

Polling error in 2016 vs. 2020, and how not knowing what is wrong can be good for expectations

After the 2016 pre-election polls underestimated Donald Trump’s support, a similar AAPOR task force went to work in early 2017 to investigate why. That task force pointed to two concrete sources of error that skewed polls away from Trump. First, the 2016 pre-election polls had unusually high proportions of undecided voters, a majority of whom ended up voting for Trump. Second, the polls that performed the worst tended not to adequately adjust their samples to include enough voters with less formal education than a four-year college degree — a group that also swung heavily toward Trump.

In the lead-up to the 2020 election, there were far fewer undecided voters in polls, leaving education weighting as the main point in discussions of polling accuracy. While pollsters often warned that fixing education weighting did not mean 2020 would be error-free, that caution usually came after a statement about the corrections and adjustments made in response to the specific problems identified after the 2016 election. Fairly or not, the perception emerged that by correcting the education weighting deficiency, pollsters had fixed the problem. The 2020 task force poured cold water on that theory by noting that the issues identified in 2016 had mostly been ruled out as primary drivers of polling error in 2020.
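To make the weighting idea concrete, here is a minimal sketch of post-stratification by education. Every number in it is invented for illustration; none of it comes from an actual poll.

```python
# Illustrative post-stratification by education (all numbers are made up).
# If college graduates are overrepresented in the raw sample, each education
# group is re-weighted so the sample matches the assumed electorate.

sample_share = {"college_grad": 0.55, "non_college": 0.45}   # unweighted sample mix
target_share = {"college_grad": 0.40, "non_college": 0.60}   # assumed electorate mix

# Hypothetical support for candidate A within each education group
support_a = {"college_grad": 0.58, "non_college": 0.44}

# Weight for each group = target share / sample share
weights = {g: target_share[g] / sample_share[g] for g in sample_share}

unweighted = sum(sample_share[g] * support_a[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * support_a[g] for g in sample_share)

print(f"Unweighted support for A: {unweighted:.1%}")        # ~51.7%
print(f"Education-weighted support for A: {weighted:.1%}")  # ~49.6%
```

The point is only directional: when an overrepresented group leans toward one candidate, failing to weight on that variable shifts the topline, which is the kind of miss the 2016 task force flagged.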

The positive side of the lack of concrete answers is that a narrative of fixing polls by adjusting one thing cannot take hold in the wake of the 2020 polling errors. This time, instead of feeding a focus on how to make polls perfectly predict election outcomes, as the education weighting finding inadvertently did, the 2020 task force report seems likely to put a spotlight on the unknown sources of uncertainty that exist in polling. If that spotlight is leveraged to foster better communication about, and understanding of, uncertainty, it will be a positive outcome.

No more “gold standard” and opportunities for innovation

It also follows that, because the AAPOR task force did not identify easily corrected flaws in pre-election polls, individual pollsters are left to innovate and problem-solve on their own. The findings do, however, point to two areas that need innovation: how we contact people and persuade them to take polls, and how we determine which respondents are the “likely voters” we want in our samples.

It is increasingly clear that how a poll contacts people — formerly a key heuristic for assessing poll quality — no longer tells us what it used to about accuracy. AAPOR’s separate report on 2020 primary pre-election polling found that whether a survey was conducted online or by telephone had no bearing on accuracy, and the new task force presentation indicated the same finding. After its own analysis showed the same thing, FiveThirtyEight retired the live-caller landline-and-cellphone survey as the “gold standard.” Letting go of the field’s attachment to one mode as more accurate than the others will allow other methodologies to become more prominent and encourage further experimentation with new approaches.

The second key place we need to innovate, or at least focus more energy, is on determining who is a “likely voter.” Based on the limited information available, the task force seemed to largely dismiss likely voter modeling as a reason for polling misses in 2020, but that came with a huge caveat: the task force did not have information on likely voter models for most polls. That is not surprising; most pollsters regard likely voter selection or modeling as their proprietary “secret sauce” and do not divulge it. Without more information to analyze, there is no way for the task force to truly rule out likely voter models as part of the bias. We need to increase awareness that, unless details are provided, anything labeled “likely voters” is essentially a pollster’s best guess about what the electorate will look like — nothing more.

An instructive illustration of how much likely voter selection matters comes from a 2016 New York Times article in which Nate Cohn had four different sets of pollsters adjust the same raw data using their own weighting and likely voter determinations; their results ranged from Clinton +4 to Trump +1. That exercise demonstrated quite clearly that likely voter modeling — done by rational, smart people! — can result in significant survey error. Of course, this has always been true, but likely voter models become far more consequential in elections won or lost on razor-thin margins in a few states. The best move AAPOR could make is to continue encouraging transparency in methods, including likely voter models.
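As a rough, hypothetical sketch of why that happens, the snippet below applies two different likely-voter screens to the same set of invented respondents. The respondents, enthusiasm scores, turnout histories, and vote choices are all made up, and the swing is exaggerated by the tiny sample, but the mechanism matches Cohn’s exercise: the screen, not the interviews, moves the margin.

```python
# Two hypothetical likely-voter screens applied to one invented sample.
# All respondent data below is fabricated purely for illustration.

respondents = [
    # (enthusiasm 0-10, voted in the last election?, vote choice)
    (9, True, "A"), (8, True, "B"), (10, True, "A"), (7, False, "B"),
    (6, True, "B"), (5, False, "A"), (9, False, "B"), (4, True, "A"),
    (8, True, "A"), (3, False, "B"), (7, True, "B"), (10, False, "B"),
]

def margin_for_a(voters):
    """Candidate A's margin in points (positive = A leads) among screened voters."""
    a_votes = sum(1 for _, _, choice in voters if choice == "A")
    b_votes = len(voters) - a_votes
    return 100 * (a_votes - b_votes) / len(voters)

# Screen 1: count only respondents who voted in the last election.
past_vote_screen = [r for r in respondents if r[1]]

# Screen 2: count only respondents reporting enthusiasm of 7 or higher.
enthusiasm_screen = [r for r in respondents if r[0] >= 7]

print(f"Past-vote screen:  A margin {margin_for_a(past_vote_screen):+.0f} points")
print(f"Enthusiasm screen: A margin {margin_for_a(enthusiasm_screen):+.0f} points")
```

Both screens are defensible-sounding rules, yet they select different electorates from the same interviews, which is exactly the kind of judgment call pollsters rarely disclose.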

Looking to 2024

There will still be plenty of presidential horserace polls in 2024, and before that in contests happening in 2021, 2022, and 2023. The demand for polls in the early 2021 Georgia Senate runoffs illustrated that polls are still a desirable part of campaign coverage. Polls are also still the best way to know what the mass public thinks.

However, when 2024 rolls around, it looks like pollsters will not be able to say, “we fixed X, as the AAPOR report said we should, to make up for what happened last time.” The more likely scenario, in the absence of any community consensus, is that individual pollsters will tweak their processes here and there, and those tweaks will differ from organization to organization. Some will work at the sample level, on the hard task of somehow recruiting into surveys the people whose distrust makes them unlikely to respond. Others will adjust other parts of the process, including likely voter models. The AAPOR task force report does not tell us how to do any of that, but that leaves the field wide open to innovation and learning. That makes it a difficult, but exciting, time to be a pollster.

Natalie Jackson, Ph.D., is Director of Research at the Public Religion Research Institute (PRRI). She was previously Senior Polling Editor and responsible for election forecasting efforts at the Huffington Post from 2014 to 2017. Views expressed herein are her own and not representative of any employer, past or present.

Footnotes

[1] At the time of this writing, the written report has not been released. The information in this article regarding the task force report is based solely on the presentation at the conference. Any misinterpretations are solely the responsibility of the author.

[2] In the absence of a public report, quotes are taken from the conference presentation slides and presentation recording, last viewed on June 21, 2021.


This article is reprinted from Sabato's Crystal Ball.

Views expressed in this column are those of the author, not those of Rasmussen Reports. Comments about this content should be directed to the author or syndicate.
