If it's in the News, it's in our Polls. Public opinion polling since 2003.


Can the Polling Industry Learn From Its Mistakes?

A Commentary By Robert Barnes

Thursday, May 18, 2017

The American Association for Public Opinion Research convenes today for its 72nd annual conference. The first such conference took place in 1946, just before the last presidential election that led to widespread concern about the science and art of polling. But buried within that 1948 election were the seeds of the rebirth of polling as a credible profession for the next six decades. After the disastrous polling in the last presidential election, another rebirth is in order.

Yes, there is more polling than ever, more polling methods than ever and, of course, more polling “experts” than ever. But the challenges to pollsters continue to grow: fewer and fewer people are willing to answer a poll, fewer still are reachable by traditional polling methods, and hypercriticism comes from those who want the results to look different than they are.

Two of America’s most famous independent presidential pollsters saw this polling environment and headed for the hills. Both Gallup (which took an unfair beating after 2012 from the polling “experts” who have never polled a day in their lives, yet know everything about a “good” poll) and Pew (which made its name with presidential polling beginning in the 1990s) deserted the 2016 race.

On election night, the Twitterers, Facebookers and bookies, along with journalists, analysts and commentators, took to the airwaves to complain about and condemn many of the pollsters. Only a few – Rasmussen Reports, Investor's Business Daily and the USC/LA Times poll – avoided that fate, but only after being condemned all year long as “outliers” who deviated from the herd and challenged the wisdom of the polling “experts.”

Indeed, the methods chosen by those three pollsters – Rasmussen’s use of Interactive Voice Response (IVR) polling, demographically weighted internet panels and a strong likely-voter screen; IBD’s weighting method; and the LA Times’ unique same-panel method of online polling – took widespread Machine Gun Kelly-style fire from media and polling pundits throughout the campaign season. The safer route, the easier route, was to herd toward the media “middle.” Ironically, the same polling experts who attacked any non-herded poll complained furiously about “herding,” yet it is their own hypercriticism of outside-the-media-norm polls that feeds the herding instinct in the first place.

What 2016 showed was the virtue of the valiant who put their polling forecasts out for the world to see, even when those forecasts didn’t match the herd. It also showed there is a good chance that an outlier poll may be on to something.

All three successful forecasters shared common traits. They ignored their pundit critics and published their polling results, even when those results deviated from the herd and challenged the media narrative about the election. They relied on their own techniques and expertise, despite frequent criticism of their methodology by the “experts.” And like a good writer, they took a publish-or-die approach rather than a poll-but-hide approach, even though it placed their reputations at the center of criticism, a risky business move.

The inventive pollsters who correctly captured the Trump tide also provided essential information for the polling community at large to incorporate.

First, many polls’ use of weighting techniques – giving some respondents more, or sometimes less, voice in the poll’s measurement of public opinion – requires reconsideration. Too often, polls used Census population data rather than real-world likely-voting data, inflating the anticipated participation of urban, upscale, young and Latino voters at the expense of rural, blue-collar and older voters. Weighting is as much art as science, and it is the soul of polling: projecting from a sample of 1,000 voters the votes of more than 137 million people. The more accurate pollsters applied additional screening techniques in their weighting – self-reported voter registration, identification of known precinct location, self-reported intensity of interest in the upcoming election and, as important, modeling the electorate to a likely electorate – to create sufficient sub-samples of like-minded groups for voting projections by race and education, by religion, by age and by region, sub-samples that do not undercount older, white, rural and working-class voters, as polling can easily do.
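The two-step approach described above – screen for likely voters, then weight sub-samples to turnout-based targets rather than raw Census shares – can be sketched in a few lines of Python. All respondents, group shares and turnout targets below are hypothetical illustrations, not real polling data or any pollster's actual model.

```python
from collections import Counter

# Hypothetical respondents: demographic group, self-reported likelihood
# of voting (0-10), and candidate preference.
respondents = [
    {"group": "urban",  "likely": 9,  "choice": "A"},
    {"group": "urban",  "likely": 3,  "choice": "A"},
    {"group": "rural",  "likely": 10, "choice": "B"},
    {"group": "rural",  "likely": 8,  "choice": "B"},
    {"group": "suburb", "likely": 7,  "choice": "A"},
    {"group": "suburb", "likely": 9,  "choice": "B"},
]

# Targets drawn from likely-voter turnout estimates, not raw Census
# population shares (hypothetical numbers).
turnout_targets = {"urban": 0.30, "rural": 0.35, "suburb": 0.35}

# Step 1: likely-voter screen - drop respondents unlikely to vote.
screened = [r for r in respondents if r["likely"] >= 7]

# Step 2: weight each remaining respondent so that each group's total
# weight matches its turnout target.
counts = Counter(r["group"] for r in screened)
for r in screened:
    r["weight"] = turnout_targets[r["group"]] / counts[r["group"]]

# Step 3: weighted candidate shares.
total = sum(r["weight"] for r in screened)
shares = Counter()
for r in screened:
    shares[r["choice"]] += r["weight"] / total

print(dict(shares))
```

Note how the screen changes the answer: the raw sample splits 3–3, but once the low-likelihood urban respondent is screened out and the rural under-sample is weighted up to its turnout target, candidate B edges ahead – exactly the kind of shift the Census-weighted 2016 polls missed.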

Second, IVR technology, supplemented by online samples, not only makes polling more affordable, it also diversifies the sources of information, because it reaches landline-dependent voting groups better than live polling and avoids the non-response bias and cell-heavy proclivities continually arising in live polling.

Lastly, alternative methods of polling, like using the same sample over time as the USC poll did, bring added value to the equation, reducing the variability in polling results that non-response bias can produce at disparate-intensity moments in the election campaign (such as conventions, debates and scandal news).

Pew once led the way in likely-voter screening. Gallup cemented the scientific value of polling in the first place by being one of the few public pollsters willing to stake its reputation on presidential polling, dating back to its daring 1936 entry into the world of public opinion. When Gallup’s methods proved inadequate for the 1948 race, it jumped headfirst into solving the problem and would accurately forecast almost every election thereafter. All pollsters owe a debt to both, and all missed their contribution to the 2016 presidential election.

Polling is both science and art, part Edison, part Picasso, and many valuable inventive approaches greatly added to the polling community in 2016, primarily because of pollsters who were willing to enter the battlefield when the battle was most brutal.

Robert Barnes, a high-profile trial lawyer, won acclaim this past election year as America’s most successful political gambler in the United Kingdom for his bets on the U.S. election. 


Rasmussen Reports is a media company specializing in the collection, publication and distribution of public opinion information.
