Polling Critics Who Work in Glass Houses Shouldn’t Throw Stones.
A Commentary By Richard Baris
In 1948, Gallup had Democratic President Harry S. Truman trailing Republican Thomas E. Dewey by 5 points, 49.5% to 44.5%. President Truman won the popular vote 49.6% to 45.1%, and the Electoral College 303 to 189.
It would be more than six decades before Gallup misfired again, underestimating Barack Obama’s support in 2012.
The 2016 presidential election wasn’t the first major polling blunder in U.S. electoral history. But it was the worst, if for no other reason than that it cost us the public’s trust.
It was a near-universal, industry-wide failure outside of a handful of us.
I’ve long been a critic of “Gold Standard” pollsters and methodologies. Admittedly, I had genuinely high hopes that the 2016 failure would lead to a productive conversation.
In interviews and columns, I’ve tried to start that conversation: about data sourcing, collection modes, response bias, response rates, and the practice of weighting for party identification while ignoring region and education.
And yes, even about ideological corruption.
Those efforts have not been in vain, nor a complete disappointment. You’d never know it from media coverage, but pollsters and data providers have been asking themselves tough questions.
But as we approach the 2020 presidential election cycle, it’s becoming clear big media polling critics and analysts have taken few lessons to heart. Instead, they are overstating the performance of their outlets, ignoring significant failures, and attacking those who won’t run with the herd.
Those attacks typically have two targets: the pollster and, of course, the president. They are not designed to advance any conversation that benefits the industry.
Take a recent article from Philip Bump at The Washington Post.
To attack President Donald Trump, Mr. Bump trashes Rasmussen Reports. Unlike the pollster employed by Mr. Bump’s own outfit, Rasmussen called the last presidential election correctly.
“Trump also likes to cherry-pick polls that show what he wants to see — such as the consistently generous polls from Rasmussen Reports that several times have shown him with much more robust support from black Americans than other pollsters.”
All politicians cherry-pick polls. But it’s worth noting Rasmussen is not alone in measuring increased support for the president among black voters. Pew Research, YouGov and our own polling at Big Data Poll have measured volatile but notably higher levels of support among both black and Hispanic voters.
It is the educated white suburban swing voter weighing down the president’s numbers. But I digress.
“The RealClearPolitics average of polls at the end of the election estimated Democrats would win seven percentage points more of the House vote,” he continued. “Rasmussen’s last poll had the Republicans winning that vote.”
A column by Harry Enten for CNN repeated a similar criticism, referring to Rasmussen as “the least accurate” pollster at estimating the House popular vote.
Mr. Enten wrote the “midterm elections prove that at least for now Rasmussen is dead wrong and traditional pollsters are correct.”
Nate Silver, in a tweet also aimed at attacking the president, claimed Rasmussen “said that Republicans would win the popular vote for the U.S. House.”
First, it is not true that 2018 proved “traditional pollsters are correct.” Setting aside their narrow focus on the generic ballot to vindicate a debacle that continued throughout 2017, big media pollsters can at best claim a mixed record for 2018.
Playing right into deep public distrust, those inaccuracies again favored Democratic candidates in both timing and topline-driven headlines. It might be convenient for them to ignore their poor state-level track record, but ignoring it doesn’t make it any less true.
CNN, the very outfit for which Mr. Enten wrote that article, missed the gubernatorial contest in Florida by 13 points. For the record, they herded less than a week later with a poll that never saw coverage.
Quinnipiac University, another big media favorite “traditional” pollster, had both Democratic candidates with a 7-point edge in Florida.
They both lost.
Rasmussen was actually more accurate in September, finding defeated incumbent Democratic Senator Bill Nelson leading now-Republican Senator Rick Scott by just 1 point.
CNN missed in Tennessee by 7 points, laughably claiming a competitive race within the margin between now-Republican Senator Marsha Blackburn and Democrat Phil Bredesen. The Democrat was defeated by nearly 11 points.
In Ohio, not a single poll gave Republican Governor Mike DeWine the lead over Democrat Richard Cordray in the final months. The oft-cited NBC News/Marist Poll had Governor DeWine leading in June, but herded to a tie in their late-September survey.
The same poll gave defeated incumbent Democrat Claire McCaskill a 3-point lead over now-Republican Senator Josh Hawley, a 9-point miss. They missed by 6 points in Tennessee, a bias favoring the defeated Democrat.
The Fox Poll was equally unreliable, giving defeated incumbent Democrat Joe Donnelly a 7-point edge over now-Republican Senator Mike Braun in Indiana. It was a 13-point miss. In Missouri, Fox had McCaskill tied with Hawley, a near 7-point miss.
I could go on and on and on and on. But as we get closer to 2020, a race that will be decided at the state level, Americans should know big media pollsters have not corrected their mistakes.
Criticizing Rasmussen Reports because the president tweets their approval polls won’t change that. Only an honest conversation will.
It’s true the final Rasmussen General Congressional Ballot gave Republicans a 1-point edge in 2018, but the critics left out some important details.
It is also important for Americans to understand that the community of “analysts” and “forecasters” is largely not made up of pollsters. Those who do this for a living and put themselves out for scrutiny year after year know the impact survey wording and ordering can have.
It is not entirely true Rasmussen “said that Republicans would win the popular vote for the U.S. House,” as Mr. Silver tweeted. Their predicted composition of the electorate was fairly accurate, but their survey wording was convoluted.
This is how we typically ask the generic ballot question at Big Data Poll.
“Thinking ahead to November, if the election in your Congressional District was held today, would you vote for the Republican candidate, the Democratic candidate, or someone else?”
We make it clear we are asking about the respondent’s vote preference by party for the U.S. House by including “Congressional District.” Here is how the Rasmussen Reports Generic Ballot is worded.
“If the elections for Congress were held today, would you vote for the Republican candidate or for the Democratic candidate?”
There is no specific reference to the U.S. House. Personally, I feel wording the question this way poses a danger that respondents might give their preference for either chamber.
They acknowledged that lack of distinction in an article published on the Friday following the election, noting that “party preference are both combined into the concept of ‘Congress.’” Given the survey wording and the split result of the 2018 midterms, the final topline doesn’t surprise me.
Republicans lost a slightly higher-than-average number of seats in the U.S. House for a first-term incumbent party’s midterm. They won an above-average number in the U.S. Senate, the most since 1962.
Nevertheless, ignoring both the pollster’s response and your own outlet’s failures is doubly disingenuous.
Americans would never know it, but the private polling industry doesn’t operate like this. We talk “to” each other and learn from each other. Big media talks “at” each other and you, and learns nothing from either.
Mr. Baris is the Data Journalism Editor at PPD and Director of the PPD Election Projection Model. He is also the Director of Big Data Poll, and author of "Our Virtuous Republic: The Forgotten Clause in the American Social Contract." This analysis was originally published on People’s Pundit Daily.