
Say It Ain’t So, Chuck

Thoughts on the Use of Online Opinion Polling by a Gray Lady and a Peacock

To qualify candidates for the recent 2015 GOP debate, Fox News used results from polls conducted by Bloomberg, CBS News, Fox News, Monmouth University and Quinnipiac University. The fact that all five studies were conducted via telephone is not a coincidence.

Although web-based surveys are prevalent within the market research industry, they have not been widely embraced by the opinion-research field, and news outlets generally avoid reporting findings based on online panels. ABC News explains the reason in its current polling standards:

Methodologically, in all or nearly all cases we require a probability-based sample, with high levels of coverage of a credible sampling frame. Non-probability, self-selected or so-called “convenience” samples, including internet opt-in, e-mail, “blast fax,” call-in, street intercept and non-probability mail-in samples do not meet our standards for validity and reliability, and we recommend against reporting them.

Probability sampling is based on the premise that nearly every person in the United States has a known and non-zero chance of being selected for the sample, and therefore precise estimates of the error between the sample and the population can be calculated.
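To make that premise concrete, here is a minimal sketch, in Python, of the sampling-error arithmetic that probability sampling makes possible. The function name and the sample sizes are illustrative assumptions, not any pollster’s actual methodology; the formula assumes a simple random sample and a proportion near 50%, the worst case.

```python
# Illustrative only: the 95% margin of error for a simple random sample.
import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval for a sample proportion."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

print(f"n=1,000:  +/- {margin_of_error(1_000):.1%}")   # roughly +/- 3.1 points
print(f"n=10,000: +/- {margin_of_error(10_000):.1%}")  # roughly +/- 1.0 point
```

This is the calculation behind the familiar “margin of error of plus or minus 3 points” line in poll reporting, and it is only valid when the sample is drawn with known probabilities.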

For much of the last century, almost every person had a home telephone, and pollsters could reasonably expect to reach a sample highly representative of the population of interest, with completion rates of 70% or higher. Given their cost and speed advantages over in-person techniques, telephone studies became the gold standard of opinion polling.

Today, however, fewer than two-thirds of people have landlines, and people have become prone to ignore calls from numbers they don’t recognize. While cell phone penetration is growing, incoming calls are even more easily avoided, and reaching respondents via their cell phones has become more heavily regulated. As a result, overall telephone completion rates are now often under 10%.

A Sign of the Changing [New York] Times

Recent polling misses, such as the wholesale underestimation of President Obama’s standing in 2012, the Conservatives’ margin of victory in this year’s UK elections, the Israeli elections, and the Greek bailout referendum, have been attributed to declining response rates, a decreasing ability to reach increasingly important demographic groups, and, overall, a reduced ability to capitalize on the advantages of probability-based random sampling.

Standing in the wings is online polling. Its advantages are that online polls can reach larger samples and report results both faster and at lower cost. Perhaps most intriguing is the potential of online polls to “learn” from experience and become more adaptive than telephone polling. The disadvantage, of course, is that online samples do not meet the standard for probability-based sampling because respondents opt in to the survey.

What online polls lack in conventional probability-sampling purity may be compensated for by pollsters’ ability to accumulate information about the voting-related attitudes and behavior of their panelists and to develop empirical determinations about the quality of individual responses. This has the potential to lead to new ways of quantifying sampling error and reducing other, non-sampling sources of survey error in order to improve predictability. In a telephone survey, one assumes that all respondents have understood the question and responded to it truthfully. In an online survey, the pollster can use other information about each respondent to test the validity of that assumption, as in the sketch below.
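As a purely hypothetical illustration of what such an empirical quality check might look like, the sketch below down-weights responses from panelists whose accumulated history, for example failed attention checks or straightlined answers, suggests lower reliability. The field names and the scoring rule are invented for this example and do not describe any actual pollster’s method.

```python
# Hypothetical sketch: weighting an online poll estimate by panelist history.
# Field names and the scoring rule are invented for illustration; they do not
# describe any pollster's actual method.
from dataclasses import dataclass

@dataclass
class Response:
    supports_candidate: bool       # this wave's answer
    attention_checks_passed: int   # accumulated panelist history
    attention_checks_total: int
    straightlined_last_wave: bool  # answered a prior wave without reading

def quality_weight(r: Response) -> float:
    """Score a response from 0 to 1 based on the panelist's track record."""
    w = r.attention_checks_passed / max(r.attention_checks_total, 1)
    if r.straightlined_last_wave:
        w *= 0.5  # discount panelists who appear to answer without reading
    return w

def weighted_support(responses: list[Response]) -> float:
    """Quality-weighted share of respondents supporting the candidate."""
    total = sum(quality_weight(r) for r in responses)
    support = sum(quality_weight(r) for r in responses if r.supports_candidate)
    return support / total if total else float("nan")
```

Nothing of the kind is possible in a one-off telephone interview, where the interviewer has no record of how the respondent has behaved in the past.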

In fact, some leading news organizations have concluded that the quality gap between telephone and online has closed.

Flash Back to 13 Months Ago

On July 27, 2014, The New York Times and CBS News began to include results from an online panel survey in their model of preferences in the then-upcoming midterm elections.

The New York Times explained their decision by saying that “[a] deluge of cheap partisan polls has swamped a shrinking number of high-quality, nonpartisan surveys, making it hard to know who is really ahead in many political campaigns. The solution? More nonpartisan surveys.” They followed that opening with a thoughtful exposition of online and telephone polling differences.

Reaction to the Times decision was sharp but divided: while the American Association for Public Opinion Research (AAPOR) issued a pointed statement condemning the use, others rebuked the AAPOR stance, with Andrew Gelman and David Rothschild characterizing it as “disturbing.”

Flash Forward to Now

On August 9, 2015, NBC News used online panel data to support reporting Chuck Todd did on Meet the Press and Nightly News about the Republican presidential primaries.

Reaction to this more recent use of online data by Chuck Todd and NBC News has been milder, even though it differed from the Times/CBS use in two more contentious ways:

  1. It was more assertive. NBC’s poll was the complete basis for the reporting, whereas the Times/CBS poll was one of many inputs to a model that aggregates results from multiple polls. If the NBC poll is wrong, then the reporting is wrong. If the Times/CBS poll is wrong, the reporting may be off, but it won’t be wrong.
  2. It was less transparent. Unlike the Times, NBC did not explain its action and did not alert its audience that the reporting might be more or less accurate than similar stories it had run, due to the new polling methodology. Its tack seemed to be to minimize the impact, with Chuck Todd characterizing the online study as “scientifically conducted” and subsequently tweeting to some doubters, simply, that he was skeptical at first but that “[t]his has been rigorously vetted.”

Should polling: a) change; b) stay the same; or, c) I don’t know

We don’t yet know whether a poll in which people opt in to the sample and are regularly interviewed online is materially different from a poll conducted randomly over the phone when only half the population can even be reached. Online polling’s track record is still limited, and the consequences of the new form of data gathering for measuring voting behavior are not yet fully known. In addition to the sample-randomness issues, online polls are less personal, because there is no direct connection with an interviewer; less accountable, because respondents can often mask their identity; and more limited in participation, because of respondent concerns about privacy.

With that said, it is also worth reiterating Nate Silver’s observation:

[All] of this must be weighed against a stubborn fact: We have seen no widespread decline in the accuracy of election polls, at least not yet. Despite their challenges, the polls have reflected the outcome of recent presidential, Senate and gubernatorial general elections reasonably well. If anything, the accuracy of election polls has continued to improve.

As a company that has made its livelihood from surveys for more than 65 years and has direct experience with the data-integrity complexities of conducting surveys online, G&R supports the use and evolution of online surveys. Their strengths in lowering costs, speeding up turnaround, and increasing sample size are substantial. However, their disadvantages in sample representativeness, respondent accountability and methodology transparency make their results different from telephone results.

Carefully curated online surveys have a place in marketing research and political polling. However, we are not yet at the point where the two can be used interchangeably. Michael Cobb recently recapped the issue succinctly when he wrote, “[Online] is the cutting edge of either a brand new way to conduct reliable surveys or the tip of the iceberg about to sink the Titanic.”

Research and news organizations that hold themselves out to be fair and balanced owe their clients and audiences notice when reporting standards change, and an explanation of the consequences of that change, if they want to retain that trust.

Here, the Gray Lady met the standard but the Peacock did not.


Postscript

September 21, 2015

Republican presidential candidate Donald Trump called out NBC for its selective use of polling results. Mr. Trump put NBC’s “Today” show co-anchor Savannah Guthrie on the defensive during a live telephone interview, arguing that the network promoted a CNN poll over its own NBC poll because, he said, his margin was smaller in the CNN survey. In the CNN poll, Trump’s lead over the other candidates dropped eight percentage points after the second debate. In the NBC poll, Trump gained 7 points from a month earlier to reach 29%. Read more at: http://tr.im/vMrlL. The CNN poll is a random telephone survey; the NBC poll is a non-random online panel survey.

