
Were the Polls Really So Wrong?

It’s time to get comfortable with uncertainty, says economist Charles Manski

Pity the poor pollsters. They’re just trying to make a living. They provide the data that help officials govern wisely, politicians reach consensus and businesses make better products.

But you, the American public, avoid pollsters like the plague. You want no part of the work they do, no part of those invasive dinnertime interruptions from phone numbers you don’t recognize. To make matters worse, a series of apparent polling failures last year, most notably surrounding Brexit and the U.S. presidential election, surprised you and prompted you to demand to know how the polls could miss so miserably.

What’s a put-upon pollster to do? Hang it up?

Hardly, but the backlash does raise interesting questions about how polls work in the digital age and what poll numbers truly mean.

The “response rate” is the term pollsters use for the increasingly difficult task of getting people to pick up the phone. In the 1970s, about 80 out of 100 calls paid off with interviews, so 2,000 calls netted roughly 1,600 responses, more than enough for a reputable poll. To get the same number of responses today, a pollster has to make about 32,000 phone calls, a response rate of roughly 5 percent.
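
The arithmetic implied by those figures is simple division; here is a minimal sketch in Python, assuming the 80 percent response rate and the 2,000- and 32,000-call figures quoted above:

```python
# A rough sketch of the response-rate arithmetic above. The 80 percent and
# the call counts are taken from the text; the rest is simple division.
calls_1970s = 2_000
response_rate_1970s = 0.80                     # ~80 of every 100 calls yielded an interview
responses = calls_1970s * response_rate_1970s  # about 1,600 completed interviews

calls_today = 32_000
response_rate_today = responses / calls_today  # implied rate for the same 1,600 interviews

print(f"Interviews per poll: {responses:.0f}")                     # 1600
print(f"Implied response rate today: {response_rate_today:.0%}")   # 5%
```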

Some experts say that statistically, such low response rates shouldn’t affect poll accuracy so long as the sample is still random. Others disagree.

“I actually am surprised that the polls are as accurate as they are,” says Northwestern economics professor Charles Manski, who adds that pollsters don’t know if nonresponse actually is random. “It is potentially a major issue.”

Another daunting problem facing political pollsters is predicting who will vote. Based on election data and census trends, the pollster tries to create a “likely voter” model. That can be tricky, though, because one of the few groups harder to pin down than the American public is the American voter. “The people responding to your poll often don’t even know themselves if they are going to vote,” observes political science professor Jamie Druckman.

So what’s the answer?

Manski says polling organizations, the media and the public need to get more comfortable with uncertainty. He argues that while concrete numbers are comforting and easy to understand, ranges more accurately represent the poll data. For example, compare these two sentences:

1) “Candidate A has 52 percent support to Candidate B’s 48 percent (with a margin of error of plus or minus 4).”

2) “Candidate A’s support is 48–56 percent vs. Candidate B’s 44–52 percent.”

With No. 1, Candidate A is sitting pretty. No. 2 is messier, no doubt, but better illustrates how close the race is. This approach would have better shown how tight last year’s campaign was and that the results from nearly all of the major polls fell within their margins of error.
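
The second sentence is nothing more than the point estimate plus or minus the margin of error. A minimal sketch in Python, using the hypothetical 52–48 numbers above (the function name is purely illustrative):

```python
# Turn a point estimate and a margin of error into the range that actually
# reflects the poll's uncertainty. The numbers are the hypothetical ones
# from the example above.
def as_range(share: float, margin: float) -> str:
    return f"{share - margin:.0f}-{share + margin:.0f} percent"

margin_of_error = 4
poll = {"Candidate A": 52, "Candidate B": 48}

for candidate, share in poll.items():
    print(f"{candidate}: {share} percent, i.e. {as_range(share, margin_of_error)}")
# Candidate A: 52 percent, i.e. 48-56 percent
# Candidate B: 48 percent, i.e. 44-52 percent
```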

Some 15 years ago, Manski went a step further and floated an idea about changing the way poll questions are asked. Instead of “Will you vote on Election Day?” Manski suggested the interviewer ask, “What’s the percent chance, 0 to 100, that you will vote on Election Day?”
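
How might such answers be used? One simple possibility, sketched below purely as an illustration (it is not a description of Manski’s proposal in detail or of any poll’s actual weighting scheme), is to weight each respondent’s candidate preference by their stated chance of voting:

```python
# An illustrative sketch of aggregating probabilistic turnout answers.
# Each (hypothetical) respondent reports a percent chance of voting and a
# preferred candidate; respondents who are likelier to vote count for more.
responses = [
    (90, "A"),   # "90 percent chance I vote; I prefer Candidate A"
    (60, "B"),
    (10, "A"),
    (80, "B"),
    (100, "A"),
]

expected_turnout = sum(chance for chance, _ in responses) / (100 * len(responses))
print(f"Expected turnout: {expected_turnout:.0%}")           # 68%

weights = {"A": 0.0, "B": 0.0}
for chance, candidate in responses:
    weights[candidate] += chance / 100.0                     # weight by turnout probability

total = sum(weights.values())
for candidate, weight in sorted(weights.items()):
    print(f"Candidate {candidate}: {weight / total:.0%} of the expected vote")
# Candidate A: 59% of the expected vote
# Candidate B: 41% of the expected vote
```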

The USC/L.A. Times Daybreak tracking poll adopted this probabilistic polling method and was the only survey in 2016 to predict Donald Trump’s victory well ahead of November.

“I’ve been arguing for years that this would make polling more accurate,” says Manski, who notes that the same uncertainties plague the federal government’s economic reports: GDP, jobless rates, inflation. Those numbers guide government policy and business decisions worth millions of dollars. Understanding the ambiguity that surrounds these figures would be well worth our time.

“The major issue is that polling has been oversold,” Manski says. “We would be better off facing up to the uncertainty.”
