
As I write, Donald Trump is less than two weeks from being inaugurated as President of the United States. For political scientists, our “what the…?” moment involves the failure of most public-opinion polls to predict the results of the 2016 election. I joined numerous colleagues in assuming a Hillary Clinton victory. The news media and even Saturday Night Live took Clinton’s victory for granted. I will never in my life forget spending Election Night watching the needle on the New York Times’ prediction meter move from strongly favoring Clinton to 100% Trump.

Comparisons to the classic “Dewey Defeats Truman” headline of 1948 are inevitable, but several differences emerge. Most notably, telephone polling was in its infancy in 1948; the methodological sophistication and advanced computer programs used today did not exist. Today, pollsters predict elections based not on a single poll or early returns, but rather on an amalgamation of many polls, plus other data. The methodology is so advanced, so well tested, that it is completely indestructible, just like the Titanic! In fairness, it should be noted that Nate Silver, the most popular proponent of this polling-amalgamation strategy, stated repeatedly that Donald Trump had a path to victory. Even so, just before Election Day, Silver’s models leaned toward a Clinton win.

What lessons can we learn from these polling-based collisions with last year’s electoral iceberg?

First, it is worth noting that political scientists were not necessarily part of the horse-race frenzy. Quite a few correctly predicted the Republican victory, using various modeling techniques. Most of those who bucked the media’s conventional wisdom have one thing in common: they looked at numbers affecting the partisan breakdown of the vote, not numbers for Hillary Clinton and Donald Trump specifically. The news media’s “horse race” coverage emphasizes polling respondents’ plans to vote for one candidate or another, while political scientists such as Michael Lewis-Beck and Charles Tien, Brad Lockerbie, and Alan Abramowitz did what political scientists (as opposed to campaign or media pollsters) usually do: they looked at fundamentals such as the state of the economy, the partisan breakdown of the electorate, historical trends, approval of the sitting President, and voter optimism about the economy, not voters’ opinions of the candidates themselves.

Why were these models so widely ignored? The answer could be summarized as, “but… Donald Trump!” More formally, many commentators (including more than a few who were political scientists or political-science-trained) assumed that Donald Trump’s quirky candidacy and high personal negatives meant that the usual partisan-breakdown models simply did not apply this year. In fact, they were onto something. The scholars cited above all predicted a higher popular-vote share for the Republican than Trump actually won, while others were even farther off, predicting percentages for the Republican nominee as high as 56% (Trump actually won just 46.1%).

If John McCain or Mitt Romney had been the Republican nominee, he might very well have won the 50%+ of the popular vote predicted by these models. So, in fact, the conventional wisdom was not completely wrong. Trump did underperform these models’ expectations, presumably due to his unusual personality, behavior, and candidacy. Yet he is still on the verge of becoming President. The results of another poll, in the very “red” state of Kansas where I research, write, and teach, may offer a clue as to why. According to respondents in the Kansas Speaks survey, Donald Trump was highly unpopular here, scoring particularly low on trustworthiness and “understands people like me.” Yet Trump won Kansas easily, and the reason is clear: not only is Kansas a heavily Republican state, but Hillary Clinton was even more unpopular here than Trump. Her worst-scoring categories in Kansas Speaks were the same as his, and Kansans rated her lower on trustworthiness and “understands people like me” than they did Trump.

In short, voters disliked both candidates, but outside of California they disliked Hillary Clinton more. The conventional wisdom before the election had this reversed, with commentators assuming that Clinton, not Trump, would be perceived as the lesser of the evils. Commentators underestimated three things: deep party ties (the vast majority of Mitt Romney’s supporters from 2012 backed Trump); the variables that usually affect elections, such as the state of the economy and optimism about it; and Hillary Clinton’s own unpopularity.

While this is conjecture on my part, I cannot resist adding that in the last three elections framed by the conventional wisdom as “a choice between the lesser of two evils” (2000, 2004, and 2016), Republicans have gained the White House each time. The tiresome “lesser evil” frame appears to be toxic to Democrats, likely because their base is less reliable about turning out to vote when it does not like the candidates.

Still, I have not yet gotten to the problem with the polls themselves. Weren’t they clearly predicting a Clinton victory, not only nationwide (which was correct), but in those Great Lakes “firewall” states that put Donald Trump in the White House?

Here’s a dirty little secret of polling: no poll has a perfectly representative sample of those being studied. Polling, like scientific tests of soil or water quality, works by sampling: drawing a subset of the thing being studied, testing it, and then drawing an inference (a logical leap) from the results for the sample to the likely condition of the whole from which that sample was drawn. We cannot really know what the water quality is in, for example, Lake Michigan, because it is impossible to test all of it. Instead, water-quality experts draw and test samples of the water, then draw inferences to the whole.
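
To make that logic concrete, here is a minimal sketch in Python of inference from a random sample. The population size, support level, and sample size are all invented for illustration; no real polling data is involved.

```python
import random

random.seed(42)

# Hypothetical population of one million voters; exactly 52% support
# Candidate A. (Invented numbers, purely for illustration.)
population = [1] * 520_000 + [0] * 480_000

sample = random.sample(population, 1_000)   # a simple random sample
estimate = sum(sample) / len(sample)        # the sample proportion

# Conventional 95% margin of error for a proportion: 1.96 * sqrt(p(1-p)/n).
margin = 1.96 * (estimate * (1 - estimate) / len(sample)) ** 0.5

print(f"Sample estimate: {estimate:.1%} +/- {margin:.1%}")
# The inference: the true value (52%) should land inside this interval
# about 95% of the time, provided the sample really was random.
```

Everything in that leap from sample to whole rests on the sample actually being random, which is where the trouble begins.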

For this to work, sampling must be done with great care. Likewise, pollsters must take pains not to over-sample certain populations and under-sample others. One classic example pertains to the time, not so far back, when most households had one landline telephone. In mixed-gender households (often married heterosexual couples), the adult woman was usually the one to answer the phone. Had pollsters simply interviewed whoever answered, the result would have been a sample heavily skewed toward women and away from men, relative to their proportions in the population. Thus, a “randomizing” technique had to be employed, such as asking to speak to the adult in the household with the next birthday.
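
A toy simulation can show why a rule like the next-birthday question helps. This sketch assumes a deliberately oversimplified world in which every household contains exactly one man and one woman and the woman always answers the landline; the setup is hypothetical, not drawn from any actual survey protocol.

```python
import random

random.seed(7)
N_HOUSEHOLDS = 10_000

def interview(use_next_birthday: bool) -> str:
    """Pick which adult in a two-adult household gets interviewed."""
    if use_next_birthday:
        # Birthdays fall essentially at random, so asking for the adult
        # with the next birthday approximates a uniform random choice.
        return random.choice(["woman", "man"])
    return "woman"  # whoever answers the phone, per our assumption

for use_rule in (False, True):
    women = sum(interview(use_rule) == "woman" for _ in range(N_HOUSEHOLDS))
    label = "next-birthday rule" if use_rule else "first answerer"
    print(f"{label}: {women / N_HOUSEHOLDS:.1%} of respondents are women")
# first answerer: 100.0% women; next-birthday rule: roughly 50%.
```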

Today, many Americans have their own cell phones, and landlines are becoming obsolete. Call “screening” is also more popular than ever. If getting something close to a random sample was hard 20 years ago, today it is nearly impossible. It is very difficult to get proportionate numbers of completed surveys from African-Americans and from people who do not speak English as a first language, for example. Randomizing methods are still used, but they are not enough.

When polling results are featured on the news, what you are hearing about are not the raw data from the poll, but rather poll results that have been “weighted” to account for the impossibility of getting a truly representative sample. Imagine that we expect 12% of the voters to be African-American, yet only 5% of the polling sample fits this description. The “weight” of each result from an African-American respondent is then multiplied upward to adjust the sample to something more representative. This process often employs “multilevel regression with post-stratification,” or, in a wonderful acronym, “Mr. P.”
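
Using the paragraph’s hypothetical 12%-versus-5% figures, a bare-bones version of weighting might look like the following sketch. This is simple cell weighting rather than full “Mr. P.,” and the support percentages are invented for illustration, not taken from any actual poll.

```python
# Shares of the expected electorate vs. shares of the completed sample,
# using the hypothetical 12%-vs-5% figures from the text.
target_share = {"group_a": 0.12, "everyone_else": 0.88}
sample_share = {"group_a": 0.05, "everyone_else": 0.95}

# Invented raw results: share of each group supporting Candidate X.
support = {"group_a": 0.90, "everyone_else": 0.45}

# Each respondent's weight is the ratio of target share to sample share.
weights = {g: target_share[g] / sample_share[g] for g in sample_share}
print(weights)  # group_a counts 2.4x; everyone else about 0.93x

raw      = sum(sample_share[g] * support[g] for g in support)
weighted = sum(target_share[g] * support[g] for g in support)
print(f"raw estimate: {raw:.1%}, weighted estimate: {weighted:.1%}")
# Weighting moves the estimate toward what a representative sample would
# have shown, but only if the assumed target shares are correct.
```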

Here’s where things went south in 2016. In order to weight the polling results, we have to know ahead of time who is going to vote. If we weighted the data based on a prediction that 12% of the electorate would be African-American, and it turns out that only 10% were, then our predictions were off. It is, of course, impossible to know who is going to vote until after they have done so, so the composition of the electorate is estimated, often using the composition of the electorate in the previous election (in this case, the 2012 Obama-Romney race). In 2012, this worked well: the electorate’s composition was similar to 2008’s, and the winning candidate was the same. Notwithstanding unnecessary media “horse race” hype, the prognosticators’ predictions in 2012 were pretty much dead-on.
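
Carrying the same invented numbers one step further, a short sketch shows how a seemingly small error in the assumed composition of the electorate can flip a close race, even when every respondent answers honestly:

```python
# Invented support numbers carried over from the weighting sketch above.
support = {"group_a": 0.90, "everyone_else": 0.45}

assumed = {"group_a": 0.12, "everyone_else": 0.88}  # used to build weights
actual  = {"group_a": 0.10, "everyone_else": 0.90}  # the real electorate

predicted = sum(assumed[g] * support[g] for g in support)
outcome   = sum(actual[g] * support[g] for g in support)

print(f"weighted poll prediction: {predicted:.1%}")  # about 50.4%
print(f"actual result:            {outcome:.1%}")    # about 49.5%
# A two-point error in one group's turnout share is enough to move a
# candidate from just above 50% to just below it.
```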

Then it all fell apart in 2016.

Put simply, the composition of the electorate changed. African-American turnout dropped, while Trump, like 1992 third-party candidate Ross Perot, pulled in voters who simply would not have voted at all had Trump not been in the race. But unlike Perot, Trump also won a major-party nomination, so he was able to combine the party’s base with those infrequent voters and pull off the victory, at least in the electorally critical states. The pollsters’ estimates of the electorate’s composition were incorrect; therefore, the weighted predictions were wrong as well.

Another possible factor in the polling inaccuracies is the “Bradley effect”: that is, Trump voters lying to pollsters about their intentions. This was a popular Election Night speculation. However, subsequent analysis indicates that the Bradley effect was, at most, only one of a number of factors involved.

Taking stock of all this, it is not yet time to invoke the famous quip about “lies, damned lies, and statistics.” In fact, many political-science-based models correctly predicted the winner, while polling data such as Kansas Speaks show how Trump could win despite his relative unpopularity (because Clinton was even more unpopular). I join fellow MPSA bloggers in calling for the news media to re-orient away from “horse race” coverage. It is underlying dynamics, not the horse race, that usually decide elections, and news consumers deserve more attention to, and analysis of, those dynamics. After all, it is things like the state of the economy and our optimism about the future, not candidates’ personal idiosyncrasies, that truly affect our lives.

About the author: Michael A. Smith is a Professor of Political Science at Emporia State University where he teaches classes on state and local politics, campaigns and elections, political philosophy, legislative politics, and nonprofit management. Read more on the MPSA blog from Smith and follow him on Twitter.