Article Open Access

Understanding the 2016 US Presidential Polls: The Importance of Hidden Trump Supporters

  • Peter K. Enns, Julius Lagodny and Jonathon P. Schuldt
Published/Copyright: August 21, 2017

Abstract

Following Donald Trump’s unexpected victory in the 2016 US presidential election, the American Association for Public Opinion Research announced that “the polls clearly got it wrong” and noted that talk of a “crisis in polling” was already emerging. Although the national polls ended up being accurate, surveys just weeks before the election substantially over-stated Clinton’s lead and state polls showed systematic bias in favor of Clinton. Different explanations have been offered for these results, including non-response bias and late deciders. We argue, however, that these explanations cannot fully account for Trump’s underperformance in October surveys. Utilizing data from two national polls that we conducted in October of 2016 (n>2100 total) as well as 14 state-level polls from October, we find consistent evidence for the existence of “hidden” Trump supporters who were included in the surveys but did not openly express their intention to vote for Trump. Most notably, when we account for these hidden Trump supporters in our October survey data, both national and state-level analyses foreshadow Trump’s Election Day support. These results suggest that late-breaking campaign events may have had less influence than previously thought and the findings hold important implications for how scholars, media, and campaigns analyze future election surveys.

1 Introduction

10:30pm: Trump Showing Unexpected Strength in Battleground States

11:40pm: Trump Takes Florida, Closing In on a Stunning Upset

2:30am: TRUMP IS ON THE VERGE OF A STUNNING UPSET

2:50am: TRUMP TRIUMPHS: Shocking Upset as Outsider Harnesses Voters’ Anger

(The New York Times, Nov. 8–9)

These changing Election Night headlines from the New York Times website chronicle the unfolding surprise of Donald J. Trump’s presidential victory.[1] The American Association for Public Opinion Research reacted to Trump’s unexpected election by announcing, “The polls clearly got it wrong… and already the chorus of concerns about a ‘crisis in polling’ have emerged.”[2] Even Mr. Trump expressed surprise at the outcome (Jacobs and House 2016). In hindsight, however, Trump’s victory should not have seemed so implausible given the polls. Although many forecasts based on the polls indicated a very high probability of a Clinton victory (Katz 2016), the national polls ended up being quite accurate. And already 10 days before the election, ABC News reported that the presidential race had tightened, with Clinton ahead of Trump by just two percentage points (Langer 2016).[3]

Looking back, a bigger puzzle seems to be why surveys conducted in mid-October, just weeks before the election, substantially underestimated Trump’s electoral support. Typically, presidential polls already reflect the final vote margin three to four weeks before the election. As Erikson and Wlezien (2012: p. 66) explain, “during the heat of the campaign in October into early November, very little happens to change the national verdicts.” Shirani-Mehr et al. (2016: p. 8) similarly note, “Average [polling] error … appears to stabilize in the final weeks, with little difference in RMSE [root mean square error] one month before the election versus one week before the election.” The deviation in 2016 from these historical trends raises the question: why did Trump perform so poorly in the October polls?

The worst-case scenario for pollsters is non-response bias, which occurs when those who participated in the survey have systematically different attitudes and opinions than those who were not contacted or who refused to participate (Groves 2006). Non-response bias means the sample is not representative of the public, and thus by definition, the survey will produce biased estimates of public opinion. Although weighting survey responses can help alleviate concerns associated with non-response bias, weighting assumes that those with particular demographic characteristics who answered the survey share the same preferences (on average) as those with the same characteristics who did not answer the survey. Concerns about non-response bias have risen as industry-wide response rates have declined, in many cases below 10 percent (Pew Research Center 2012). Many scholars and prominent analysts have pointed to non-response bias as a possible explanation for why the polls (especially state polls) underestimated Trump support in the 2016 US presidential election (Linzer 2016; Mercer et al. 2016; Silver 2016b; Shepard 2017).[4] Yet, despite declining response rates, research suggests that polls continue to “provide accurate data on most political, social, and economic measures” (Pew Research Center 2012: p. 1) and errors in presidential polling have not increased as response rates have dropped (Franklin 2016). Furthermore, non-response bias seems inconsistent with the accuracy of the final national polls and with evidence that the state polls performed as well as or better in 2016 than in 2012 (Trende 2016).
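To make the weighting assumption concrete, consider the following minimal sketch (ours, not drawn from any cited study; the group labels and shares are invented for illustration). Poststratification weights rescale an unrepresentative sample to population shares, but the correction works only if respondents within each group resemble that group’s non-respondents on average:

```python
# Toy example: the sample over-represents college graduates relative to
# the population, so each group's weight is its population share divided
# by its sample share. Weighted estimates are unbiased only if, within
# each group, respondents and non-respondents hold similar preferences.
population_share = {"college": 0.35, "no_college": 0.65}  # invented figures
sample_share = {"college": 0.50, "no_college": 0.50}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'college': 0.7, 'no_college': 1.3}

# Weighted mean of a hypothetical Trump-support indicator by group:
support = {"college": 0.40, "no_college": 0.55}
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(f"Weighted support estimate: {weighted:.3f}")  # = 0.35*0.40 + 0.65*0.55
```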

The recent AAPOR report on the 2016 Election Polls in the US offers an alternative explanation for why October polls underestimated Trump support: “late deciding” voters (AAPOR 2017). The careful analyses in the report make clear that multiple factors influenced state and national election polls. Yet, the report’s focus on late-deciding voters who broke for Trump is particularly relevant for understanding Trump’s underperformance in October polls. The report concludes, “There is evidence of real late change in voter preferences in Trump’s favor in the last week or so of the campaign, especially in the states where Trump won narrowly” (p. 52).

Although some individuals may certainly have waited until the last minute to decide whether to vote and for whom, we offer a different explanation that we believe more fully accounts for why the October polls substantially underestimated support for Donald Trump: the existence of “hidden” Trump supporters. We define hidden Trump supporters as survey respondents who did not directly express an intent to vote for Trump in the survey, but who nevertheless appeared as if they would support Trump on the basis of their responses to other survey questions. Of course, it is impossible to pinpoint exactly when these respondents decided to vote for Trump, and thus we cannot rule out the possibility that they were simply late deciders. We can, however, show that hidden Trump supporters were detectable in October surveys, which holds important implications for forecasting based on polling data as well as for campaign strategy during the final weeks of an election contest. Furthermore, when we consider these hidden supporters, polls in early October come within one percentage point of the actual final vote share and we can correctly predict the winner in five of the seven swing states with 10 or more electoral college votes. Thus, if the hidden Trump supporters were simply late deciders, they were not splitting evenly between Trump and Clinton. Instead, almost a month before the election, those who did not indicate a vote intention for either major party candidate already leaned disproportionately toward Trump.

Our article proceeds as follows. First, we introduce alternative factors beyond non-response bias and late deciders that could explain systematic underestimation of Trump support: social desirability bias and “top of the head” considerations (e.g. Taylor and Fiske 1978; Zaller 1992). We then explain how the nature of the 2016 campaign could lead both of these factors to produce “hidden” Trump supporters who completed the surveys but did not openly declare their support for Trump. We test our expectations with three separate analyses based on 14 state polls and two national surveys conducted during October 2016. Supporting our expectations, the results suggest that more Trump supporters were in the data than standard vote intention questions revealed. In addition to tempering concerns about non-response bias and a “crisis” in polling, as we discuss in the conclusion, these results hold important implications for future analyses of election surveys.

2 The Case for Hidden Trump Supporters

During the 2016 election, surveys consistently found that the percentage of respondents who indicated they were undecided or choosing a third-party candidate was much larger than in previous presidential elections (Linzer 2016; Silver 2016a). Consistent with this finding, our own surveys conducted in October (described below) found that around 20 percent of respondents did not report a voting intention for either Clinton or Trump. The unusually large proportion of respondents who did not express support for either of the two major-party candidates led us to wonder on the eve of the election whether hidden Trump supporters might exist among this group (Enns and Schuldt 2016). Considering Trump’s racist (O’Connor and Marans 2016), sexist (Bahadur 2015), and bigoted comments (Moreno 2015), it seemed especially plausible that some respondents may have felt hesitant to declare their support for him out of fear of being judged negatively – a phenomenon known to psychologists and pollsters as social desirability bias (e.g. Nederhof 1985). Indeed, survey respondents sometimes “hide” behind the label “independent” to avoid associating with either of the two parties – particularly when those parties are stigmatized (Connors et al. 2016; Klar and Krupnikov 2016). The same process may have occurred with Trump supporters.

Some have argued that the lack of difference between online and phone surveys suggests that social desirability bias was not a factor for Trump supporters (e.g. Dropp 2016). We must remember, however, that although web-based surveys may reduce social desirability bias, there is no evidence that they eliminate such bias entirely (Kreuter et al. 2008); both modes may be sensitive to social desirability considerations. Furthermore, news coverage of Trump’s offensive comments could have affected survey responses in another way. Enns and Richman (2013) show that expressed vote intentions in surveys often reflect different considerations than the final vote choice. They also show that the difference between a vote intention expressed in a survey and the actual vote choice does not necessarily result because voters learn new information about the candidates. Instead, some respondents treat the survey question differently than the vote choice, bringing different information to bear.[5] In the present case, we might expect that the extremely negative news associated with Trump [e.g. his mocking a disabled reporter or boasting that he groped women against their will (Carmon 2016; Graham 2016)] would create strong “top of the head” (Taylor and Fiske 1978; Zaller 1992) considerations among Trump supporters. If so, these considerations may have led some who would eventually vote for Trump to tell pollsters that they were undecided or voting for a third party.

The key point is that either social desirability bias or top of the head considerations, or some combination thereof, could have led a subset of Trump supporters to refrain from indicating that they would vote for Trump. If so, we would expect Trump to have underperformed other Republican candidates in state polls. We would also expect that, if we could observe the true preferences of those who did not express a vote intention for either Trump or Clinton, the October polls would more closely reflect the final vote share on Election Day. We test these predictions below.

3 State Polls Underestimated Trump Support in October

We have argued that Trump supporters participated in the surveys, but negative news at the time kept some of these supporters from directly expressing their intention to vote for Trump. If our argument is correct, in states with a Senate election, we would expect state polls to systematically underrepresent support for Trump relative to the Republican candidate for Senate. By contrast, if non-response bias explains why Trump’s vote share was underestimated, we would expect support for both Trump and the Republican Senate candidate to be underestimated by roughly equivalent amounts. To test this prediction, we consider October polls from 14 states that had Senate elections.[6] We calculate the percent indicating support for Trump and the percent indicating support for the Republican candidate for Senate and then compare these values to the actual vote share each received in the state.[7]

For example, in the October poll in Arizona that we analyzed, Trump received 48.8 percent of the two-party vote intentions. Trump won Arizona with 51.9 percent of the two-party vote, indicating the poll underestimated Trump’s vote by 3.1 percentage points. We cannot, however, take this alone as evidence of hidden Trump supporters. Enten (2016) has shown that state polls underestimated Republican support in general. Thus, we must consider Trump’s performance relative to the performance of Republican Senate candidates in the same poll. In this case, the Republican candidate for Senate (John McCain) received 59.1 percent of two-party vote intentions in the poll, and he won Arizona with 56.9 percent of the two-party vote. In other words, McCain’s October poll support exceeded his final vote share by 2.2 percentage points. Subtracting McCain’s over-performance in the poll (+2.2 points) from Trump’s underperformance (−3.1 points) indicates that, relative to the Republican Senate candidate, Trump underperformed by 5.3 percentage points. If Trump systematically underperformed Republican candidates for Senate in the October polls, we take that as evidence that some of those indicating they would vote Republican in the Senate race chose not to indicate support for Trump.
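The calculation reduces to a simple difference-in-differences. The following minimal sketch is our own illustration (only the Arizona figures come from the text above):

```python
def relative_performance(cand_poll, cand_actual, ref_poll, ref_actual):
    """(Candidate poll share - actual share) minus the same quantity for a
    reference candidate. Negative values mean the candidate underperformed
    the reference in the poll; all inputs are two-party percentages."""
    return (cand_poll - cand_actual) - (ref_poll - ref_actual)

# Arizona, early-October poll (figures from the text):
trump_vs_mccain = relative_performance(
    cand_poll=48.8, cand_actual=51.9,  # Trump: poll support vs. final vote
    ref_poll=59.1, ref_actual=56.9)    # McCain: poll support vs. final vote
print(f"{trump_vs_mccain:+.1f}")  # -5.3: Trump underperformed McCain by 5.3 points
```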

Figure 1 presents Trump’s performance relative to the Republican candidate for Senate in the 14 states with relevant poll data. Negative bars indicate Trump underperformance and positive bars indicate Trump over-performance in the state polls. The figure indicates that Trump systematically underperformed. Ten of the states show the expected negative values, whereas just four states show Trump exceeding expectations. Furthermore, Trump’s average underperformance in those ten states is more than 2.5 times the average over-performance in the four states where he over-performed. In other words, not only were state polls more than twice as likely to underestimate support for Trump, but the magnitude of the difference was much greater in states that underestimated his support. Even after accounting for differences in actual vote share, Trump’s poll numbers underperformed those of other Republican candidates.[8]

Figure 1: Trump’s performance in the state polls relative to the Republican Senate candidate in early October. Negative values mean Trump underperformed. Under/over-performance calculated as: (Trump poll support − actual Trump vote share) − (Republican Senate candidate poll support − actual Republican Senate candidate vote share).

We should note that the AAPOR report on the 2016 Election Polls conducted a nearly identical analysis; however, the polls analyzed were quite different. The AAPOR report only considered surveys from the final two weeks of the campaign, and if a survey firm conducted multiple state-level surveys in the last two weeks, only the final poll was analyzed. Based on these data, the report finds no evidence that Trump systematically underperformed in state polls, which the report takes as evidence against what the report calls the “shy Trump hypothesis.” The timing of the AAPOR analysis holds important implications. The accuracy of the final national polls indicates that if hidden Trump supporters existed, they had emerged by the end of the campaign. Indeed, as we saw in the ABC report above, some national polls had aligned with the final vote outcome almost two weeks before the election. Based on these results, we would not expect hidden Trump supporters in the surveys the AAPOR report analyzed. By contrast, the evidence in Figure 1 that Trump underperformed in early October state polls is consistent with our argument that hidden Trump supporters help account for Clinton’s substantial polling lead at the time.

4 More Evidence of Hidden Trump Supporters

Relative to Republican Senate candidates, Trump underperformed in early October state polls. Our next analysis relies on two nationally representative surveys that we conducted in October 2016 – an online survey fielded by GfK (n=1541) and a phone survey (cell and landline, n=625) conducted with Cornell’s Survey Research Institute (SRI).[9] To get a sense of how our surveys compared with other surveys at the time, we begin by identifying all survey questions about presidential vote intentions in the Roper Center’s iPOLL database that were in the field during the dates of our surveys.[10]

As noted above, we focus on October because it is during this period when presidential polls typically begin to reflect the final vote margin (Erikson and Wlezien 2012: p. 66; Shirani-Mehr et al. 2016). Figure 2 illustrates that most polls at this time showed Trump trailing Clinton by a substantial margin, with Trump’s two-party vote share ranging from 41 to 48 percent. Since polls typically converge on the final outcome during this period, these numbers seemed to offer a strong basis for those predicting a Clinton victory. The two national surveys we conducted at this time (light gray bars in Figure 2) closely correspond with other surveys at the time, with the GfK survey aligning with the mean across all surveys (45% Trump support) and the SRI survey aligning with the modal survey at the time (46% Trump support). Although national-level estimates shifted in November – eventually nearly matching Trump’s actual vote share – our October results are consistent with the standard story that the polls underestimated Trump support.

Figure 2: Trump support from October 5–25, based on all national surveys in the Roper Center’s iPOLL database and our two national surveys. The vertical bars represent the number of survey questions with that vote share (indicated on the x-axis).

To further test our argument, we would like to know more about those who did not express support for Clinton or Trump in October surveys. If these individuals leaned toward Trump and we could observe their true preferences, we should see results that are more aligned with the actual election results. By contrast, if these individuals were undecided, we would expect an even split between Clinton and Trump. Or, if the barrage of negative stories about Trump was swaying these individuals, we might expect this group to lean toward Clinton.

To evaluate these competing hypotheses, we analyze a question that we included in our surveys that allows us to estimate which candidate these respondents actually supported: a forced-choice measure of candidate truthfulness that read, “If you HAD to choose, which presidential candidate do you find to be more truthful: Donald Trump or Hillary Clinton?” This question was useful for a variety of reasons. First, because this question was phrased as a forced choice (“if you HAD to choose”), we expected it to alleviate respondents’ concerns about being judged for their response – indeed, nearly every survey respondent answered it. Second, automatic associations can predict vote choice – even for those who report that they are undecided (Galdi et al. 2008; Wang and Gold 2008). In this case, we expect that the evaluative judgment of which candidate is more truthful will similarly elicit latent support for that particular candidate. Thus, for those Trump supporters who did not express their vote intention directly (because of social desirability bias or negative “top of the head” considerations), responses to the truthful question should still reflect underlying candidate preferences.

Consistent with the expectation that responses about which candidate is more truthful would reflect latent support for that candidate, for those who answered both this question and the standard vote intention question, the truthful question predicted stated vote intention with very high accuracy (97.5% in our GfK survey and 94.1% in our SRI survey). Thus, it appears that truthfulness is an excellent proxy for vote choice. Recall, however, that our goal is to identify the vote preference of the approximately 20 percent of respondents who did not express a vote intention for Trump or Clinton. Because the truthful question strongly predicts vote preference (empirically and theoretically) and because almost all respondents answered the truthful question, it is ideally suited for this purpose. For these reasons, we expected the truthful question would offer a more accurate measure of vote choice than the vote intention question.
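A hypothetical sketch of this proxy logic follows (the data frame, column names, and values are ours for illustration; this is not the authors’ analysis code). It first checks agreement between the truthful question and expressed vote intention, then uses the truthful question alone to estimate Trump’s share of the full sample, including respondents who expressed no intention:

```python
import pandas as pd

# Illustrative data: None marks respondents who named neither major-party
# candidate on the vote-intention question; nearly everyone answers the
# forced-choice "more truthful" question.
df = pd.DataFrame({
    "vote_intent":   ["Trump", "Clinton", None, "Clinton", None, "Trump"],
    "more_truthful": ["Trump", "Clinton", "Trump", "Clinton", "Clinton", "Trump"],
})

# 1) Validate the proxy among respondents who answered both questions.
both = df.dropna(subset=["vote_intent"])
agreement = (both["vote_intent"] == both["more_truthful"]).mean()
print(f"Proxy agreement among expressed voters: {agreement:.1%}")

# 2) Estimate Trump's two-party share from the proxy across the full
#    sample, which now covers respondents with no expressed intention.
trump_share = (df["more_truthful"] == "Trump").mean()
print(f"Full-sample Trump share via the proxy: {trump_share:.1%}")
```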

Figure 3 tests this prediction with our online GfK survey and our SRI phone survey, both of which were conducted well in advance of Election Day (October 5 through 25). In both, analyzing the full sample (using the truthful question) produces an estimate of Trump’s vote share that is substantially closer to – and within one percentage point of – the actual vote share he received (dashed horizontal line). The GfK result is particularly noteworthy because more than 90 percent of respondents completed the survey before October 15, three-and-a-half weeks prior to the election. Although these results cannot directly confirm our hypothesized mechanisms, if the polls suffered from non-response bias and excluded Trump supporters, analyzing the full sample should not improve the estimates. Additionally, if those who did not express a vote intention were truly undecided (with an equal probability of voting for Clinton or Trump), or if they were leaning toward Clinton based on the barrage of negative Trump coverage, this result should not emerge.

Figure 3: The percent indicating support for Trump (among Trump and Clinton supporters) in two national surveys (GfK and SRI) in October, based on expressed vote intentions and the truthful question.

5 Solving Correlated Error in State-Level Estimates: An MRP Analysis

The results above are consistent with hidden Trump supporters in both state and national surveys in October. As a final test of our argument, we return to our “truthful” question and analyze responses at the state level. The results in Figure 3 suggested that this question allowed us to identify hidden Trump supporters (also see Enns and Schuldt 2016). The results in Figure 1 also showed that the state polls systematically underestimated Trump support, a result that has been referred to as “correlated error.” If some of this correlated error resulted because a subset of Trump voters in the sample did not express a vote intention, we should no longer systematically underestimate support for Trump across states when we analyze responses to our truthful question at the state level.

Because our data are based on a national survey, we generate state-level estimates of Trump support (based on the truthful question) with a common approach called multilevel regression and poststratification (MRP).[11] MRP proceeds in three steps. First, we use a multilevel model to estimate the relationship between whether the respondent indicated Clinton or Trump is more truthful and the respondent’s demographic characteristics, state, and region.[12] Based on these results, we then predict the probability of indicating that Trump is more truthful for each demographic-geographic “type” (e.g. Latino females, age 18–29, with a college degree, in New York). Finally, we use census data to poststratify (i.e. weight) the responses to match actual state population values, which allows us to estimate the proportion supporting Trump in each state.[13] Importantly, our statistical model only includes demographic variables, state, and region. Thus, beyond the census data used for weighting responses, we are not adding additional information to our data. We report the results for the seven swing states that have 10 or more electoral college votes. We focus on these states because they matter most for the election outcome, and because states with fewer electoral college votes have smaller populations and are more sparsely represented in our survey.[14]
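To illustrate the poststratification step only (the multilevel modeling itself is beyond a short sketch), the fragment below is a simplified sketch with invented cell predictions and census counts for a single hypothetical state; it is not the authors’ code:

```python
import pandas as pd

# Each row is one demographic-geographic "type" within a state: a model-
# predicted probability of naming Trump as more truthful, plus that
# cell's census population count (all values invented for illustration).
cells = pd.DataFrame({
    "cell":     ["white_f_18-29_college", "white_m_30-44_nocollege",
                 "latina_18-29_college"],
    "p_trump":  [0.38, 0.61, 0.22],          # multilevel-model predictions
    "census_n": [120_000, 310_000, 45_000],  # population counts per cell
})

# Poststratify: a population-weighted average of the cell predictions
# yields the state-level estimate of Trump support.
estimate = (cells["p_trump"] * cells["census_n"]).sum() / cells["census_n"].sum()
print(f"Poststratified state estimate: {estimate:.1%}")  # about 51.5%
```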

Since we are using a single national-level survey to estimate state-level support, the results are bound to include substantial uncertainty (horizontal bars in Figure 4). Nevertheless, two important patterns stand out. First, consistent with expectations, the errors do not appear to be correlated: our estimates (solid dots) over-state Trump’s actual vote share (hollow diamonds) in four states and underestimate Trump support in three states. We would not expect this pattern if non-response bias were the source of correlated error in state polls. Second, our estimates predict the outcome in five of these seven key battleground states, including Wisconsin and Pennsylvania, where state polls made Trump victories appear extremely unlikely.[15]

Figure 4: Trump support in October based on our GfK poll (solid dots) and actual vote share (hollow diamonds) for seven battleground states with 10 or more electoral college votes. Horizontal bars reflect 95% uncertainty estimates. For NC, the MRP estimate and actual vote overlap, obscuring the hollow diamond.

Of course, as a post-hoc analysis, any interpretations must be made with caution. At the same time, it is clear that our data would not predict a Clinton landslide in the Rust Belt. Considering that Clinton was the first major-party nominee since 1972 not to set foot in Wisconsin, a state where Trump ads outnumbered Clinton ads (DeFour 2016), our evidence that Clinton’s lead was not as secure as it appeared is particularly striking.

6 Implications for the 2016 Election Polls and Beyond

Although the conversation about the 2016 election polls will continue, our results reinforce recent findings that cast doubt on non-response bias as a primary reason that the polls underestimated Trump support (e.g. AAPOR 2017). Across two separate national survey samples, when we analyze the standard voting intention question, our results align with the many other October polls that showed Trump’s support well below Clinton’s. However, when we instead analyze an alternative candidate preference question that nearly all respondents answered – whether respondents viewed Trump or Clinton as more truthful – our results paint a much different picture. Both surveys come within one percentage point of Trump’s actual vote share, and a state-level analysis of this question not only eliminates the correlated error problem found in many state surveys but also suggests much improved chances of a Trump victory in several key swing states. We would not expect this pattern if the polls had systematically missed a subset of Trump voters. Our analysis of 14 state polls offered further evidence that some likely Trump voters did not directly indicate support for Trump in surveys. Based on these analyses, it appears that there was an important group of respondents in October polls who were surveyed but did not directly voice their intention to vote for Trump. Although our data cannot speak directly to the psychological motives and processes of survey respondents, either social desirability bias or “top of the head” considerations (or both) could have led to the pattern of hidden Trump supporters we observe.

These results also hold implications for whether late-breaking news, such as FBI Director Comey’s October 28 announcement that the FBI was reopening an investigation into Hillary Clinton’s private email server, influenced the election outcome. In a close election – such as this one, in which Clinton lost Michigan, Wisconsin, and Pennsylvania by fewer than 80,000 total votes (Leip 2016) – it is plausible that such news could indeed sway the outcome. Additionally, we cannot completely rule out the possibility that these hidden Trump supporters made their voting decision in the final week or two of the campaign (Blake 2016; AAPOR 2017). Yet, our analyses do not support the idea that these individuals were late deciders who were equally likely to go toward Clinton or Trump. Much of Trump’s Election Day support was detectable in early October. To the extent these individuals were influenced by late-breaking news that favored Trump (or harmed Clinton), our results suggest they were already predisposed to this outcome in early October.

Our findings also hold important implications for future survey research. Of course, it would be unreasonable to conclude that the polls should have correctly predicted the election outcome. Although polls can help improve forecasts, they are designed to measure preferences at the time the poll was conducted, not to forecast election outcomes. Furthermore, extremely close state races, unanticipated events, and the challenge of predicting voter turnout all complicate the task of using polls to forecast elections. Our analyses do, however, suggest strategies for evaluating the certainty (or lack thereof) of election forecasts. We were able to use an alternate measure (i.e. candidate truthfulness) to approximate vote preferences for the unusually high proportion of respondents who did not indicate an intention to vote for either the Democratic or Republican candidate. It is important to note that candidate truthfulness may not be as salient a consideration in future elections as it was in 2016. Thus, questions about candidate truthfulness may not always correspond with vote intentions. However, in future elections, researchers can identify other relevant questions that theoretically (based on the campaign issues) and empirically predict vote intentions.[16] The current findings also suggest that less direct measures, such as web browsing habits (Wang 2016) or implicit measures (Galdi et al. 2008), could prove useful for imputing the preferences of nominally undecided voters.

Finally, our MRP analysis suggests that Clinton’s lead was much less certain in key swing states than state polls suggested. In future elections, researchers should evaluate whether state-level MRP estimates based on national surveys match the conclusions of state-level surveys. Different results would not reveal which outcome was more likely, but would provide important evidence that the outcome was less certain than expected on the basis of state-level polls alone. Considering how much Trump’s victory came as a surprise, improving our understanding of the uncertainty around election surveys would be a major benefit to researchers, media, and the public.


Corresponding author: Peter K. Enns, Associate Professor, Department of Government, Executive Director, Roper Center for Public Opinion Research, Cornell University, Ithaca, USA

Appendix 1: State Polls Analyzed in Figure 1

As explained in the text, the state polls analyzed in Figure 1 were selected based on the following criteria: a sample of likely voters, live telephone interviews, fielded around early October, and vote-intention questions for both the presidential and Senate races. If more than one survey in a single state met these criteria, we selected the survey conducted closest to October 1. Additional survey details appear below in Table A.1.

Table A.1:

Details for State Surveys Analyzed in Figure 1

State Polling organization Dates in the field Sample size Link
Arizona Emerson 10/2–10/4 600 http://www.realclearpolitics.com/docs/2016/Emerson_final_Press_Release_and_Toplines_Fl-Nv-RI-AZ-_10.5_.pdf
Colorado Monmouth 9/29–10/2 400 https://www.monmouth.edu/WorkArea/DownloadAsset.aspx?id=40802211216
Florida NBC/WSJ/Marist 10/3–10/5 700 http://www.nbcnews.com/politics/2016-election/polls-clinton-ahead-florida-pennsylvania-n662076
Georgia Landmark Communications 10/11–10/12 1400 https://www.realclearpolitics.com/docs/2016/Landmark_Poll_Georgia_Statewide_Oct_11-12_2016.pdf
Illinois The Simon Poll/SIU 9/27–10/2 865 http://www.realclearpolitics.com/docs/2016/Simon-SIU_Poll_Sept_2016.pdf
Indiana WTHR/Howey Politics 10/3–10/5 600 http://www.wthr.com/article/exclusive-wthrhpi-poll-clinton-trump-presidential-race-tightens
Iowa Des Moines Register 10/3–10/6 642 http://www.desmoinesregister.com/story/news/politics/iowa-poll/2016/10/10/iowa-poll-grassley-leads-judge-in-senate-race-by-17-points/91824228/
Missouri Monmouth 10/9–10/11 406 http://www.monmouth.edu/polling-institute/reports/MonmouthPoll_MO_101216/
Nevada CBS News/YouGov 10/12–10/14 996 https://www.scribd.com/document/327758568/CBS-News-Battleground-Tracker-Nevada-Oct-16#from_embed
New Hampshire UMass Lowell/7 News 10/7–10/11 517 https://www.uml.edu/docs/TOPLINE-UMassLowell-7NEWS NH GENERAL20161013_tcm18-262711.pdf
North Carolina NBC/WSJ/Marist 10/10–10/12 743 http://maristpoll.marist.edu/wp-content/misc/NCpolls/NC161010/NBCNews_WSJ_Marist Poll_North-CarolinaTablesofAdultsandRegisteredVoters_October2016.pdf#page=3
Ohio Monmouth 10/1–10/4 405 http://www.monmouth.edu/polling-institute/reports/MonmouthPoll_OH_100516/
Pennsylvania NBC/WSJ/Marist 10/3–10/6 709 http://www.nbcnews.com/politics/2016-election/polls-clinton-ahead-florida-pennsylvania-n662076
Wisconsin CBS News/YouGov 10/5–10/7 993 https://www.scribd.com/document/326944222/CBS-News-Battleground-Tracker-Wisconsin-Oct-9-2016#from_embed

Appendix 2: National Polls Analyzed in Figure 2

The data in Figure 2 come from the Roper Center for Public Opinion Research iPOLL database. The survey questions reported in Figure 2 typically followed the format, “If the (2016) presidential election were being held today and the candidates were Hillary Clinton and Tim Kaine, the Democrats, Donald Trump and Mike Pence, the Republicans, Gary Johnson and Bill Weld of the Libertarian Party, and Jill Stein and Ajamu Baraka of the Green Party, for whom would you vote? Would you lean toward Clinton and Kaine, Trump and Pence, Johnson and Weld, or Stein and Baraka?” After asking this initial vote intention question, several surveys followed up with respondents who indicated a third-party candidate, did not know, or gave no answer by asking, “If the only candidates were Hillary Clinton and Tim Kaine the Democrats and Donald Trump and Mike Pence the Republicans, for whom would you vote?” To ensure the results in Figure 2 are as representative as possible, we include the results from both formats when they are available. Additional survey details follow.

  • USABC.102316.R03: ABC News. ABC News Poll, Oct, 2016 [survey question]. USABC.102316.R03. ABC News [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USABC.102516.R03: ABC News. ABC News Poll, Oct, 2016 [survey question]. USABC.102516.R03. ABC News [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USABC.102316.R05: ABC News. ABC News Poll, Oct, 2016 [survey question]. USABC.102316.R05. ABC News [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USABC.102516.R05: ABC News. ABC News Poll, Oct, 2016 [survey question]. USABC.102516.R05. ABC News [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USABC.102616.R03: ABC News. ABC News Poll, Oct, 2016 [survey question]. USABC.102616.R03. ABC News [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USABC.102616.R05: ABC News. ABC News Poll, Oct, 2016 [survey question]. USABC.102616.R05. ABC News [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USABCWP.102716.R04: ABC News/Washington Post. ABC News/Washington Post Poll, Oct, 2016 [survey question]. USABCWP.102716.R04. ABC News/Washington Post [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USABCWP.102716.R03: ABC News/Washington Post. ABC News/Washington Post Poll, Oct, 2016 [survey question]. USABCWP.102716.R03. ABC News/Washington Post [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USABCWP.101616.R05: ABC News/Washington Post. ABC News/Washington Post Poll, Oct, 2016 [survey question]. USABCWP.101616.R05. ABC News/Washington Post [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USABCWP.101616.R03: ABC News/Washington Post. ABC News/Washington Post Poll, Oct, 2016 [survey question]. USABCWP.101616.R03. ABC News/Washington Post [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USARG.102116A.R01: American Research Group. American Research Group Poll, Oct, 2016 [survey question]. USARG.102116A.R01. American Research Group [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USAP.102616G.R05: Associated Press. Associated Press/GfK Knowledge Networks Poll, Oct, 2016 [survey question]. USAP.102616G.R05. GfK Knowledge Networks [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USPRRI.101916.R03: Brookings Institution. PRRI/Brookings Survey, Oct, 2016 [survey question]. USPRRI.101916.R03. PRRI [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USORC.20161023.Q05: Cable News Network. CNN/ORC International Poll, Oct, 2016 [survey question]. USORC.20161023.Q05. ORC International [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USORC.102416.R01A: Cable News Network. CNN/ORC International Poll, Oct, 2016 [survey question]. USORC.102416.R01A. ORC International [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USCBS.101716.R04: CBS News. CBS News Poll, Oct, 2016 [survey question]. USCBS.101716.R04. CBS News [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USCBS.101716.R07: CBS News. CBS News Poll, Oct, 2016 [survey question]. USCBS.101716.R07. CBS News [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USPSRA.102716K.RV03: Henry J. Kaiser Family Foundation. Kaiser Health Tracking Poll, Oct, 2016 [survey question]. USPSRA.102716K.RV03. Princeton Survey Research Associates International [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USPSRA.102716K.RV02: Henry J. Kaiser Family Foundation. Kaiser Health Tracking Poll, Oct, 2016 [survey question]. USPSRA.102716K.RV02. Princeton Survey Research Associates International [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USPSRA.102716.R011A: Pew Research Center for the People & the Press. Pew Research Center for the People & the Press Political Survey, Oct, 2016 [survey question]. USPSRA.102716.R011A. Princeton Survey Research Associates International, Abt SRBI [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USPSRA.102716.R010A: Pew Research Center for the People & the Press. Pew Research Center for the People & the Press Political Survey, Oct, 2016 [survey question]. USPSRA.102716.R010A. Princeton Survey Research Associates International, Abt SRBI [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USQUINN.101916.R01: Quinnipiac University Polling Institute. Quinnipiac University Poll, Oct, 2016 [survey question]. USQUINN.101916.R01. Quinnipiac University Polling Institute [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USQUINN.100716.R02: Quinnipiac University Polling Institute. Quinnipiac University Poll, Oct, 2016 [survey question]. USQUINN.100716.R02. Quinnipiac University Polling Institute [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USQUINN.100716.R01: Quinnipiac University Polling Institute. Quinnipiac University Poll, Oct, 2016 [survey question]. USQUINN.100716.R01. Quinnipiac University Polling Institute [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

  • USPRRI.101116.R04: The Atlantic. PRRI/The Atlantic Survey, Oct, 2016 [survey question]. USPRRI.101116.R04. PRRI [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, iPOLL [distributor], accessed Jun-13-2017.

Appendix 3: MRP Results for All 11 Swing States

Figure 4 in the text reported our MRP estimates and the actual vote share for the seven swing states with 10 or more electoral college votes. Figure A1 presents the results for all 11 swing states. Not surprisingly, our MRP estimates are much less accurate for the four states with fewer than 10 electoral college votes (CO, IA, NH, and NV). These states, after all, have smaller populations and are the least well represented in our surveys. Nevertheless, the key result holds. Unlike state surveys, which consistently underestimated Trump support, even when we consider all 11 swing states, our estimates do not reflect “correlated errors.” Our estimates (solid dots) over-state Trump’s actual vote share (hollow diamonds) in five states and underestimate Trump support in six states. Although more data would certainly improve our estimates, by employing MRP with one national-level survey from October, we do not systematically underestimate Trump support.

Figure A1: Trump support in October based on our GfK poll (solid dots) and actual vote share (hollow diamonds) for all 11 battleground states. Horizontal bars reflect 95% uncertainty estimates. For NC, the MRP estimate and actual vote overlap, obscuring the hollow diamond.

References

American Association for Public Opinion Research (AAPOR) (2017) “An Evaluation of 2016 Election Polls in the United States.” http://www.aapor.org/getattachment/Education-Resources/Reports/AAPOR-2016-Election-Polling-Report.pdf.aspx.

Arcuri, Luciano, Luigi Castelli, Silvia Galdi, Cristina Zogmaister and Alessandro Amadori (2008) “Predicting the Vote: Implicit Attitudes as Predictors of the Future Behavior of Decided and Undecided Voters,” Political Psychology, 29(3):369–387. doi:10.1111/j.1467-9221.2008.00635.x.

Bahadur, Nina (2015) “18 Real Things Donald Trump Has Actually Said About Women,” Huffington Post, August 19, sec. Women. http://www.huffingtonpost.com/entry/18-real-things-donald-trump-has-said-about-women_us_55d356a8e4b07addcb442023.

Blake, Aaron (2016) “How America Decided, at the Last Moment, to Elect Donald Trump,” The Washington Post, November 17.

Carmon, Irin (2016) “Trump’s Worst Offense? Mocking Disabled Reporter, Poll Finds,” NBC News, August 11. Accessed January 4. http://www.nbcnews.com/politics/2016-election/trump-s-worst-offense-mocking-disabled-reporter-poll-finds-n627736.

Cohn, Nate, Josh Katz and Kevin Quealy (2016) “Putting the Polling Miss of the 2016 Election in Perspective,” The New York Times. https://www.nytimes.com/interactive/2016/11/13/upshot/putting-the-polling-miss-of-2016-in-perspective.html.

Connors, Elizabeth, Samara Klar and Yanna Krupnikov (2016) “There May Have Been Shy Trump Supporters After All,” The Washington Post, November 12. https://www.washingtonpost.com/news/monkey-cage/wp/2016/11/12/there-may-have-been-shy-trump-supporters-after-all/.

DeFour, Matthew (2016) “Donald Trump Wins Presidency after Stunning Victory in Wisconsin,” Associated Press, November 8. http://interactives.ap.org/2016/general-election/.

Dropp, Kyle (2016) “How We Conducted Our ‘Shy Trumper’ Study,” Morning Consult. https://morningconsult.com/2016/11/03/shy-trump-social-desirability-undercover-voter-study/.

Enns, Peter K. and Julianna Koch (2013) “Public Opinion in the US States: 1956 to 2010,” State Politics & Policy Quarterly, 13(3):349–372. doi:10.1177/1532440013496439.

Enns, Peter K. and Brian Richman (2013) “Presidential Campaigns and the Fundamentals Reconsidered,” The Journal of Politics, 75(3):803–820. doi:10.1017/S0022381613000522.

Enns, Peter K. and Jonathon P. Schuldt (2016) “Are There Really Hidden Trump Voters?” The New York Times, November 7. http://www.nytimes.com/2016/11/07/opinion/are-there-really-hidden-trump-voters.html.

Enten, Harry (2016) “‘Shy’ Voters Probably Aren’t Why The Polls Missed Trump,” FiveThirtyEight. https://fivethirtyeight.com/features/shy-voters-probably-arent-why-the-polls-missed-trump/.

Erikson, Robert S. and Christopher Wlezien (2012) The Timeline of Presidential Elections: How Campaigns Do (and Do Not) Matter. Chicago: University of Chicago Press. doi:10.7208/chicago/9780226922164.001.0001.

Franklin, Charles (2016) “The Polls Are Not Broken. Say It Again: The Polls Are Not Broken,” The Washington Post, June 2. Accessed January 3. https://www.washingtonpost.com/news/monkey-cage/wp/2016/06/02/the-polls-are-not-broken-say-it-again-the-polls-are-not-broken/.

Galdi, Silvia, Luciano Arcuri and Bertram Gawronski (2008) “Automatic Mental Associations Predict Future Choices of Undecided Decision-Makers,” Science, 321(5892):1100–1102. doi:10.1126/science.1160769.

Gelman, Andrew and Thomas C. Little (1997) “Poststratification into Many Categories Using Hierarchical Logistic Regression,” Survey Methodology, 23(2):127–135.

Graham, David A. (2016) “Trump Brags About Groping Women,” The Atlantic, October 7. http://www.theatlantic.com/politics/archive/2016/10/the-trump-tapes/503417/.

Groves, Robert M. (2006) “Nonresponse Rates and Nonresponse Bias in Household Surveys,” Public Opinion Quarterly, 70(5):646–675. doi:10.1093/poq/nfl033.

Hillygus, D. Sunshine (2011) “The Evolution of Election Polling in the United States,” Public Opinion Quarterly, 75(5):962–981. doi:10.1093/poq/nfr054.

Jacobs, Jennifer and Billy House (2016) “Trump Says He Expected to Lose Election Because of Poll Results,” Bloomberg Politics. https://www.bloomberg.com/politics/articles/2016-12-14/trump-says-he-expected-to-lose-election-because-of-poll-results.

Katz, Josh (2016) “2016 Election Forecast: Who Will Be President?” The New York Times, July 19. http://www.nytimes.com/interactive/2016/upshot/presidential-polls-forecast.html.

Klar, Samara and Yanna Krupnikov (2016) Independent Politics: How American Disdain for Parties Leads to Political Inaction. New York: Cambridge University Press. doi:10.1017/CBO9781316471050.

Kreuter, Frauke, Stanley Presser and Roger Tourangeau (2008) “Social Desirability Bias in CATI, IVR, and Web Surveys,” Public Opinion Quarterly, 72(5):847–865. doi:10.1093/poq/nfn063.

Langer, Gary (2016) “Shift in the Electorate’s Makeup Tightens the Presidential Contest,” ABC News. http://abcnews.go.com/Politics/shift-electorates-makeup-tightens-presidential-contest-poll/story?id=43142198.

Lax, Jeffrey R. and Justin Phillips (2009) “How Should We Estimate Public Opinion in the States?” American Journal of Political Science, 53(1):107–121. doi:10.1111/j.1540-5907.2008.00360.x.

Leip, David (2016) “Dave Leip’s Atlas of U.S. Presidential Elections.” http://uselectionatlas.org/.

Linzer, Drew (2016) “The Forecasts Were Wrong. Trump Won. What Happened?” Votamatic, November 16. http://votamatic.org/the-forecasts-were-wrong-trump-won-what-happened/.

Mercer, Andrew, Claudia Deane and Kyley McGeeney (2016) “Why 2016 Election Polls Missed Their Mark,” Pew Research Center.

Moreno, Carolina (2015) “9 Outrageous Things Donald Trump Has Said About Latinos,” Huffington Post, August 31, sec. Latino Voices. http://www.huffingtonpost.com/entry/9-outrageous-things-donald-trump-has-said-about-latinos_us_55e483a1e4b0c818f618904b.

Nederhof, Anton J. (1985) “Methods of Coping with Social Desirability Bias: A Review,” European Journal of Social Psychology, 15(3):263–280. doi:10.1002/ejsp.2420150303.

O’Connor, Lydia and Daniel Marans (2016) “Here Are 13 Examples Of Donald Trump Being Racist,” Huffington Post, February 29. http://www.huffingtonpost.com/entry/donald-trump-racist-examples_us_56d47177e4b03260bf777e83.

Pew Research Center (2012) “Assessing the Representativeness of Public Opinion Surveys,” Pew Research Center for the People and the Press, May 15. http://www.people-press.org/2012/05/15/assessing-the-representativeness-of-public-opinion-surveys/.

Politico (2016) “The Battleground Project: 2016 Swing State Map, List, Polls & News,” Election Hub. http://www.politico.com/2016-election/swing-states.

Shepard, Stephen (2017) “Democrats Burned by Polling Blind Spot,” Politico. http://www.politico.com/story/2017/03/democrats-trump-polling-236560.

Shirani-Mehr, Houshmand, David Rothschild, Sharad Goel and Andrew Gelman (2016) “Disentangling Bias and Variance in Election Polls,” Unpublished Manuscript. https://5harad.com/papers/polling-errors.pdf.

Silver, Nate (2016a) “Election Update: Where Are The Undecided Voters?” FiveThirtyEight, October 25. http://fivethirtyeight.com/features/election-update-where-are-the-undecided-voters/.

Silver, Nate (2016b) “Pollsters Probably Didn’t Talk To Enough White Voters Without College Degrees,” FiveThirtyEight, December 1. http://fivethirtyeight.com/features/pollsters-probably-didnt-talk-to-enough-white-voters-without-college-degrees/.

Taylor, Shelley E. and Susan T. Fiske (1978) “Salience, Attention, and Attribution: Top of the Head Phenomena.” In: (Berkowitz, L., ed.) Advances in Experimental Social Psychology, Vol. 11. New York: Academic Press, pp. 250–288. doi:10.1016/S0065-2601(08)60009-X.

Trende, Sean (2016) “It Wasn’t the Polls That Missed, It Was the Pundits,” RealClearPolitics, November 12. http://www.realclearpolitics.com/articles/2016/11/12/it_wasnt_the_polls_that_missed_it_was_the_pundits_132333.html.

Wang, Sam (2016) “Google-Wide Association Studies,” Princeton Election Consortium, April 26. http://election.princeton.edu/2016/04/26/google-wide-association-studies/.

Wang, Sam and Joshua Gold (2008) “You’re Undecided Now,” The New York Times, October 31. https://mobile.nytimes.com/2008/10/31/opinion/31iht-edwang.1.17418207.html.

Zaller, John R. (1992) The Nature and Origins of Mass Opinion. New York: Cambridge University Press. doi:10.1017/CBO9780511818691.


Article note:

A previous version of this paper was presented at the 2017 Annual Conference of the American Association for Public Opinion Research in New Orleans, LA. We would like to thank Claudia Deane, Gary Langer, Sam Wang, Kathleen Weldon, and Chris Wlezien for helpful comments. We also thank GfK and Cornell’s Survey Research Institute for their support and Alex Rauter for his contribution to the surveys.


Published Online: 2017-08-21
Published in Print: 2017-10-26

©2017 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
