Mason-Dixon pollster Brad Coker on 'shy Trump voters,' polls and the 2020 election

November 14, 2020 at 3:55PM
This combination of Sept. 29, 2020, file photos shows President Donald Trump, left, and former Vice President Joe Biden during the first presidential debate at Case Western Reserve University and Cleveland Clinic in Cleveland. (AP Photo/Patrick Semansky, File)

The 2020 presidential election is (mostly) in the books and the political world is once again fiercely debating a familiar question from 2016: What went wrong with public-opinion polls in a number of key battleground states?

One poll that came out looking pretty good was the Star Tribune/MPR News/KARE 11 Minnesota Poll, which was conducted by Mason-Dixon Polling & Strategy of Jacksonville, Fla. The final pre-election survey, conducted in late September, found Democrat Joe Biden leading President Donald Trump by a 6-percentage-point margin in Minnesota; Biden ultimately won by 7.2 points. The poll also found Sen. Tina Smith leading Jason Lewis by an 8-point margin, and she won by 5.3 points. Both findings were within the poll's ±3.5-percentage-point margin of sampling error.
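
For context, a ±3.5-point margin of sampling error is roughly what the textbook formula for a simple random sample yields at about 800 respondents. A minimal sketch in Python, assuming the standard worst-case (50/50) formula rather than Mason-Dixon's own calculations:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95% margin of sampling error for a simple random sample;
        # p=0.5 is the worst case, z=1.96 the 95% confidence multiplier.
        return z * math.sqrt(p * (1 - p) / n)

    # About 800 respondents produces the poll's stated +/- 3.5 points:
    print(round(100 * margin_of_error(800), 1))  # 3.5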

We asked Mason-Dixon CEO and pollster Brad Coker — who by his estimation has conducted more than 5,000 polls at the state and local levels over his nearly 40-year career, including 10 presidential election cycles — to share his thoughts on polling, Minnesota and the 2020 elections. His responses to our questions, submitted by e-mail, have been lightly edited.

Can you please summarize your methodology for our readers and describe how it is similar to or different from how other major pollsters do their work?

Our methodology is fully disclosed each time we release a poll. Mason-Dixon uses live-call interviewing. We draw our sample randomly from a file of registered Minnesota voters that has been phone-matched to a land line, a mobile phone or both. We balance our sample to account for geographic distribution (by county, based on voter registration), as well as by gender and age.

Obviously, there are other methods that some use – robo-calling, online solicitation and online panels. The strengths and weaknesses of all have been widely debated, but coming into this election, most observers considered live calling of both land lines and cellphones to be the "gold standard."
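
To picture what the balancing step in Coker's answer does, here is a minimal sketch in Python, with a hypothetical gender target standing in for Mason-Dixon's actual procedure and numbers:

    import pandas as pd

    def balance(sample: pd.DataFrame, column: str, targets: dict) -> pd.Series:
        # Weight each respondent so the sample's shares in `column`
        # match the target population shares (e.g., from the voter file).
        shares = sample[column].value_counts(normalize=True)
        return sample[column].map(lambda g: targets[g] / shares[g])

    # Hypothetical: a sample that came back 58% female when the
    # voter file says the electorate is 52% female.
    poll = pd.DataFrame({"gender": ["F"] * 58 + ["M"] * 42})
    weights = balance(poll, "gender", {"F": 0.52, "M": 0.48})
    print(weights.groupby(poll["gender"]).sum())  # F ≈ 52, M ≈ 48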

There has been a lot of talk in the polling industry about underrepresentation of voters with lower levels of education, who are more likely to support Republicans. Following the 2016 election, many pollsters advocated correcting this imbalance by "weighting" poll samples to bring them in line with government education statistics. What is your take on this debate?

My view on this was, and is, contrary to the consensus that many others reached going into the 2020 election. It is worth noting that over the last 10 to 12 years, election polling has been increasingly dominated by academic polling organizations, while professional polling firms contribute far fewer polls to the mix. This has been driven by changes in the economy that have led to cuts in newsroom budgets.

As a result, many news organizations have cut costs by eliminating professional pollsters and relying heavily on "free" polls given to them by a growing cluster of small private colleges. The idea that the 2016 mistakes were attributable to an under-sampling of voters without college degrees came primarily from the academic pollsters. I never bought it, for several reasons I will expand upon later.

Many pollsters again overestimated Democratic strength in key swing states in 2020. What do you think went wrong?

As in 2016, pollsters were unable to capture a specific group of voters – what has come to be referred to as "shy Trump voters." I will say with certainty that incorrectly overcompensating for voters without college degrees was a major factor that produced less-accurate polls in 2020.

Can you explain why?

I don't know the exact answer, but overcompensation often opens doors that allow other errors to enter the mix. The micro-analyzers will eventually figure out which factors were negatively affected by the education-weighting assumption. It is the old "for every action there is an equal and opposite reaction" theory.

For example, voters without college degrees also include a large number of minority voters (both Black and Hispanic). If pollsters up-weighted non-college voters and did not control for race/ethnicity, that would have made things worse.
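
A toy calculation makes that mechanism concrete (all numbers below are invented for illustration): up-weighting every non-college respondent also up-weights non-college minority respondents, who lean the other way, so the blunt fix lands on a more Democratic estimate than a race-aware correction would:

    # Each cell: (college_degree, minority, dem_share, respondents).
    cells = [
        (True,  False, 0.55, 300),
        (False, False, 0.35, 250),
        (False, True,  0.80, 150),
        (True,  True,  0.75, 100),
    ]

    def dem_estimate(up_weight):
        # Weighted Democratic share; up_weight returns the extra
        # multiplier applied to each cell.
        num = den = 0.0
        for college, minority, dem, n in cells:
            w = n * up_weight(college, minority)
            num += w * dem
            den += w
        return num / den

    # Blunt fix: up-weight ALL non-college respondents by 1.5x.
    print(f"{dem_estimate(lambda c, m: 1.0 if c else 1.5):.3f}")  # 0.551
    # Race-aware fix: up-weight only non-college, non-minority voters.
    print(f"{dem_estimate(lambda c, m: 1.5 if not (c or m) else 1.0):.3f}")  # 0.531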

So who are the "shy Trump voters"?

Identifying the "shy Trump voter" was not difficult. In the final stretch of the 2016 presidential election, two major news organizations released nationwide polls that indicated Hillary Clinton was ahead of Donald Trump by 25 to 32 percentage points among college-educated white women. The Election Day exit poll, conducted by a consortium that included these two organizations, indicated that Clinton carried this group by only six points (51% to 45%). That discrepancy of more than 20 percentage points was the largest under-representation of the Trump vote in 2016.

Fast-forwarding to 2020, I am aware of at least one national poll that indicated Biden's lead over Trump among college-educated white women was 27 points. Although exit poll results are not completely reliable given the number of mail-in voters this cycle, the estimates I have seen suggest the actual margin was in the 10- to 15-point range. History seems to have repeated itself.

It makes complete sense that this group is the root of the actual problem. In which group would disclosing support for Trump carry more social risk, making members less likely to participate in a poll that would out them as Trump supporters – "blue-collar" voters or white women with college degrees? My opinion is that professional women working in the white-collar office world and/or living in affluent suburban neighborhoods are less likely to publicly say they support Trump than voters who work in the trades or live in working-class communities. In fact, I have found that "blue-collar" Trump voters (men and women) are not at all shy about expressing their support for him.

Have you encountered this "shy voter" phenomenon in other elections?

I first encountered it in the 1980s when conducting polls in North Carolina that included Republican Sen. Jesse Helms – a very controversial politician at the time. Using proven methods that produced accurate results for every other race on the state ballot, we repeatedly underestimated Helms' support by about 5 to 7 percentage points.

A former Ku Klux Klan leader named David Duke ran in Louisiana for the U.S. Senate in 1990 and for governor in 1991. Duke was far more controversial than Helms. Although he did not win, he made surprisingly strong showings – including beating the sitting governor in the 1991 primary. In his races, the actual support he received from voters on Election Day ran more than 10 points above what had shown up in the pre-election polls.

Can "shy voters" be reached?

Having a better awareness of the real "shy Trump voter" may not have led to significantly better polling results in 2020. I am not convinced there is a magic combination that would have increased their participation in large enough numbers. A 25-point gap is difficult to fill, and you can't arbitrarily compensate for a group of voters who are making themselves that inaccessible to pollsters. Anyone claiming they have the secret formula is selling snake oil.

To sum it up, no effort would have fully captured the "shy Trump voter." Additionally, while there will be a lingering public perception that all polling methods are flawed, I believe the problem will go away with Donald Trump – just as it did with the exits of Helms in North Carolina and Duke in Louisiana.

Are there any other groups or demographics that are harder to reach or more reluctant to participate in polls than others?

There are a variety of demographic groups that are a little harder to reach, but over time, proven procedures have been developed to solve the problem. For example, polling in heavily Hispanic areas requires the use of bilingual interviewers with questionnaires that are written in both English and Spanish.

What surprised you most this election cycle?

Only one outcome truly surprised me in 2020, and that was the U.S. Senate race in Maine. Mason-Dixon did not poll in that contest, but every published poll showed Democratic challenger Sara Gideon leading incumbent Republican Susan Collins, with the average lead at six points. Collins ended up winning by nine points. That universe comprised eight different polls conducted by eight different polling organizations within the last 60 days of the campaign. The magnitude of this collective inaccuracy is troubling.

How much has the participation rate (the percentage of people who, when called, agree to participate in a poll) changed in Minnesota over time? Is Minnesota's participation rate higher or lower than in other states?

As in all states, participation rates have dropped over the last 10 years. This is primarily the result of the shift away from land lines toward heavy reliance on mobile phones. In our most recent Minnesota polls, over 70% of the sample has come from cellphone interviews.

Participation rates have always varied from state to state and city to city. New York and California continue to be the most difficult for voter cooperation, while small states like Delaware, Wyoming and Arkansas are still the easiest. In my experience, among mid-sized states, Minnesota has generally had the highest participation rate.

It is worth mentioning that we observed a new participation pattern emerge this year. For decades, we have always found women more likely to participate in our polls than men and we have long-standing methods in place to remedy that imbalance.

However, following the primary elections, we noticed a significant drop in participation rates among women in our cellphone interviewing. Men were, for the first time, actually participating at a higher rate on mobile phones. We adjusted to address this, but the development was likely an early sign that something was amiss among female voters.

Do online polls solve any of these problems, or do they have drawbacks of their own?

No. Online polls were just as inaccurate as all the other forms of polling in 2020, and they have no track record of consistent reliability. The biggest drawback is that the samples are not truly random, and there is no reliable way to authenticate that a respondent is actually a registered voter. Some online polls use the same panel of respondents over and over, which transforms the average random voter into a systematic, repeat poll taker.

Is there any data on the performance of polls that pay respondents?

There is no evidence that I am aware of that demonstrates improved accuracy. The answers of random voters will always produce better results than going back to the same group of people again and again. Someone who becomes a "professional poll taker" is going to start to overthink the importance of every opinion they offer.

What one thing do you wish the public understood better about polling?

That poll results cannot predict the future, but can only measure what is going on at the time the poll is taken. Often, the most important results from polls are the trends they may uncover and not what the point spread indicates.

Minnesota provides an excellent example from the 1998 governor's race. At the outset, the two front-runners were Democrat Skip Humphrey and Republican Norm Coleman. The third candidate was Reform Party nominee Jesse Ventura, who trailed by about 15 points throughout the race. As Election Day grew nearer, the two leading candidates spent most of their time attacking each other while Ventura offered himself up as an outsider alternative.

About a week before the election, Mason-Dixon released a poll showing Ventura had closed the gap to just 5 points behind Humphrey and Coleman. After what many considered a solid debate performance a few days later, Ventura maintained his momentum and ended up pulling out a 3-point win. At the time, some criticized the polling because it failed to "predict" Ventura's win.

The reality is that in the final days of an election, candidates and their campaigns are capable of making smart decisions that boost their chances of winning, but they are also capable of making huge mistakes that sink them. How things play out in the days or weeks after a poll is completed can always change the outcome of an election.

This article has been updated after publication with an additional question and response about weighting according to educational attainment.

About the writer

Matt DeLong

Audience editor

Matt DeLong is an editor on the Minnesota Star Tribune's audience team. He writes Nuggets, a free, weekly email newsletter about legal cannabis in Minnesota. DeLong also oversees the Minnesota Poll and has written numerous reader-focused guides and FAQ articles on a wide range of topics.
