How to better understand the work of election forecasters and their predictive models

This work is somewhat different from the polls that we all know so well.

By Kristen Soltis Anderson

The New York Times
September 3, 2024 at 10:30PM
Stickers reading "I voted" rest beside a ballot scanner as a voter fills in a ballot at a privacy booth during Florida's primary election on Aug. 20 at a polling place at the American Legion in South Miami, Fla. (Rebecca Blackwell/The Associated Press)


We are now past Labor Day and in the homestretch of the 2024 campaign, and a lot of people are asking me and others in political polling and media: Who’s going to win in November? Is the race Donald Trump’s to lose? Can Kamala Harris turn her momentum into victory?

With people craving this peek into the future, the spotlight is intensifying on a part of my industry that isn’t especially well understood: election forecasters and their predictive models. This work is somewhat different from the polls that we all know so well. I want to lay out what election forecasting is, some of the reasons predictive models can yield such different answers (better chances for Trump in some models, better chances for Harris in others) and what people should keep in mind about forecasting and models so they don’t drive themselves crazy trying to game out the future over the next nine weeks.

First, the difference between polling and forecasting (and predictive models) boils down to this: Polls give you a snapshot of voter opinion at a particular moment. Election forecasters, by contrast, try to look ahead and assess the likelihood of a particular outcome. Forecasters draw on those polls as they build a predictive model, into which they continually feed more polls and make adjustments (more on that below) to compute the chances of a given candidate winning. So while a poll might say that Harris is ahead of Trump by two percentage points in a given state — say, 49% to 47% — a predictive model might say that she wins the presidency in 53 out of every 100 runs of the model.
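To make that poll-versus-probability distinction concrete, here is a minimal sketch of the simulation idea, not any forecaster's actual model: it assumes polling error is normally distributed with a made-up standard deviation, runs many hypothetical elections, and reports the share the leading candidate wins.

```python
import random

def win_probability(margin_pts, error_sd=3.5, runs=10_000, seed=42):
    """Toy estimate of a win probability from a single polling margin.

    margin_pts: the candidate's polled lead in percentage points (e.g., +2).
    error_sd: assumed standard deviation of polling error, in points
              (the 3.5 here is illustrative, not an empirical figure).
    Each simulated election draws a random error and adds it to the
    polled margin; the candidate wins that run if the result is positive.
    Returns the fraction of simulated elections won.
    """
    rng = random.Random(seed)
    wins = sum(1 for _ in range(runs)
               if margin_pts + rng.gauss(0, error_sd) > 0)
    return wins / runs
```

Under these assumptions, a two-point lead converts to roughly a 70% chance of winning, and a tied poll converts to roughly a coin flip — which is why a narrow polling lead and a near-50/50 forecast can describe the same race.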

Some forecasters’ models think Trump is favored slightly, some think Harris is favored slightly, and some think of the current race as a true coin flip for either candidate. (On election night, my New York Times colleague Nate Cohn gets into the short-term forecasting game with the Needle.) Polling is often misunderstood; election forecasting is even more complicated, which makes it even more likely that the results of a forecast will be badly misinterpreted.

There are considerable debates about what election forecasters should take into account. Think of an election model like a recipe for chocolate chip cookies. The goal is the same: produce the most accurate forecast of how an election will go (or produce the best chocolate chip cookie). But how you get there can vary significantly; New York Times Cooking has an editors’ collection of 16 chocolate chip cookie recipes, some requiring sea salt and one with coconut sugar.

In election forecasts, there’s the main ingredient, of course: polls. Some forecasters believe that an election model should be driven entirely by the results of public opinion polls, arguing that such polls are the only real window into how voters might behave and votes are the only metric that matters in the end. Some models might give more weight to polls with a track record of accuracy or polls conducted more recently. For instance, the Quinnipiac poll that my husband took last week has more weight in Nate Silver’s model than this more dated poll from Morning Consult but less weight than this fresh poll from Suffolk University.
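The weighting idea above can be sketched in a few lines. This is an illustrative toy, not Nate Silver's actual formula: it assumes a poll's weight decays with age on a hypothetical two-week half-life and scales with a made-up 0-to-1 pollster accuracy rating.

```python
from datetime import date

def poll_weight(poll_date, today, accuracy_score, half_life_days=14):
    """Toy weight for one poll: halve it for every `half_life_days`
    of age, and scale it by the pollster's track record (0 to 1).
    Both the half-life and the accuracy scale are assumptions."""
    age_days = (today - poll_date).days
    return accuracy_score * 0.5 ** (age_days / half_life_days)

def weighted_margin(polls, today):
    """Weighted average margin across polls.

    polls: list of (margin_pts, poll_date, accuracy_score) tuples.
    """
    total = sum(poll_weight(d, today, a) for _, d, a in polls)
    return sum(m * poll_weight(d, today, a) for m, d, a in polls) / total
```

The effect is what the paragraph describes: a fresh poll from a well-rated pollster pulls the average far more than a month-old one, so the blended margin sits much closer to the newest numbers.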

Other models include the equivalent of that sea salt or coconut sugar, taking into account factors like incumbency and the economy — sometimes called the fundamentals. This can get tricky, as you might imagine. Typically, being an incumbent is considered an asset in an election, but what happens when the incumbent is unpopular and globally we are seeing incumbent politicians being ousted left and right? (This is to say nothing of the unique strangeness of this presidential election, in which whether the sitting vice president should be considered an incumbent is a point of debate among pollsters and forecasters.)
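One common way models fold in those fundamentals — sketched here as a toy, with an assumed linear schedule that real forecasters tune empirically — is to blend a fundamentals-based prediction with the polling average, leaning more heavily on polls as Election Day approaches.

```python
def blended_forecast(poll_margin, fundamentals_margin, days_to_election,
                     max_days=200):
    """Toy blend of a poll average with a fundamentals-based margin.

    The weight on polls ramps up linearly from 0 (at `max_days` out)
    to 1 (on Election Day). The linear ramp and the 200-day window
    are illustrative assumptions, not any published model's schedule.
    """
    w_polls = 1 - min(days_to_election, max_days) / max_days
    return w_polls * poll_margin + (1 - w_polls) * fundamentals_margin
```

Far from the election, the forecast is mostly "fundamentals"; close to it, mostly polls — one reason two models reading the same polls can still disagree about who is favored.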

Or take the economy. Unemployment is rising slightly, inflation is cooling off, and the Fed is considering cutting interest rates. Which is the best indicator of how the economy may affect the election? The site 538 uses 11 economic indicators in its model. Meanwhile, 62% of voters in a recent poll said they thought the economy was bad.

Ultimately, voters will cast their ballots, and a new president will be elected, regardless of what a forecast says. So why bother? In short, a forecast gives a window into how a campaign might assess the race and make choices about which strategy to deploy. A campaign that is losing ground may need to make some major adjustments to get back on track, or a campaign that is surging may begin taking fewer risks so as not to squander its newfound advantage. Knowing who is up and who is down can help news consumers better understand why candidates and campaigns make the decisions they do.

Just as different pollsters use different methods to measure public opinion, the forecasters who use those poll results to make predictions are also using different ingredients to build their models. The solution for you, the savvy news consumer? Consume a wide variety. There’s no such thing as too many chocolate chip cookies — I mean, election models.

Kristen Soltis Anderson is a contributing Opinion writer for the New York Times, a Republican pollster, a speaker and a commentator. She is also the author of “The Selfie Vote: Where Millennials Are Leading America (and How Republicans Can Keep Up).”
