This presidential election has been different. Historically unpopular major-party nominees have highlighted a politically polarized American electorate, and the election has revealed a public deeply dissatisfied with its political institutions and the direction of politics in the United States. Almost half of those surveyed were unsure that they would even vote on Election Day, and many of those who did plan to cast a ballot lacked confidence that their votes would be properly tabulated. Even as a projected record number of voters turn out in today’s elections, numerous reports of voting problems across the country have surfaced, potentially lending credence to claims of a “rigged election”.
However, what may help alleviate the concerns of voters who fear their ballots will not be correctly counted, and undercut those claims of a rigged system, is something else that makes this presidential election different: an Election Day experiment in real-time vote tabulation. For the first time, voters (and non-voters) won’t have to wait until polls close to find out what happened while they were open. VoteCastr, a data startup, has partnered with Slate and Vice News to publish real-time projections of which nominee is winning at any given moment of the day in seven battleground states: Florida, Iowa, Nevada, New Hampshire, Ohio, Pennsylvania, and Wisconsin.
While this experiment is controversial and has experienced some early issues, it is a potentially tremendous development in election forecasting, because it is not forecasting in the traditional sense but prediction based on real-time data monitoring. It is the same kind of system the campaigns themselves use to track voting activity throughout the day and frame their Election Day messages. Here’s how the process works, via Slate:
“The project can be broken down into two phases: what happens before Tuesday, and what happens on the day itself. In the lead-up to Election Day, VoteCastr conducted large-sample surveys in eight battleground states. Unlike a typical media poll that might ask hundreds of respondents dozens of questions, these surveys presented thousands of people with just a handful of queries each. The results were then run through predictive models to determine the probabilities of each voter in each of the eight states casting a ballot for Clinton, Trump, Gary Johnson, or Jill Stein. (VoteCastr did not include Evan McMullin in its models. The independent candidate is only on the ballot in two states we are tracking, Colorado and Iowa.)
“The other piece of the pre–Election Day puzzle is early voting, which now accounts for an estimated 30 to 40 percent of the general election vote. Local officials collect and report information about who voted early in each state in advance of the election, and VoteCastr then compares that public info with its own private voter files. To understand how this works in practice, consider my early ballot, which I cast in Iowa City last week. Though VoteCastr doesn’t know who I voted for, it can make an educated guess based on the things it does know about me: my age, race, and party registration. Our friends at VoteCastr tell me the model believes there’s a 97 percent chance I voted for Clinton. When my name shows up on the list of people who voted early in the Hawkeye State, VoteCastr will use that number to fill in the blank. These voter preference estimates allow VoteCastr to make more specific forecasts about the early voting split than most other modelers, which simply sort returned ballots by party registration. (For what it’s worth, the model got it right in my case: I voted for Clinton.)
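The early-vote step described above reduces to a simple aggregation: each early voter carries a modeled probability of having voted for each candidate, and summing those probabilities yields an expected vote split. The following is a minimal sketch of that arithmetic; the voter records and probabilities are invented for illustration and are not VoteCastr’s actual data or model.

```python
# Each record: modeled probability that one early voter chose each candidate,
# estimated from traits the modeler knows (age, race, party registration).
# All numbers are hypothetical.
early_voters = [
    {"clinton": 0.97, "trump": 0.01, "johnson": 0.01, "stein": 0.01},
    {"clinton": 0.20, "trump": 0.75, "johnson": 0.04, "stein": 0.01},
    {"clinton": 0.55, "trump": 0.40, "johnson": 0.03, "stein": 0.02},
]

def expected_split(voters):
    """Sum per-voter probabilities into expected vote totals per candidate."""
    totals = {}
    for voter in voters:
        for candidate, prob in voter.items():
            totals[candidate] = totals.get(candidate, 0.0) + prob
    return totals

print(expected_split(early_voters))
```

This is why per-voter probabilities allow a finer-grained early-vote estimate than sorting returned ballots by party registration alone: a registered independent can still contribute, say, 0.55 of an expected vote to one candidate.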
“That’s the easy part. If everyone voted before Election Day, the final outcome would be pretty easy to predict even without a fancy model. The challenge for VoteCastr and other prognosticators is to figure out which voters will make the trip to their local polling stations and which will stay home. That’s where the day-of tracking comes in. VoteCastr will have hundreds of field workers stationed at preselected precincts around the country. Those field workers will be reporting official turnout numbers as they’re provided to them by poll workers throughout the day. By selecting a representative mix of precincts, VoteCastr will extrapolate the turnout in similar precincts that aren’t being tracked, in the same way it used large-sample polling to draw probabilistic conclusions about how I was going to cast my vote without surveying me directly.
“Let’s assume there’s a particular precinct in Wisconsin in which pre–Election Day polling suggests voter preference for Clinton and Trump is split 50-50. If a field worker stationed there reports that 100 votes were cast in the first hour of voting, VoteCastr won’t simply assume that 50 of those votes were for Trump and 50 were for Clinton. The model will also factor in how likely it believes Clinton supporters in that precinct are to vote compared to Trump supporters. Let’s consider a simple hypothetical in which each of Trump’s likely voters in our Wisconsin precinct is more likely to vote than each of Clinton’s likely voters. If projected turnout is low, then we can assume the more-energized Trump supporters will vote in greater numbers than Clinton supporters. If turnout is high, then we can assume there will be more parity—that the high turnout is an indication that less-energized likely Clinton voters did show up to vote on Election Day.”
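One way to make the Wisconsin hypothetical concrete is an allocation rule in which observed turnout is split in proportion to each group's propensity-weighted size, capped at the group's size. This is a sketch of the behavior the passage describes, not VoteCastr's actual model; the group sizes and propensities are invented, and the rule assumes total turnout does not exceed the two groups combined.

```python
def split_votes(total, n_c, p_c, n_t, p_t):
    """Split `total` observed votes between Clinton supporters (n_c of them,
    each with turnout propensity p_c) and Trump supporters (n_t, p_t).
    Votes are allocated proportionally to propensity-weighted group size,
    capped at each group's size; assumes total <= n_c + n_t."""
    w_c, w_t = n_c * p_c, n_t * p_t
    v_t = min(n_t, total * w_t / (w_c + w_t))  # Trump share, capped
    v_c = min(n_c, total - v_t)                # Clinton gets the rest, capped
    v_t = total - v_c                          # return any overflow to Trump
    return v_c, v_t

# Hypothetical 50-50 precinct: 500 supporters each, but Trump supporters
# are more likely to vote (0.8 propensity vs. 0.5).
print(split_votes(130, 500, 0.5, 500, 0.8))   # low turnout: Trump ahead
print(split_votes(1000, 500, 0.5, 500, 0.8))  # full turnout: parity
```

This reproduces the dynamic in the quoted passage: at low turnout the more-energized Trump supporters dominate the votes cast, while high turnout signals that less-energized Clinton supporters also showed up, pushing the split back toward 50-50.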