In many ways, the 2025 Virginia election was the funhouse mirror reflection of 2017: President Trump is in the White House and has seen his approval rating plummet in under a year since taking office. Democrats were poised to gain seats in the House of Delegates, and overperformed conventional wisdom on just how many seats they’d gain. Over 7,000 people showed up to a rally for President Obama and the Democratic nominee for Governor.
And, of course, Democrats outperformed the nonpartisan aggregators’ polling average in the Commonwealth by at least 4 points. State Navigate’s bipartisan polling team saw this coming from a mile away.
The “Three Buckets” of 2025 Polls
There were three buckets of polls in Virginia and, in large part, New Jersey. Pollsters decided to weight their electorates to the previous gubernatorial election in 2021, to the 2024 presidential election, or to neither. That “neither” bucket came with a mixture of methodological decisions on how to estimate the partisanship of the likely electorate. For example, Emerson weighted its poll to both 2021 and 2024 in Virginia. Some Republican pollsters weighted their polls to a redder electorate than 2021 (and, of course, 2024), while State Navigate decided to weight to a bluer electorate than both 2021 and 2024.
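The core mechanic behind all of these buckets is the same: scale each respondent so that the sample’s party composition matches a chosen target electorate. A minimal sketch of that idea is below; every number in it is an illustrative assumption (only the D+8 target reflects a figure stated later in this article), not any pollster’s actual composition.

```python
# Minimal sketch of party-ID weighting: each respondent is scaled by
# (target share of their party) / (raw sample share of their party).
# All compositions below are invented for illustration.

raw_share = {"D": 0.33, "R": 0.36, "I": 0.31}  # hypothetical unweighted sample

targets = {
    "2021-style":    {"D": 0.34, "R": 0.34, "I": 0.32},  # hypothetical
    "2024-style":    {"D": 0.33, "R": 0.36, "I": 0.31},  # hypothetical
    "expected-2025": {"D": 0.38, "R": 0.30, "I": 0.32},  # a D+8 electorate
}

def party_weights(target: dict, raw: dict) -> dict:
    """Per-party weight that maps the raw composition onto the target."""
    return {party: target[party] / raw[party] for party in raw}

for name, tgt in targets.items():
    w = party_weights(tgt, raw_share)
    print(name, {p: round(x, 2) for p, x in w.items()})
```

The choice of target is the whole ballgame: the same raw interviews produce different toplines depending on which electorate the weights are pointed at.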
We categorized the party weighting scheme of each poll conducted from September through Election Day into one of the following groups: weighting to the 2021 or 2024 electorate (both much redder than Virginia has historically seen in a gubernatorial election when a Republican is president), weighting to a predetermined party identification, weighting to an expected 2025 electorate, or no clear explanation of any kind of weighting by partisanship.
Table I. Poll Types & Governor Margins

Pollsters that weighted to the expected composition of the 2025 electorate clearly outperformed the others. Note that pollsters in this category used a variety of factors, including historical outcomes, voters’ self-reported likelihood of voting, and voter file data, to compose the expected electorate. Pollsters that were unclear about their methodology and party weighting scheme had the worst showing, including Republican firms like Pulse Decision Science, On Message Inc., Trafalgar Group, and InsiderAdvantage (other polls in this category include internal polls by both the Republican and Democratic Attorneys General Associations, as well as polls from Emerson, Quantus Insights, and Research Co.).
Party Identification Choices
For pollsters that appear to have weighted to a predetermined party identification, it is often unclear how that partisanship was chosen. In at least two cases, pollsters indicated that they were relying on party registration for their party identification targets, which is impossible because Virginia does not have partisan voter registration. In other cases, pollsters report a partisan makeup that does not seem to be explained by historical data. For example, one October poll by co/efficient indicated that it weighted by self-reported party identification, and that the resulting makeup was 36% Republican, 37% Democratic, and 27% Independent, an electorate redder than both 2021 and 2024. It is unclear how these numbers were chosen.
State Navigate’s writeup of our last poll, released a few days before the election, spent a lot of time on why we were letting the partisanship float further toward the Democrats. As we analyzed each poll during September and October, the classic southern phrase “That dog don’t hunt” frequently came to mind. This comes from a simple understanding of Virginia history: in every gubernatorial election, the party out of the White House has had at least a slight turnout advantage. The question was never whether the electorate in Virginia would be bluer than in 2024 or 2021, but by just how much. Yet polls more often than not weighted to electorates from years with a very different thermostatic environment, namely 2021 and 2024. We felt bolstered by a large shift in party identification among self-identified respondents in Gallup’s third-quarter polling. Using the factors discussed above, the team decided to settle at D+8 for the electorate.
But other pollsters’ final polls tended to cluster around a partisanship closer to 2021 or 2024. Only one other pollster, Quantus Insights, decided to match a D+8 electorate. They were close to the final partisan composition of the electorate, but their party loyalty numbers meant that it did not translate into accuracy. For example, 76% of Republicans in their poll said they planned on voting for Winsome Sears, 87% of Democrats said they planned on voting for Abigail Spanberger, and 34% of Independents said they planned on voting for Spanberger. In State Navigate’s poll, those figures were 94% of Republicans for Sears, 97% of Democrats for Spanberger, and 51% of Independents for Spanberger. Simply put, while Quantus Insights was close to the actual partisan composition of the electorate, it was the least accurate pollster in determining how partisans and independents would actually vote.
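The arithmetic behind that comparison is worth making explicit: a topline margin is just composition times loyalty, summed across groups. The sketch below works it through under two assumptions of ours, not the pollsters’: a 39/31/30 D/R/I split (the article only states D+8, not the exact split), and a two-way simplification that allocates each group’s remainder to the other major candidate.

```python
# Worked example: how the same D+8 composition yields very different margins
# under Quantus's and State Navigate's party loyalty numbers. The 39/31/30
# split and the two-way allocation are simplifying assumptions.

composition = {"D": 0.39, "R": 0.31, "I": 0.30}  # assumed D+8 split

# Share of each group voting for Spanberger, per the polls cited above
# (remainder assumed to go to Sears).
quantus        = {"D": 0.87, "R": 1 - 0.76, "I": 0.34}
state_navigate = {"D": 0.97, "R": 1 - 0.94, "I": 0.51}

def margin(comp: dict, spanberger_share: dict) -> float:
    """Spanberger-minus-Sears margin in points, under the two-way simplification."""
    span = sum(comp[g] * spanberger_share[g] for g in comp)
    return 100 * (2 * span - 1)

print(round(margin(composition, quantus), 1))          # roughly 3 points
print(round(margin(composition, state_navigate), 1))   # roughly 10 points
```

Under these assumptions, the loyalty numbers alone move the implied margin by roughly seven points, even with the partisan composition held fixed.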
After the 2025 Virginia exit poll was reweighted (the initial version was off because it weighted to 2024 vote choice), this pattern is easy to see on the map.

Demographic Choices
One potential reason State Navigate was more accurate than the rest of the pack this year is our methodology for both reaching and weighting demographics. The situation parallels the party ID weighting choices above: just as some pollsters let their party ID “float” with the raw sample they collected, some may have done the same with demographics like race and age. Using Catalist, AP VoteCast, VoteHub, voter file, and voter registration data, State Navigate determined in its final poll that the likely electorate would be slightly younger and more racially diverse than the electorate in 2021.
When conducting its polls, State Navigate drew a purely random sample of respondents on its first day in the field. The raw sample was, as a result, overwhelmingly white, educated, old, male, and wealthy, which comes as no surprise given that these subgroups are much more likely to respond to surveys. In the remaining fielding days, the targeted samples therefore skewed more nonwhite, less college-educated, younger, and lower-income. We concurrently conducted a field experiment that helped us reach younger voters, who were underrepresented in most polls this year. By aggressively targeting hard-to-reach groups in the final days of our poll, we ended up with a raw sample closer to our targeted weights. We were not perfect in reaching every difficult demographic, though, especially Asian voters. In the future, State Navigate is interested in experimenting with contact methods that may make this demographic easier to reach (one idea is using WhatsApp, but this is to be determined).
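Once a sample is collected, aligning it with demographic targets across several dimensions at once is commonly done with raking (iterative proportional fitting). The sketch below is a generic illustration of that standard technique, not State Navigate’s actual weighting code; the respondents and target margins are invented.

```python
# Hypothetical raking sketch: repeatedly scale weights so each demographic
# margin matches its target, cycling through dimensions until they settle.
# Respondents and targets are invented for illustration.

respondents = [
    {"age": "18-44", "race": "white"},
    {"age": "45+",   "race": "white"},
    {"age": "45+",   "race": "nonwhite"},
    {"age": "18-44", "race": "nonwhite"},
    {"age": "45+",   "race": "white"},
]
targets = {"age":  {"18-44": 0.45, "45+": 0.55},
           "race": {"white": 0.67, "nonwhite": 0.33}}

weights = [1.0] * len(respondents)
for _ in range(100):  # iterate until the margins converge
    for dim, margins in targets.items():
        for category, target in margins.items():
            members = [i for i, r in enumerate(respondents) if r[dim] == category]
            current = sum(weights[i] for i in members) / sum(weights)
            for i in members:
                weights[i] *= target / current  # nudge this category toward target

def weighted_share(dim: str, category: str) -> float:
    """Weighted proportion of the sample falling in the given category."""
    members = [i for i, r in enumerate(respondents) if r[dim] == category]
    return sum(weights[i] for i in members) / sum(weights)
```

The practical lesson in the paragraph above still applies: the closer the raw sample already is to the targets, the smaller these weights get, and the less variance the weighting introduces.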
Aside from the issues with some polls’ raw samples, weighting decisions also affected pollsters’ accuracy. Take Trafalgar, the second-worst poll for Virginia in 2025. Their weighted Black sample was 15%, which hasn’t been that low in a Virginia gubernatorial election since 2009; in both 2021 and 2017, the Black share of the electorate stood at 17%. While their white sample was 71% (3% less than 2021), their bias was buoyed by boosting the “Other” racial category, which in our surveys skews far more Republican than this racially ambiguous group actually votes in general elections; this is likely because Republican- and Independent-leaning respondents choose whichever demographic option they judge best protects their privacy. This group makes up about 1% of the electorate in every Virginia election, yet Trafalgar put it at 5% (and Cygnal at 4%), in all likelihood to produce a more favorable result for Republicans.
Table II. Pollsters’ Racial Demos & Governor Performance

Accuracy by Pollster
The State Navigate statewide polls in Virginia proved to be the most accurate polls of the year in the Commonwealth among those that asked voters about every statewide race. In the following chart, average error for each race is calculated as the average absolute difference between each candidate’s predicted vote share in the pollster’s final Virginia poll and that candidate’s actual vote share in the election.
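The metric just described can be computed in a few lines. The poll and result figures below are invented purely to show the arithmetic; they are not taken from the table that follows.

```python
# Sketch of the error metric described above: mean absolute difference
# between polled and actual vote shares across candidates, in points.
# The example numbers are hypothetical, not any pollster's actual figures.

def avg_error(predicted: dict, actual: dict) -> float:
    """Mean absolute error across candidates, in percentage points."""
    return sum(abs(predicted[c] - actual[c]) for c in actual) / len(actual)

poll   = {"Spanberger": 53.0, "Sears": 43.0}  # hypothetical final poll
result = {"Spanberger": 57.0, "Sears": 42.0}  # hypothetical outcome

print(avg_error(poll, result))  # (4.0 + 1.0) / 2 = 2.5
```

Averaging per-candidate error rather than just the margin penalizes a poll that got the gap right but misallocated share to undecideds or third parties.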
Table III. Accuracy of “all statewides” Pollsters

Conclusion
Our goal with this article is not to brag or boast, but to offer insight into why State Navigate was the most accurate pollster this year. State Navigate is not interested in making a profit off of our polling; we field our polls at the bare minimum cost. It cost $3,000 to conduct our Virginia statewide polls and $900-$1,500 to conduct our Virginia district-level polls. We conduct polling because we believe that this form of data on state governance should be a right, not a privilege.
Our forecasting efforts also rely, in part, on polling. What’s good for the goose is good for the gander, and we are not interested in keeping the reasons for our successes a secret. Polling is also the second-best method of determining what the public wants from its government, the first of course being actual elections. Polling is a fundamental part of democracy in the 21st century: it is a scientific miracle that public opinion can be estimated in real time, informing those in power of what the people want ahead of elections. As such, it is imperative that polling be accurate; otherwise, representatives in government will be less responsive to immediate public opinion. To ensure a government of the people, by the people, and for the people, everyone should hope for, and help, polling to be the most accurate it can be.
By offering this analysis, it is our hope that pollsters can learn from their losses and our success, improve their methodology going forward, and thus strengthen American democracy. Eventually, the tables may turn, and it will be State Navigate’s time to learn from its own losses. We’ll make good on that promise should the time come. We are, of course, ecstatic with our success this year, but we always focus on our losses rather than our wins (e.g., failing to field a healthy sample of Asian voters). To our mind, this ought to be the case for every pollster and forecaster, regardless of the year.

