“If you are guided by opinion polls, you are not practicing leadership, you are practicing followership.” Margaret Thatcher
Margaret Thatcher was first called “The Iron Lady” in a Soviet newspaper; the label was meant to be derogatory, but she and her supporters embraced it as a compliment to her principled and uncompromising style. She became Great Britain’s first female prime minister after first securing the leadership of her party, in both cases contrary to what the polls had forecast. The same happened in Thatcher’s re-election bids, which she sometimes won in a landslide. She went on to become the longest-serving British prime minister of the twentieth century.
Polls are curious things: they attempt to forecast the most unpredictable thing on earth, human behavior. Most pollsters use similar methods for surveying public opinion; the most common are sampling by zip code, computer-generated lists of phone numbers, or email addresses for online polling. The problem that arises is twofold: population density on one hand, and the composition of the questions on the other. There is also a third issue that is often ignored: many Americans, especially senior citizens, either disdain technology or have no access to it, and are effectively cut off from polling; maybe they’re the lucky ones.
For example, Pew Research Center samples by zip code, randomly taking an address from each to harvest a pool of about 10,000 potential respondents; the problem is that there are over 41,000 zip codes in the US, and they are distributed by population density rather than evenly by geography. Consequently, far more zip codes fall in urban areas that are predominantly Democratic than in rural areas that are predominantly Republican, and this skews the sample even though the addresses are chosen randomly.
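To make the mechanism concrete, here is a minimal sketch, using made-up region figures rather than real Census or Pew data, of why drawing one address per zip code produces a sample whose shares track zip-code counts rather than population. Whenever a region's share of zip codes differs from its share of population, its share of the sample will differ from its share of voters.

```python
# Illustrative sketch only: region names, zip counts, and populations are
# hypothetical, not real data. Drawing one address per zip code means each
# region's share of the sample equals its share of zip codes, which can
# diverge from its share of the population.

regions = {
    # region: (hypothetical zip-code count, hypothetical population)
    "region_a": (100, 1_000_000),
    "region_b": (50, 200_000),
}

total_zips = sum(zips for zips, _ in regions.values())
total_pop = sum(pop for _, pop in regions.values())

for name, (zips, pop) in regions.items():
    sample_share = zips / total_zips   # one respondent drawn per zip code
    pop_share = pop / total_pop        # actual share of residents
    print(f"{name}: sample share {sample_share:.1%} vs population share {pop_share:.1%}")
```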
The composition of polling questions can be influenced by whoever is in charge of writing them. Richard Wike is director of global attitudes research at Pew Research Center; previously, he was senior associate for international and corporate clients at GQR Research, a Democratic polling firm established by Stanley Greenberg in 1980. It is fair to be concerned that Pew’s survey questions may be structured to elicit answers compatible with the views of the pollster.
The Heritage Foundation is a conservative organization often in support of the RNC; it conducts polling through RMG Research, founded by Scott Rasmussen. While promoted as non-partisan, RMG is Republican-aligned. It uses some novel methods such as video and audio polling, but its respondent base is not clear, and it calls its work “Counter Polling” without clearly defining the term. For example, RMG conducted a poll on support for additional funding for Ukraine and found the vague result of waning interest in doing so; the questions were prefaced with statements that Ukraine is not a member of NATO and has no treaty with the US. While this is factually true, including it as a preface shades the question and hands the respondent a qualified context in which to answer.
The above does not mean all pollsters are inherently biased, but it does call into question the value and accuracy of any poll, depending on who the pollsters and respondents are and how the questions are composed. Many pollsters mail, email, or text surveys whose questions are answered as multiple choice, single choice, or with options like yes-or-no or true-or-false; this is efficient, but responses in this format depend as much on the composition of the questions as on the viewpoints of the respondents. This becomes especially relevant with an American electorate evenly split between Democrats and Republicans at about 30% each, with the largest segment being independents at about 40%.
According to most pollsters themselves, polls have historically been about 60% accurate. That should not be surprising when we consider that polls reflect the makeup of their polling base more than they predict how voters will behave; random sampling by computer does not by itself assure accurate results, especially with political demographics so geographically skewed, a picture that is becoming even more fluid as Americans migrate within the country.
Curiously, most polls do not focus on third-party candidates. The third largest national political party is the Libertarian Party; both then-incumbent President Biden and former President Trump made overtures to the party for support, but few pollsters ever bothered to find out where Libertarians stand with either candidate. Most pollsters have made the mistake in the past, and apparently again now, of assuming Libertarians would vote for the party’s candidate, Chase Oliver; this ignores the fact that many Libertarians vote for Democratic and Republican candidates. The Libertarian presidential candidate gets only about 3% of the vote on average, while Libertarians overall represent about 19% of the total electorate; the same crossover occurs among other third parties, but most polls do not account for it.
Another aspect that often skews poll results relative to outcomes is the effect of principles versus affiliations on a respondent’s answers. People who evaluate a candidate by their own principles, when they bother to respond to pollsters at all, may give answers that the question’s wording reads as a statement of party affiliation when that is not their intent. Given how large the independent share of the electorate is, this adds another layer of complexity to polling that cannot be resolved mathematically.
In economics it is human action that accounts for outcomes; the same is true in all human activities, and the larger the society, the more unpredictable the outcome. Sampling any society in an effort to accurately predict an outcome like an election is extremely difficult; in that context, a 60% success rate is not all that bad. In baseball, the highest batting average recorded to date is .466, a statistic based on outcomes, not forecasts; relying on polls to know who might win an election is relying on the opinions of others, fair for marketing a candidate, marginally successful at predicting results.
In the 2024 presidential election, the polls rate the race as a dead heat with an estimated margin of error of around 3%, which may be very generous if not wishful thinking; realistically, it should be more like 5% given the biases and variables in polling. Both major candidates carry a lot of negative baggage, which may also contribute to the complexities in current polling. Chase Oliver, mentioned above, humorously summed up his advantages as a candidate prior to Biden’s withdrawal: “I’m under the age of 80, I speak in complete sentences, I’m not a convicted felon; it’s a very low bar, but I’ve managed to clear that.”
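For context, the roughly 3% figure is the conventional margin of error from sampling variation alone; the sketch below computes it for an assumed sample of 1,000 respondents and an even split (neither number is drawn from any particular poll). It captures only random sampling error, not the question-wording and respondent-base biases discussed above, which is why a stated margin can understate the real uncertainty.

```python
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error from random sampling variation alone."""
    return z * sqrt(p * (1 - p) / n)

# Assumed values for illustration: 1,000 respondents and a 50/50 split,
# the split that maximizes sampling variance.
print(f"{margin_of_error(0.5, 1_000):.1%}")  # about 3.1%
```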
This is not just an issue for polls in the US. In Argentina last December, now-President Javier Milei was tied in the polls but wound up receiving the highest vote count in Argentina’s history. Then consider the Israeli polls, where Benjamin Netanyahu often trails, though as he repeatedly says, “I always lose the election in the polls, and I always win it on election day.” Americans would be better off ignoring polls, really listening to what a candidate says, or fails to say, and thinking about what value they would bring to their lives so they can better judge whom to vote for. Warren Buffett became one of the most successful investors of our time by making value the ultimate goal, and by knowing that “A public opinion poll is no substitute for thought.”