1) Poll Results Are Snapshots, Not Predictions
Public opinion is seldom fixed. Views change. Good polls tell us what people think at the time they were interviewed. Someone reviewing polling results might make a prediction based on the data, but that is a personal judgement, not a poll finding. Suppose you know that a runner is leading by ten metres in a 1500m race, with one lap to go. You might predict that the runner is likely to win – and you would make your prediction with more confidence knowing the state of the race at that point. But you cannot be certain of the outcome. If you want to know the future, don’t commission a poll. Buy a crystal ball.
2) Good Polls Are Seldom Exactly Right, But Seldom Badly Wrong
Polls obtain responses from a small fraction of the population – typically 1,000-2,000 people out of a population of millions. Good polls seek to match their samples to the characteristics of the population as a whole – by age, gender, region, social class and so on. But statistical theory warns us that even the best survey is subject to a margin of error.
Suppose a coin is tossed 1,000 times. We would expect it to land heads roughly 500 times, and tails roughly 500 times – but it would be a fluke if it landed EXACTLY 500 times each way. Likewise with polls. If 50% of the whole population hold a certain view, a well-conducted poll of around 1,000 people should produce a finding between 47% and 53% – but the laws of probability tell us that, roughly one time in 20, even a “good” poll will produce a result outside that range. It is, however, vanishingly unlikely to be, say, ten points adrift of the truth.
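For readers who like to see where such figures come from, here is a rough simulation – a minimal sketch in Python, using illustrative numbers rather than any real polling data – of repeated polls of 1,000 people drawn from a population in which exactly 50% hold a given view.

```python
# A minimal sketch (illustrative figures, not real polling data): simulate many
# hypothetical polls of 1,000 respondents from a population in which exactly
# 50% hold a view, and count how often each poll lands within 3 points of the truth.
import random

TRUE_SUPPORT = 0.50   # assumed share of the population holding the view
SAMPLE_SIZE = 1_000   # respondents per poll
NUM_POLLS = 10_000    # number of simulated polls

within_three_points = 0
for _ in range(NUM_POLLS):
    # Each respondent independently holds the view with probability TRUE_SUPPORT.
    supporters = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    estimate = supporters / SAMPLE_SIZE
    if abs(estimate - TRUE_SUPPORT) <= 0.03:
        within_three_points += 1

print(f"Polls within 3 points of the truth: {within_three_points / NUM_POLLS:.1%}")
```

Run it a few times and the share of simulated polls landing within three points of the truth hovers around 95% – the familiar “plus or minus three points, 19 times out of 20” caveat.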
3) A Small, Representative Sample Is Always Better Than A Big, Unrepresentative Sample
Newspapers and television programmes sometimes invite their readers or viewers to text, phone or email their views. They then say something like: “We now have the verdict of more than 100,000 people – far more than any opinion poll”, and claim that the sheer size of the exercise makes it better than a poll of just 1,000 people.
Nonsense. Here’s a cautionary tale. In the 1936 US elections, a magazine, Literary Digest, elicited the voting intentions of more than two million Americans and said that President Roosevelt would be buried after just one term as President, and that his rival, Alf Landon, would win by a landslide. Never heard of Mr Landon? That’s because he lost badly. Gallup Polls conducted a far smaller, but properly representative, survey – and rightly showed Roosevelt well ahead.
An unrepresentative sample is an unreliable sample – simply making it bigger makes no difference.
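The point can be illustrated with a toy simulation – a minimal sketch with made-up figures, not the Literary Digest data. Suppose 55% of the whole electorate backs candidate A, but the straw poll reaches only a subgroup in which just 40% do.

```python
# A minimal sketch (assumed figures): a huge but unrepresentative sample
# versus a small representative one.
import random

TRUE_SUPPORT = 0.55     # assumed support for A in the full population
BIASED_SUPPORT = 0.40   # assumed support for A among those the straw poll reaches

def simulate_poll(sample_size: int, support: float) -> float:
    """Return the share backing A in one simulated poll."""
    return sum(random.random() < support for _ in range(sample_size)) / sample_size

straw_poll = simulate_poll(2_000_000, BIASED_SUPPORT)   # huge, unrepresentative
proper_poll = simulate_poll(1_000, TRUE_SUPPORT)        # small, representative

print(f"Straw poll of 2,000,000 (biased sample):  {straw_poll:.1%} back A")
print(f"Poll of 1,000 (representative sample):    {proper_poll:.1%} back A")
# The straw poll misses by around 15 points despite its size; the small poll
# lands within a couple of points of the true 55%.
```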
4) The Details Matter – Dates, Question-wording, Client
So, you see some polling information. You note that it comes from a reputable company and is therefore likely to have been properly conducted. But does it mean what it seems to mean? Here are some tests to apply.
When was it conducted? Polls often look at current controversies, at a time when public opinion might be volatile. If the fieldwork for the poll was, say, two weeks ago, it might be a less accurate guide to current public opinion than one conducted two days ago.
Where does the report of the poll findings appear? If it’s a media report, it might tell only part of the story. If it’s put out by a campaigning organisation, it might select only those findings that suit its case. It might also – wittingly or unwittingly – abbreviate the questions or the results in a way that ends up being misleading. To be certain about what questions were asked and what results were obtained, the best advice is to go to the polling company’s own website.
Who commissioned the survey? Polling clients often have their own agenda – to promote a cause, a party, a candidate, a product or a point of view. This fact does not necessarily invalidate research they commission. Reputable polling companies make sure that the questions they ask are fair and balanced. But when clients have an agenda, it’s especially important to look under the bonnet and check exactly what questions were asked and what the full results show.
5) Apples And Pears Must Be Compared With Care
Here is a fictional example of a real problem. A poll asks people whether they would prefer to spend a sunny summer day in a city or at the seaside. By 60% to 40%, people say they prefer the seaside. A year later, another polling company asks people where they would prefer to spend a sunny summer day: in a city, at the seaside or in the countryside. 45% say the seaside, 30% say the countryside and 25% say a city. The next day, a report appears saying that the seaside has slumped in popularity, with the proportion who would like to spend a sunny summer day there down from 60% to 45%.
It’s obviously nonsense, as the first poll offered only two options while the second offered three. It’s just one example of why it is unwise to compare the findings from different polling companies asking different questions. That example is particularly egregious; sometimes the differences are more subtle – for example, telephone surveys often find different numbers of “don’t knows” than online surveys; as a result the numbers for each of the answer options are liable to be different.
The only safe way to be sure of movements in public opinion is to compare surveys through time by the same polling company, using the same interviewing method (online, phone or face-to-face) and asking the same questions each time. And even then, small differences (say by two or three percentage points) may reflect sampling fluctuations rather than real change.
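How easily can pure sampling noise mimic “movement”? Here is a minimal sketch (assumed figures again): two identically conducted polls of 1,000 people each, drawn from a population whose opinion has not shifted at all, compared thousands of times.

```python
# A minimal sketch (assumed figures): how often two identical polls of the same,
# unchanged population differ by two or more percentage points purely by chance.
import random

TRUE_SUPPORT = 0.50
SAMPLE_SIZE = 1_000
NUM_PAIRS = 2_000

def one_poll() -> float:
    """Share holding the view in one simulated poll of SAMPLE_SIZE people."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE)) / SAMPLE_SIZE

big_gaps = sum(abs(one_poll() - one_poll()) >= 0.02 for _ in range(NUM_PAIRS))
print(f"Pairs differing by 2 points or more: {big_gaps / NUM_PAIRS:.0%}")
# Typically somewhere between a third and 40% of pairs, even though
# opinion has not moved at all.
```

That is why a headline built on a two-point “shift” between consecutive polls deserves scepticism: the apparent change is often no larger than the noise.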
Peter Kellner is a journalist, political commentator and President of the YouGov opinion polling organisation in the United Kingdom.