How not to report opinion polls

Tuesday 3 July 2012, 15:11

Anthony Wells is associate director of YouGov’s social and political polling and runs the independent UKPollingReport blog. Twitter: @anthonyjwells

1) Don’t report voodoo polls

For a poll to be useful, it needs to be representative.

A thousand people represent only themselves. We can only assume their views represent the whole of Britain if the poll is sampled and weighted in a way that reflects the whole of Britain (or whatever other country you are polling).

At a crude level, the poll needs to have the right proportions of people in terms of gender, age, social class, region, and so on.

Legitimate polls are conducted in two main ways: random sampling and quota sampling (where the pollster designs a sample and then recruits respondents to fill it, getting the correct number of Northern working-class women, Midlands pensioners and so on). In practice, true random sampling is impossible, so most pollsters use a mixture of the two methods.
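For the technically minded, here is a minimal sketch in Python of the cell-weighting idea that underpins this. The population shares and sample counts are made-up figures for illustration; real pollsters weight on several variables at once, often with more elaborate ‘rim weighting’, so treat this as a toy model rather than anyone’s actual method:

```python
# Minimal sketch of cell weighting (illustrative only): the population
# shares below are made-up figures, not real census numbers.

population_share = {"men": 0.49, "women": 0.51}   # target proportions
sample_counts = {"men": 550, "women": 450}        # raw respondents by group

total = sum(sample_counts.values())

# Each respondent's weight scales their group to its target share,
# so over-represented groups count for less and vice versa.
weights = {
    group: population_share[group] / (count / total)
    for group, count in sample_counts.items()
}

print(weights)  # men ~0.89 (weighted down), women ~1.13 (weighted up)
```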

Open-access polls (pejoratively called ‘voodoo polls’) are sometimes mistakenly reported as proper polls. These are the sort of instant polls displayed on newspaper websites or through pushing the red button on digital TV where anyone who wishes to can take part.

There are no sampling or weighting controls, so a voodoo poll may, for example, have a sample that is far too affluent or educated or interested in politics. If the poll was conducted on a campaign website or a website that appeals to people of a particular viewpoint, it will be skewed attitudinally, too.

More importantly, there are no controls on who takes part, so people with strong views on the issue are more likely to participate, and partisan campaigns or supporters on Twitter can deliberately direct people towards the poll to skew the results.

Polls that do not sample or weight to get a proper sample, or that are open-access and allow anyone to take part, should never be reported as representing public opinion.

Few people would mistake ‘instant polls’ on newspaper websites for properly conducted polls. But there are many instances of open-access surveys on specialist websites or publications (e.g. Mumsnet, PinkNews etc) being reported as if they were properly representative polls of mothers, LGBT (lesbian, gay, bisexual, transgender) people etc, rather than non-representative open-access polls.

Case studies

The Observer reporting an open-access poll from the website of a campaign against the Government’s NHS reforms as if it were representative of the views of members of the Royal College of Physicians; the Express miraculously finding that 99% of people who bothered to ring up an Express voting line wanted the UK to leave the European Union; the Independent reporting an open-access poll of Netmums in 2010.

2) Remember the margin of error

Most polling companies quote a margin of error of around plus or minus three points. Technically, this is based on a pure random sample of 1,000 and doesn’t account for other factors such as sample design and the degree of weighting - but it is generally a good rule of thumb. What it means is that 19 times out of 20 the figure in a poll will be within three percentage points of what the ‘true’ figure would be if you’d surveyed the entire population.
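That rule of thumb comes straight from the standard formula for the sampling error of a proportion. A minimal sketch, assuming a pure random sample (which, as the caveat above notes, real polls are not):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a pure random sample of n.
    p = 0.5 is the worst case and gives the familiar rule of thumb."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1000):.1%}")  # ~3.1% - the 'plus or minus three points'
```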

What it means when reporting polls is that a change of a few percentage points doesn’t necessarily mean anything – it could very well be just down to normal sample variation within the margin of error. A poll showing Labour up two points, or the Conservatives down two points, does not by itself indicate any change in public opinion.

Unless there has been some sort of seismic political event, the vast majority of voting-intention polls do not show changes outside the margin of error. This means that, taken alone, they are singularly un-newsworthy.

The correct way to interpret voting-intention polls is, therefore, to look at the broad range of ALL the opinion polls and ask whether there are consistent trends. Another way is to take averages over time to even out the volatility.

One poll showing the Conservatives up two points is meaningless. If four or five polls are all showing the Conservatives up two points, then it is likely that there is a genuine increase in their support.
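To illustrate the averaging approach, a toy sketch with entirely made-up Conservative shares:

```python
# Sketch: smooth a series of (made-up) Conservative vote shares with a
# simple rolling average rather than reacting to any single poll.

polls = [36, 38, 35, 37, 39, 38, 40, 39]  # hypothetical successive polls

def rolling_average(values, window=4):
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

print(rolling_average(polls))  # [36.5, 37.25, 37.25, 38.5, 39.0] - a gentle trend
```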

Case study

There are almost too many to mention, but I will pick out the Guardian’s reporting of its January 2012 ICM poll, which described the Conservatives as “soaring” in the polls after a three-point rise. Newspapers do this all the time, of course, and Tom Clark normally does a good job writing up ICM polls… I’m afraid I’m picking this one out because of the hubris the Guardian displayed in its editorial the same day: “This is not a rogue result. Rogue polls are very rare. Most polls currently put the Tories ahead. A weekend YouGov poll produced a very similar result to today’s ICM, with another five-point Tory lead. So the polls are broadly right. And today’s poll is right. Better get used to it.”

It was sound advice for the Guardian not to hand-wave away polls that were bringing news it wouldn’t welcome, but unfortunately in this case the poll was probably an outlier!

Those two polls showing a five-point lead were the only ones in the whole of January to show such big Tory leads; the rest of the month’s polls showed the parties basically neck and neck – as did ICM’s December poll before, and its February poll afterwards.

Naturally, the Guardian didn’t write up the February poll as ‘reversion to the mean after a wacky sample last month’, but as ‘Conservative support shrinks as voters turn against NHS bill’.

The bigger picture was that party support was pretty much steady throughout January 2012 and February 2012, with a slight drift away from the Tories as the European veto effect faded. The rollercoaster ride of public opinion that the Guardian’s reporting of ICM implied never happened.

3) Beware crossbreaks and small sample sizes

A crossbreak is an analysis of part of a poll’s result (just the answers from women respondents, for instance). It is necessarily based on a smaller sample than the whole poll, and therefore needs to be interpreted with extra caution because smaller sample sizes have bigger margins of error.

So a poll of 1,000 people in the UK as a whole might have fewer than 100 people aged under 25 or living in Scotland. A crossbreak made up of only 100 people has a margin of error of plus or minus 10 percentage points. Crossbreaks of fewer than 100 people should be used with extreme caution; fewer than 50 and they should be ignored.
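Plugging smaller samples into the same margin-of-error formula as before shows how fast the uncertainty grows (again assuming pure random sampling):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)  # 95% MOE, worst case p=0.5

for n in (1000, 400, 100, 50):
    print(f"n={n:4d}: +/- {margin_of_error(n):.1%}")
# n=1000: +/- 3.1%
# n= 400: +/- 4.9%
# n= 100: +/- 9.8%
# n=  50: +/- 13.9%
```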

An additional factor is that polls are weighted so that they are representative overall. It does not necessarily follow that crossbreaks will be internally representative. For example, a poll could have the correct number of Labour supporters overall but may have too many in London and too few in Scotland.

Be particularly cautious about national polls that claim to say something about the views of ethnic or religious minorities. In a standard UK poll, the number of ethnic-minority respondents is too small to provide any meaningful findings.

It is possible the pollster has deliberately over-sampled these groups to get meaningful findings, but there have been several instances of news articles being based on the extremely small religious or ethnic sub-samples in normal polls.

Extreme caution should be applied to crossbreaks on voting intention. With voting intention, small differences of a few percentage points take on great significance, so figures based on small samples that are not internally weighted are virtually useless. Voting-intention crossbreaks may reveal interesting trends over time, but in a single poll they are best ignored.

Case study

Again, this is a common failing, but the most extreme examples are reports taking figures for religious minorities. Take, for example, this report of an ICM poll for the BBC in 2005. The report said that Jews are the least likely to attend religious services, and that 31% of Jews said they knew nothing about their faith. These figures were based on a sample of FIVE Jewish respondents.

Here is the Telegraph making a similar error in 2009, claiming that “79 per cent of Muslims say Christianity should have a strong role in Britain”, based on a sub-sample of just 21 Muslims.

4) Don’t cherry-pick

In my past post on ‘Too frequently asked questions’, one of the common misconceptions I cite about polls is that pollsters only give the answers that clients want. This is generally not the case: published polling is only a tiny minority of what polling companies produce - the shop window, as it were - and major clients that actually pay the bills want accuracy, not sycophancy.

A much greater problem is readers seeing only the answers they want in the results, and the media reporting only the answers they want (more a problem with the pick-up of polls from other media sources; papers which actually commission a poll will normally report it in full).

Political polls are a wonderful tool. Interpreted properly, they allow you to peep into what the electorate sees and thinks, and what drives its voting intention. As a pollster, it’s depressing to see the media chucking out and dismissing anything that undermines their prejudices, while trumpeting and waving anything they agree with. It sometimes feels like you’ve invented the iPad and people insist on using it as a doorstop.

You should always look at poll findings in the round. Public opinion is complicated and contradictory. For example, people don’t think prison is very effective at reforming criminals but they also tend to be strongly opposed to replacing prison sentences with alternative punishments. Similarly, people tend to support tax cuts but also oppose the spending cuts they would require.

Taking a single poll finding out of context is bad practice; highlighting poll findings just because they bolster your argument is downright misleading.

Case study

Almost all of the internet! For a good example of highly selective and partial reporting of opinion polls on a subject in the mainstream press, though, take the Telegraph’s coverage of polling on gay marriage. As we have looked at before, most polling shows the public generally positive towards gay marriage if actually asked about it. Polls by ICM, Populus, YouGov and (last year) ComRes have all found pretty positive opinions. The exception to this is ComRes polling for organisations opposed to gay marriage, which asked a question about “redefining marriage” that didn’t actually mention gay marriage at all. This has been presented by the campaign against gay marriage as showing that 70% of people are opposed to it.

Leaving aside the merits of the particular questions, the Telegraph stable has dutifully reported all the polling commissioned by organisations campaigning against gay marriage – here, here, here and here. As far as I can tell, it has never mentioned any of the polling from Populus or YouGov showing support for gay marriage.

The ICM polling was actually commissioned by the Sunday Telegraph, so the Telegraph could hardly avoid mentioning it. But its report heavily downplayed the finding that people supported gay marriage by 45% to 36% (or, as the Telegraph put it, “opinion was finely balanced” - which stretched the definition of balanced somewhat). Instead, it ran heavily on a question about whether gay marriage should be a priority.

5) Don’t make the outlier the story

If 19 times out of 20 a poll is within three points of the ‘true’ picture, that means one time out of 20 it isn’t – this is what we call a ‘rogue poll’.

This is not a criticism of the pollster: it is an inevitable and unavoidable part of polling. Chance will sometimes produce a wacky result.

This goes double for crossbreaks, which have a large margin of error to begin with. In their headline figures, one in 20 polls will be off by more than three points; in a crossbreak of 100 people from those polls, one in 20 of those crossbreaks will be off by more than 10 points!

There are around 30 voting-intention polls conducted each month, and each will often have 15 to 20 crossbreaks. Inevitably, random sampling error will spit out some weird rogue results within all that data. These will appear eye-catching, astounding and newsworthy, but they almost certainly are not. They are just random statistical noise.
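A rough simulation makes the point – every parameter here is an assumption for illustration, not real polling data:

```python
import random

TRUE_SHARE = 0.40     # assume the 'true' figure never moves all month
POLLS = 30            # roughly a month of voting-intention polls
CROSSBREAKS = 15      # crossbreaks per poll
CROSSBREAK_N = 100    # respondents per crossbreak

random.seed(1)
outliers = 0
for _ in range(POLLS * CROSSBREAKS):
    hits = sum(random.random() < TRUE_SHARE for _ in range(CROSSBREAK_N))
    if abs(hits / CROSSBREAK_N - TRUE_SHARE) > 0.10:  # off by more than 10 points
        outliers += 1

print(f"{outliers} 'eye-catching' crossbreaks out of {POLLS * CROSSBREAKS}")
# Roughly 5% of 450 - around 20 apparent 'stories' a month from pure noise.
```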

Remember Twyman’s Law:

“Any piece of data or evidence that looks interesting or unusual is probably wrong.”

Case study

Here’s the Guardian in February 2012 claiming that the latest YouGov polling showed that the Conservatives had pulled off an amazing turnaround and won back the female vote, based on picking out one day’s polling that showed a six-point Tory lead amongst women.

Other YouGov polls that week showed Labour leading by three to five points amongst women, making that day’s data an obvious outlier.

See also PoliticalScrapbook’s strange obsession with cherry-picking poor Lib Dem scores in small crossbreaks.

6) Only compare apples with apples

All sorts of things can make a difference to the results a poll finds. Online and telephone polls will sometimes find different results due to things like interviewer effect (people may be more willing to admit socially embarrassing views to a computer screen than to an interviewer), the way a question is asked, the exact wording, or even the question order.

For this reason, if you are looking for change over time, you should only compare a question asked now to a question asked using the same methods and using the same wording. Otherwise, any apparent change could actually be down to wording or methodology.

You should never infer changes in voting intention by comparing one company’s polls with another’s. There are specific house effects from different companies’ methodologies which render such comparisons meaningless. For example, ICM normally shows the Lib Dems a couple of points higher than other companies, and YouGov normally shows them a point or so lower… so it would be wrong to compare a new ICM poll with a YouGov poll from the previous week and conclude that the Lib Dems had gained support.
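One way to see why such comparisons mislead: with made-up Lib Dem scores, each company’s ‘house effect’ can be sketched as its average gap from the all-pollster mean (a crude version of what poll aggregators model properly):

```python
from statistics import mean

polls = {  # hypothetical Lib Dem shares by pollster
    "ICM":    [12, 13, 12, 14],
    "YouGov": [9, 8, 9, 10],
}

overall = mean(s for series in polls.values() for s in series)
house_effect = {name: mean(series) - overall for name, series in polls.items()}

print(house_effect)  # {'ICM': 1.875, 'YouGov': -1.875}
# A raw ICM-vs-YouGov comparison would mistake these fixed gaps for real change.
```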

This post was originally published in UKPollingReport. It is reproduced here with the kind permission of the author.
