Reporting Statistics

Being Clear About Significance

Statistical Significance

When assessing data that suggests something has an effect, we have to decide whether the observed differences are ‘statistically significant’, meaning they are unlikely to have occurred by chance alone. For example [Sense About Science and Straight Statistics, Making Sense of Statistics, David Spiegelhalter, p11], statistical significance can help us judge whether the difference between a drug and a placebo is a real clinical effect. If the finding is statistically significant, we can be more confident that the difference is explained by something other than chance.
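As a rough illustration of what that calculation involves, the sketch below runs a standard two-proportion z-test on hypothetical trial numbers. The counts (30 of 100 improving on the drug, 18 of 100 on placebo) and the 5% threshold are assumptions for illustration, not figures from any real trial:

    import math

    def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
        """Test whether two observed proportions differ by more than chance."""
        p_a = successes_a / n_a
        p_b = successes_b / n_b
        # Pooled proportion under the null hypothesis of 'no real difference'
        pooled = (successes_a + successes_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal distribution
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical trial: 30 of 100 improve on the drug, 18 of 100 on placebo
    z, p = two_proportion_z_test(30, 100, 18, 100)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.047, just under 0.05

Here the p-value falls just below the conventional 5% threshold, so the difference would usually be called statistically significant; the same proportions from smaller samples would not be.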

Confidence Intervals / Margin of Error

Statisticians may express significance using ‘confidence intervals’ or ‘margins of error’. These tell you how well the sample results from an experiment, a survey or an opinion poll should represent what is actually happening. 

For example, an opinion poll may try to predict the result of a general election from a sample of the voting population. Pollsters carry out statistical calculations to try to ensure their findings genuinely represent voters’ intentions. No opinion poll can be called “right”: polls are predictions, so they can only suggest an outcome. Pollsters work out how close to the “right” figure their results should be by calculating a ‘confidence interval’, better known as a ‘margin of error’. For a typical 1,000-person poll, the margin of error is plus or minus 3%, so if the headline figure for a party’s support is 32%, the poll is providing evidence that support is between 29% and 35%. This holds 19 times out of 20: in 1 poll in 20, the true figure will lie outside the margin of error (and the margin cannot tell you which poll that is).
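The plus or minus 3% quoted above comes from the textbook formula for a 95% confidence interval around a sample proportion. A minimal sketch of that calculation follows; real pollsters also weight and adjust their samples, so this is the standard formula rather than any polling company’s exact method:

    import math

    def margin_of_error(p, n, z=1.96):
        """95% margin of error for a sample proportion p from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    n = 1000
    p = 0.32                     # headline support of 32%
    moe = margin_of_error(p, n)  # about 0.029
    # The conventional +/- 3% is the worst case, p = 0.5:
    # 1.96 * sqrt(0.25 / 1000) is roughly 0.031
    print(f"Support: {p:.0%} +/- {moe:.1%}")
    print(f"Range: {p - moe:.0%} to {p + moe:.0%}")  # roughly 29% to 35%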

Usually, the smaller the sample, the larger the margin of error and the less reliably the result represents the whole group. Changes which fall well within the margin may not indicate anything at all. For example [Why do we report unemployment every month? Anthony Reuben, BBC News: http://www.bbc.co.uk/news/business-34486717], we cannot be confident that unemployment actually fell over a three-month period when the size of the fall, 79,000, is within the margin of error of plus or minus 81,000. Conversely, a change that lies outside its margin of error is, in effect, statistically significant. A statistically insignificant change is of little practical use and may tell us nothing at all.
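The unemployment example reduces to a simple comparison: a change only counts as statistically significant if it is larger than its margin of error. A sketch of that check, using the figures cited in the BBC article above:

    def is_significant(change, margin_of_error):
        """A change within its margin of error could plausibly be zero."""
        return abs(change) > margin_of_error

    fall = 79_000    # reported fall in unemployment over three months
    margin = 81_000  # margin of error on that estimate
    print(is_significant(fall, margin))  # False: the 'fall' may not be real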

If a result falls within the margin of error, we must report the margin in graphics so that audiences can judge the significance of the poll or survey.

For more discussion about surveys, opinion polls, questionnaires, votes and straw polls see:

Editorial Guidelines Section 10: Politics, Public Policy and Polls: Opinion Polls, Surveys and Votes

Guidance:  Surveys, Opinion Polls, Questionnaires, Votes and Straw Polls 

Practical Significance

However, even if something is statistically significant, that does not mean it is important to society. We should also consider whether the statistics are practically significant to our audiences. For example, do short-term changes in the unemployment figures tell us how the labour market is changing, or do we need to look at longer-term trends?

We should give a balanced view, highlighting any caveats or doubts about significance and taking care not to overstate statistical significance. For example, a fall in the monthly rate of CPI inflation from 0% to minus 0.1% should not be reported as ‘Britain plunged back into deflation’.

However, it is just as important to be clear when there is no change in, say, unemployment, inflation or GDP growth.