League tables that don't include uncertainty do not make it easy to spot a good school, Michael Blastland says in his regular column.
One year a school brims with eager bright sparks; the next, an excess of strugglers and dossers.
The school goes up and down the league. But how good is it really? Ups and downs in pupil ability make underlying standards of teaching hard to see.
Statisticians can actually measure the degree of uncertainty around this underlying quality, though this is seldom reported.
The animation suggests how it might look.
Each school is pictured bouncing around within its range of uncertainty to show what league tables might look like if they took uncertainty seriously. Think of it as showing the variety of league-table positions any school might achieve - given constant teaching quality but different batches of pupils.
This range usually includes - somewhere, though we don't know where - the real, underlying performance.*
You might have picked the school that seemed best, but maybe its quality was flattered by a dazzling year group. Maybe when your child came to GCSEs, the school you chose did less well, or maybe better.
In fact, I've rigged the animation so that any of the five schools here could be the best, or worst. Some are a little more likely to be better than others, but we can't be sure. In the event, someone probably gets lucky with a good intake. Maybe you mistook that luck for quality - which turned out not to be so high.
By the way, we haven't let the schools just bounce around anywhere. Each has a particular uncertainty interval around it.
There's more uncertainty for small schools, where a handful of high- or low-achieving pupils can make a big difference to the results and so make the underlying teaching standard harder to see.
And there's less uncertainty around big schools, which is why Clots Comp (small) bounces around a lot with plenty of uncertainty, and why Brookside (big) doesn't bounce so much. We're a little more confident that the Brookside results are closer to representing the underlying quality of the school.
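The effect of school size can be sketched in a quick simulation. Here, two imaginary schools have exactly the same underlying quality, but one is ten times the size of the other; the pupil numbers and pass rate are invented purely for illustration:

```python
import random

random.seed(42)

def simulate_year_groups(n_pupils, true_pass_rate, n_years=1000):
    """Observed pass rates for many hypothetical year groups of one school."""
    rates = []
    for _ in range(n_years):
        passes = sum(random.random() < true_pass_rate for _ in range(n_pupils))
        rates.append(passes / n_pupils)
    return rates

TRUE_QUALITY = 0.6  # same underlying teaching quality for both schools

small = simulate_year_groups(n_pupils=30, true_pass_rate=TRUE_QUALITY)   # a small school
big = simulate_year_groups(n_pupils=300, true_pass_rate=TRUE_QUALITY)    # a big school

def spread(rates):
    """Standard deviation of observed pass rates across year groups."""
    mean = sum(rates) / len(rates)
    return (sum((r - mean) ** 2 for r in rates) / len(rates)) ** 0.5

print(f"small school year-to-year spread: {spread(small):.3f}")
print(f"big school year-to-year spread:   {spread(big):.3f}")
```

Even though the teaching never changes, the small school's published results bounce around roughly three times as much as the big school's, which is exactly why a single year's league position tells you less about a small school.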
Just to be clear, in our league table, the real quality of each school never varies. We're just not sure what that quality is. And of course, we're only talking about quality in terms of achieving a good position in the leagues. Some will want much more from a school than that.
There are other ways to show uncertainty. Some charts use coloured bands around a central estimate, as the Bank of England does for its inflation and GDP forecasts.
We've used animation to try to overcome the problem that graphs, and league tables too, often make things appear concrete, and to take away any fixed point to use as a mental anchor. The idea has its weaknesses, no doubt. But in the real world there's a lot of fuzziness about, and it's often ignored.
We can make judgements about schools, indeed we have to, and often, quite rightly, we use whatever information we can lay our hands on, exam results and all. But what if we routinely tried to show the fuzziness? What if we made a point of proclaiming uncertainty rather than brushing it aside?
Mind you, you never see it in the Premier League football table.
*Statisticians talk of 95% confidence intervals, meaning that if we create confidence intervals around a school's results, and our method is sound, those intervals will include the true, underlying performance 95% of the time. There is uncertainty around all educational performance tables, whether raw, value-added or otherwise, just as there would be around hospital league tables.
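That "95% of the time" claim can itself be checked by simulation. The sketch below uses a standard normal-approximation interval for a pass rate; the school size and true pass rate are invented for illustration:

```python
import random

random.seed(0)

TRUE_RATE = 0.6   # the school's real, underlying pass rate (never varies)
N_PUPILS = 200    # pupils in each year group
N_YEARS = 2000    # hypothetical year groups to simulate
Z = 1.96          # normal critical value for a 95% confidence interval

covered = 0
for _ in range(N_YEARS):
    # One year group's results: each pupil passes with probability TRUE_RATE
    passes = sum(random.random() < TRUE_RATE for _ in range(N_PUPILS))
    p_hat = passes / N_PUPILS
    # Normal-approximation 95% interval around the observed pass rate
    half_width = Z * (p_hat * (1 - p_hat) / N_PUPILS) ** 0.5
    if p_hat - half_width <= TRUE_RATE <= p_hat + half_width:
        covered += 1

print(f"intervals covering the true rate: {covered / N_YEARS:.1%}")
```

Run it and the fraction of intervals containing the true, underlying rate comes out close to the advertised 95%, even though no single interval tells you whether it is one of the lucky ones.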