If you submitted to Script Room and haven’t checked your inbox, then there should be an email winging its way to you with our longlist decision – ie whether or not your script is being given a full read and feedback. It’s taken longer than we hoped, simply due to the sheer number of submissions and our need to be as thorough and rigorous as we can when dealing with so many scripts. We had a team of 16 readers working across 6 solid weeks to get from the full set of submissions to a longlist of scripts receiving feedback.
So here are some stats about where submissions got to in the process – and a few thoughts about the process as extrapolated from the stats.
At the first 10-page sift, 83% of all submissions were given a NO verdict, which means they didn’t progress beyond that first sift stage. Proportionally, that’s more or less the same as the last two Script Rooms – which means that since we had more scripts submitted, more scripts progressed through and there was physically more work still to do. What we have noticed is that at this stage, the proportion of scripts in particular genres given a NO was more or less the same as the proportion received overall – so no marked difference in how genres progressed at this stage.
At the first sift, 5% of submissions were given a MAYBE verdict – meaning the reader wasn’t sure and wanted another reader to take a look at the next sift stage. Each MAYBE was then given another 10-page look by a different reader and either became a NO or was put through for a full second-stage sift. That left the remaining 12%, which went through to the second sift as a straight YES.
I’ve blogged before about why scripts didn’t progress, so I shan’t repeat myself here, other than to say the key thing at this stage was identifying the spark of something interesting enough to make the reader want to read on.
At the second stage, we asked the readers to do a 20-30 page sift of all scripts – making sure a new reader looked at each script, ie one reader didn’t sift the same script twice across the two stages. First, we looked at the MAYBEs to decide which would progress to the second sift. Then we began looking again at everything that had progressed from the first sift. Some readers felt confident making a verdict after 20 pages, some read further, and sometimes readers read beyond 30 pages if they felt they needed to in order to make a final decision about the longlist. It was at this stage that the decisions became more difficult, less immediately clear cut, and therefore harder work. As such, what was a little different at this stage was seeing if and how that spark of something interesting managed to develop and grow as the script progressed. Having a fantastic first 10 pages wasn’t enough – the script needed to keep on being effective and engaging.
At the second sift, over half of the 12% we started with were given a NO verdict – which left us with the remaining 5%.
So, the percentage of scripts going to a full read this time is 5%. That’s exactly the same percentage as last time round, and slightly lower than the time before (though we received far more scripts this time, so in absolute terms it’s more scripts). We don’t work to a quota – so it’s interesting how close those stats are. These scripts will get a full read and feedback, and then we’ll sit down with the readers and decide which of those scripts they are recommending for a look by someone like me in the writersroom team. (Again, no quotas on that – but previously between 30% and 40% have then been recommended on.)
A few comparative stats for you:
Total Submissions vs Full Reads
TV/Radio Comedy: 33% vs 20%
TV Drama: 24% vs 27%
Film: 23% vs 23%
Radio Drama: 10% vs 8%
Stage: 8% vs 16%
Children’s: 2% vs 6%
As you can see, Film stayed the same and TV Drama and Radio Drama changed a little, but the proportion of Comedies progressing dropped a lot, while the proportion of Stage scripts progressing doubled and Children’s scripts trebled. You wouldn’t necessarily want to extrapolate anything concrete from this, other than that at the second sift stage of further, deeper assessment, some genres fared better than others. (We’re still collating the stats the readers gave us on the reasons for saying NO at different stages.)
What I hope is clear from this is how intensive the process has been. 2,800 scripts, 16 readers, 6 weeks. If you are one of the people getting a full read, then very well done on getting this far. If you are not – then don’t despair. Which is easier said than done, I know. But when we receive nigh on 3,000 scripts in one go, the odds are always going to be stacked heavily against you. And as I think it’s always important to note, just because we are saying no does not mean we are saying your script wasn’t any good. Our job is to work rigorously through everything and find a way to identify what will necessarily be a small proportion of writers with whom we think we should begin to develop a relationship.
Judging by previous times, in the end we may only be able to bring together a final group of around 25 writers – and a quick go at the maths tells you this is less than 1% of the total submissions. With odds like that, it’s important that you don’t see failing to reach that small number simply as failure. See it as an incentive to send a better script next time. To try something new and do things differently next time. To watch more TV, listen to more radio, read more scripts in our archive, see more interviews with established writers. Whatever it takes to get better, do better, get closer. Because the real danger for that 1% is that they might think they’ve made it and the pressure is off – but they haven’t, and it isn’t. It’s just the first step on a long road – the same one you are all on. And that’s the same one all writers are always on for as long as they have the desire to create better work, communicate with audiences, and continue to express their voice.