Six Months in the Life of the Radio & Music Product
Team Eno's morning scrum meeting
I joined BBC Future Media in January 2011 as Head of Radio and Music, with the job of defining and delivering the new Radio and Music product as part of BBC Online's "10 products, 4 screens, 1 service" strategy.
I manage around thirty people, whose work breaks down into:
- Product Management (decide what we build)
- Project Management (decide when we build it)
- Development (decide how we build it and actually build it)
I was extremely lucky to inherit teams of very smart and passionate people with a deep understanding of the portfolio of sites that make up the product, and of the infrastructure which supports that. I am also very fortunate to have a great partnership with the fantastic editorial team in Audio and Music under Mark Friend, who work directly with the Radio Network staff and help us to understand their priorities.
On Friday 4th May, at the next BBC Online Briefing, Mark and I will be presenting an update on the Radio & Music product, so this seemed like a good time to give you an in-depth look at what we've been doing over the past six months.
As some of you will know, each of the BBC's radio networks has long had its own website. These have evolved organically over the years to support the needs of their very different audiences, and behind those sites sits a complex set of systems which manage the metadata, schedules, encoding, rights, messages about what is now playing and so on, all built on a trail of technologies which reflect the evolution of the Internet.
My goal was to work out what to replace and how so that we could deliver an even better experience to the audience and simplify the operational work to reduce costs.
A large part of the challenge was how to build something that works equally well for Radio 1 and Radio 4 listeners without becoming the lowest common denominator. Another part of the challenge was to make it easier for features developed for one network to be available for another.
What do we want?
After a series of facilitated brainstorming sessions with the Product Management, Editorial and Technical teams, we produced a list of over 250 major features ("epics") based around three big bets: "Live", "Audio Discovery" and "Music".
At that point we had a pretty good idea of where we wanted to get to, along with some idea of the user experience from our UX colleagues. Next we took a swing at how long it would take us, using tee-shirt sizes to get a rough sense of how big each "epic" was.
When do we want it?
The Olympics made an interesting milestone about 12 months out, so we decided to go for a series of releases, one every three months.
This was just enough of a "left to right" plan to get going knowing that:
- no plan survives first contact with the enemy
- we had only high level estimates on the huge number of epics we had pulled together
- we had no track record on which to base our estimates of velocity anyway
- the end date was over the horizon
So - let's get started!
Now the question became "where?"
It turned out after reviewing our portfolio of sites that www.bbc.co.uk/radio/ was a pretty good candidate for our first deployment.
This is a site which had been around for years, was linked to from the global navigation and received a reasonable amount of traffic (400k-450k unique browsers per week), but had a very contained set of user journeys ("I want to listen live to Radio 2", "Get me to the Radio 4 homepage") and, as luck would have it, was in need of some technical work anyway.
And then how?
We did a quick analysis of the primary user journeys and agreed with the stakeholders a Minimum Viable Product (MVP) which would allow us to launch so that we could then inspect and adapt based on empirical data. After all, "our opinions are interesting but irrelevant"; it's what you think that matters.
Our stakeholders have been great at understanding and embracing the incremental agile approach. It is never easy, especially in an organisation which is used to "shipping" a finished radio programme only when it is perfect, so I've been very impressed with the way they have adapted to this crazy agile thing.
We were a tiny bit nervous about upsetting some users, so we started with an idea that we would do some A/B testing on the new site, but given how radically different the new product designs were from the existing site we quickly changed our minds and opted for a beta launch instead.
We'll come back to A/B testing for slightly more subtle decisions.
How we organised our teams
We also had to determine how best to assign the people in the team (we agreed very early that I wouldn't use the word "resource") to balance the workload.
There was clearly:
- a lot of new development that needed to be done;
- a portfolio of existing pages, visited by nearly five million unique browsers a week, which needed to be supported and maintained;
- a steady stream of ideas from the brilliantly creative teams in the networks about how to engage with our audiences online.
Oh and I didn't have infinite headcount (who knew?).
We took a guess at how much support Business As Usual (BAU) would need and then started to roll people over from their existing work to the new product.
As we did this we evolved a plan that involves a set of virtual scrum teams divided between New Product development and Business As Usual. This allows us to create cross functional scrum teams and move people between the teams without having to change their managers.
We let the teams choose their names and they've come up with "Bowie", "Eno", "Propellerheads" and "Gaga" - no prizes for guessing the theme - and we have since added "Moby".
The New Product teams (Bowie, Eno and Moby) operate on three-week sprints; Gaga have just started tracking their workflow with Kanban, which is a better fit for the business-as-usual and short-order work they manage.
Teams and Sprints both have names
So we now had something to shoot for and we quickly started creating the detailed user stories and designs.
The teams had already been operating in a sprint model but had been delivering across a wide range of short term projects and now we were moving to working on a single product with a 12 month roadmap so much of the process was new.
As part of this transition we wanted to build in a principle of "Delivering Predictable Quality" so that we could demonstrate to all our stakeholders that we would keep our promises both for time and quality. Those of you familiar with running projects won't be surprised to know that we needed to flex the scope in order to do this, and we've had great support from our stakeholders in managing this.
Because the teams were going to be learning to develop on a new platform (the BBC's Platform, sometimes known as "Forge"), we wanted to start off steady, and to give ourselves the best possible chance of success. We decided we needed an "input pack" for each sprint which contained the prioritised list of user stories (the groomed backlog) and the detailed designs, with notes on how the designs were to respond to user actions (annotated wireframes and visual designs).
We had a bit of trouble initially deciding where in the cycle the acceptance criteria should be created: by the dev team in the sprint, or as part of the input pack to the sprint. We have settled on the product management team defining the business acceptance criteria in the input pack, and the QA folks writing the technical acceptance criteria (in the form of Cucumber tests) at the start of the sprint.
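To give a flavour of the split, here is an illustrative sketch of the kind of Cucumber-style technical acceptance criterion QA might write. The feature wording, URLs and stand-in step function are hypothetical examples, not the team's actual tests.

```python
# An illustrative Gherkin scenario of the kind QA might write as a
# technical acceptance criterion (hypothetical, not real BBC tests).
FEATURE = """\
Feature: Listen live
  Scenario: A listener tunes in from a station homepage
    Given I am on the Radio 2 homepage
    When I press the listen live button
    Then the Radio 2 live stream should start playing
"""

# In Cucumber, each step is backed by code. A minimal stand-in for a
# "Given I am on the <station> homepage" step might resolve the URL:
def station_homepage(station):
    urls = {"Radio 2": "/radio/radio2", "Radio 4": "/radio/radio4"}
    return urls[station]
```

The business criterion ("a listener can tune in from a station homepage") comes from the input pack; turning it into executable steps like the one above happens at the start of the sprint.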
This model has served us reasonably well for the last six months, although as with everything we do it has evolved, and will continue to evolve, based on how well it is doing.
In order to keep the input pack going we created the "Propellerheads" team, who are responsible for defining the work that the development teams will do in the next sprint and beyond.
This team is a combination of UX, product management, business analysis and technical leads who turn the un-groomed backlog (the wishlist) into something that the dev teams can actually get stuck into, make reasonable estimates on, and get good at delivering on those estimates.
As with all new things, you've got to give it a few iterations before you can judge whether you need to change the process or just get better at it.
We've been pretty pragmatic about this and the teams have taken responsibility for making that judgement of when they want things to change.
What about the architecture?
Yes, we did have one; we didn't just make it (all) up as we went along!
Radio and Music Product Release 1.2 Architecture
We knew from the start that we would be building with a Model-View-Controller (MVC) pattern, and with some enthusiastic supporters of responsive design and progressive enhancement on the team, we decided to go for a single application for each major feature area, serving the minimum number of views for each URL, each of which would behave appropriately on the platform in question.
The four platforms we support are Desktop, Tablet, Mobile and Connected Devices (TVs and Hybrid Radios).
We needed to focus our attention where the opportunity was greatest, so went for Desktop first, followed by Mobile, then Tablet and lastly Connected Devices. We analysed the key user needs for the different platforms and decided to serve one view for both desktop and tablet, and to use device detection to serve a single view for all mobiles irrespective of form factor.
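The view-selection logic can be sketched roughly as follows. This is a minimal illustration of the approach, not the actual implementation: the user-agent patterns are deliberately simplistic assumptions, and production device detection would rely on a proper device database.

```python
import re

# Illustrative only: crude user-agent patterns standing in for a
# real device-detection service.
MOBILE_UA = re.compile(r"iPhone|iPod|Android|BlackBerry|Mobile", re.I)

def select_view(user_agent):
    """Pick the view group for a request: desktop and tablet share
    one view; all mobile form factors share another."""
    ua = user_agent or ""
    if "iPad" in ua:
        return "desktop"   # tablets are served the desktop view
    if MOBILE_UA.search(ua):
        return "mobile"    # one view for all mobiles, any form factor
    return "desktop"
```

Each URL stays the same across platforms; only the view rendered behind it changes, which keeps links shareable between devices.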
On Mobile the big question was whether to make a dynamic WebApp or installable Native App or a Hybrid. We agreed that a WebApp would be a good place to start, and that we would plan to follow up with a Native App with a richer set of features that couldn't be supported via WebApp.
So then we started looking at the development approach. One of the first things I read when I joined was feedback from one developer who had not had his code reviewed in three years. That was a surprise, but given the work the teams had been doing, I could see how it had happened.
We kicked off a code review process where no code is checked in unless it has been reviewed, either by pair programming or a separate code review.
There are plenty of people who can wax eloquent on the merits of code review but for me there are two things: better code and better developers. Fewer bugs on each check-in, and the developers learn from each other.
Anyway, we are sticking with that and also employing other good engineering practices to make sure we continue to deliver predictable quality.
So how have we done so far?
I'm really proud of the way the team have responded to this challenge. I read a post on Joel on Software where he described his developers as having "built something with their bare hands", an image which really struck home when I think about the way the team have created something that simply wasn't there before.
We've shipped to live every sprint since we first went out on beta in August. (That's every three weeks rather than every three months as I had originally suggested, so what did I know?)
We've tried to make sure that these changes are progressive, rather than disrupting the audience with a pseudo-random sequence of changes. It is always difficult to introduce change, but so far the signs are positive: we've had lots of really great feedback, some positive and some constructive, and we are reading all of those comments as they come in, as well as looking at the empirical data to understand how our work is being received.
Our last release included some features we've been working on for a while, such as the ability to customise your preset stations on desktop as well as mobile.
We are using BBC iD to store this information server side (rather than in a cookie) so that it is available on other devices, and we will continue to expand the number of interactions that involve iD.
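In outline, storing presets server side against a signed-in identity looks something like the sketch below. Here a plain dictionary stands in for the real service behind BBC iD; the class, station identifiers and defaults are all illustrative assumptions.

```python
# Hypothetical default line-up for users who haven't set presets yet.
DEFAULT_PRESETS = ["radio1", "radio2", "radio4"]

class PresetStore:
    """Toy stand-in for a server-side preset service keyed by the
    signed-in user's identity (the real one sits behind BBC iD)."""

    def __init__(self):
        self._by_user = {}

    def save(self, user_id, stations):
        # Stored against the account rather than in a cookie, so the
        # presets are available on whichever device the user signs
        # in from.
        self._by_user[user_id] = list(stations)

    def load(self, user_id):
        return self._by_user.get(user_id, list(DEFAULT_PRESETS))
```

The key design point is simply that the account, not the browser, owns the data: a cookie would pin the presets to one device, while a server-side store keyed by user ID lets them follow the listener.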
We strongly believe that the more personalised the service the more valuable it is: whether you are loving tracks on Radio 1 or saving an interesting Radio 4 show to listen to later, all of these things should follow you seamlessly (ok, I'm sorry, I had to get the "s" word in) across devices.
I look forward to more improvements with each release of the product. For example, your choice of stations will soon follow you to your mobile phone.
Chris Kimber blogged in February about a significant release for desktop and mobile; and I look forward to hearing what you have to say in the comments.
Andrew Scott is the Head of Radio and Music Product, BBC Future Media