Testing for BBC Online: The creation of the POD Test Group


I’m Richard Lyon, the head of test for the recently formed Test Group within Future Media Programmes & On Demand (POD) known as ‘POD Test’.

Programmes & On Demand’s output includes the BBC iPlayer product, the iPlayer Radio product, /programmes, social plugins and personalisation capabilities across mobile, tablet, desktop and TV.

With our range of web offerings and apps and the variety of platforms we deliver them on, testing is a huge undertaking for Programmes & On Demand.

Testing elements of iPlayer Radio


Over the last few years each of the product teams within Programmes & On Demand has built up its own testing capability separately.

The test discipline both at the BBC and in the industry in general has evolved rapidly during this time from traditional testing at the end of development to extensive use of automation and the adoption of Behaviour Driven Development (BDD).

BDD is a software development approach with test at its core. Functionality is described by tests written with the customer. Software is then written to make those tests pass to ensure a tight and highly visible coupling between the customer’s requirements and the software that is produced. 

Test is now at the point where it is the ‘glue’ between product management, development and project management.

Tests describe how a product should behave, drive software design and give a clear picture of the health and risks of a product in development.

Recently we’ve brought the separate test teams into a single POD Test group for a number of reasons:

  • We want to ensure that innovation in test tools and processes that happens within individual test teams is leveraged across all of Programmes & On Demand, to help us continue to deliver products that our audiences can rely on.
  • Most of our products are built upon shared components. We want to find better ways to perform tests across this complex ecosystem and have more of a sense of collective ownership.
  • We want to promote the test discipline within Programmes & On Demand so that test professionals have a clear path for development and career progression and to generally raise the profile of test.


Bringing the testers together into a single test group will allow us to pursue these aims more effectively.

As Programmes & On Demand is organised by product area, POD Test will be the only discipline group within it.

That’s not to say that we’ll be adopting an agency model where we ‘hire out’ testers into the product teams; far from it.

We place a high value on our testers being embedded in product teams because of the knowledge they gain about the products they work on and the pride they have in those products.

But it will give us more flexibility, when we need it, for testers to move between product teams to share innovation, respond to changing priorities or simply because they fancy a change.

 

Testing on multiple devices

A little bit about how we test.

Although we invest heavily in test automation, we still place a lot of emphasis on manual testing.

Some areas can only be tested manually, particularly video and audio quality, and some defects will only ever be uncovered by a skilled exploratory tester.

However, we have seen tremendous efficiencies achieved by automation, reducing the need for time-consuming manual test cycles of repetitive checks.

Automated test suites also bring additional benefits, such as providing ‘living documentation’: the automated tests not only document a system’s requirements but can be executed at any time against the system they describe, to check it is still meeting those requirements.

We aspire to a BDD approach where we define features in a ‘user story’ format. A user story is written in natural language, but to a strict format that describes who a feature is for, what the feature is and what benefit it delivers (e.g. ‘As a …, I want …, so that …’).
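To make that concrete, a hypothetical story (the feature itself is invented purely for illustration, not a description of a real iPlayer feature) might read:

    As a listener,
    I want to resume a programme from where I left off,
    So that I don't have to find my place again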

Then representatives from the product, engineering and POD Test team come together to flesh out the behaviour of that user story by writing acceptance tests for it.

Acceptance tests will add more detail to the user story. Like the user story they are written to a strict format that describes pre-conditions, an event and an outcome (e.g. ‘Given…, when…, then…’).
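Carrying on with the hypothetical story above, an acceptance test written in that format might look something like this:

    Scenario: Resuming a partially played programme
      Given I have listened to the first 10 minutes of a programme
      When I play that programme again
      Then playback starts from 10 minutes in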

The great benefit of user stories and acceptance tests is that they provide a shared artefact: to a product owner it reads as a description of their requirements, to engineering as a specification to build to, and to test as a set of executable tests.

Traditionally these may have existed as three separate documents that very quickly fall out of sync, leading to misunderstandings and delays.

Where possible the tests are automated, either by the developers coding the feature or with the assistance of a developer-in-test.
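As mentioned in the comments below, most of our automated tests are written in Ruby, with Cucumber as our story runner. Purely as a sketch of what automating the hypothetical scenario above could look like (the ‘player’ helper object is invented for this example and isn’t one of our real test libraries):

    # features/step_definitions/playback_steps.rb
    # Sketch only: 'player' is a hypothetical helper object,
    # and expectations assume rspec-expectations is loaded.

    Given(/^I have listened to the first (\d+) minutes of a programme$/) do |minutes|
      player.start_programme
      player.seek(minutes.to_i * 60)   # jump forward, in seconds
      player.stop
    end

    When(/^I play that programme again$/) do
      player.resume
    end

    Then(/^playback starts from (\d+) minutes in$/) do |minutes|
      expect(player.position).to be >= minutes.to_i * 60
    end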

When the tests pass, the feature is ‘done’, and the automated tests build up into a regression pack that allows us to check that new features we develop aren’t breaking existing functionality.
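If scenarios are tagged, for example with a tag such as @regression (the tag name here is just a convention we might choose, not anything built into the tools), that pack can be run on demand from the command line:

    cucumber --tags @regression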

Device testing is one of the biggest challenges we have. This means ensuring that new devices, such as mobiles, connected TVs, set-top boxes and games consoles, work with our products, and that new video profiles, formats and features perform as expected across existing devices.

For mobile and connected TVs our testing can’t possibly cover all devices. Instead we have to prioritise based on audience usage stats, which tell us which devices are most popular.

We may then categorise these further, grouping similar devices together in order to give us as broad coverage as possible.
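As a rough sketch of that idea (the devices, usage figures, categories and thresholds below are invented for illustration, not real audience stats), prioritising and grouping might look like this in Ruby:

    # Sketch only: usage shares and categories are made up for illustration.
    usage_share = {
      'Phone A (mobile)'          => 0.24,
      'Tablet B (mobile)'         => 0.18,
      'Smart TV C (connected TV)' => 0.09,
      'Console D (games console)' => 0.03
    }

    # Test the most popular devices individually...
    priority = usage_share.select { |_, share| share >= 0.10 }.keys

    # ...and cover the rest with one representative device per category.
    representatives = (usage_share.keys - priority)
                        .group_by { |name| name[/\((.+)\)/, 1] }
                        .map { |_category, devices| devices.first }

    puts "Test individually:     #{priority.join(', ')}"
    puts "Test one per category: #{representatives.join(', ')}"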

The way we test our products never stands still. As we continue to evolve and develop our products, new challenges arise in how we test them; the introduction of responsive design is a recent example.

We’ll be blogging more about how we’re tackling some of these challenges in POD Test and about how we test some of our big product launches this year, so keep checking back here and we’ll let you know how we’re getting on.

It would be great to hear your feedback, please leave a comment and tell us what you think.

Richard Lyon is the head of test for the POD Test Group.


Comments

This entry is now closed for comments.

  • Comment number 7. Posted by Richard

    on 10 Apr 2013 09:47

    Hi @Eponymous - it can be difficult to test against all setups that users might have. In this case, our investigations into reported problems after the release pointed to issues with certain setups that use cookie blocking/ad blocking apps. We've since rectified the problem in a later release.

  • Comment number 6. Posted by _Ewan_

    on 4 Apr 2013 19:34

    This comment was removed because it broke the house rules.

  • Comment number 5. Posted by Richard

    on 4 Apr 2013 18:09

    Hi @whitingx - yes, we'll be going into more detail on specific releases and products.

  • Comment number 4. Posted by Eponymous Cowherd

    on 4 Apr 2013 14:09

    Would you care to offer an explanation as to what happened to the February release of the Android iPlayer?

    As soon as it was released the complaints started. Many people finding that the app, which worked previously, no longer worked at all, and many more finding severe problems such as no sound or picture during playback.

    How did this get through testing?

  • Comment number 3. Posted by whitingx

    on 3 Apr 2013 16:44

    Great article, interesting to see how Behaviour Driven Development is used at the BBC.

    Any plans for a follow up article going into more detail about your BDD tools and processes?

  • Comment number 2. Posted by Richard

    on 3 Apr 2013 11:29

    Hi @Ivan - the vast majority of our automated tests are written in Ruby. We use a number of libraries including Calabash, Capybara, Mechanize and Nokogiri and use Cucumber as our story runner.

  • Comment number 1. Posted by Ivan

    on 3 Apr 2013 08:02

    Awesome post. Love to hear that BDD is also being used at the larger companies. Could you describe the tools you're using, like testing frameworks, scripting or programming languages? Thanks!

