Testing BBC Connected Red Button

Junior Developer-in-Test


I am Krishnan, part of the team that brought the Connected Red Button service to TVs. I'm a Junior Developer-in-Test working in the Platform Test team at MediaCityUK in Salford. This post is about how we test the BBC Connected Red Button service on Smart TV devices.

The Connected Red Button service available on connected TV devices

Team Structure

Testers are embedded within our agile development teams as Test Engineers or Developers-in-Test. We work very closely with Software Developers, Product Owners, Project Managers and Business Analysts to develop and test software, ensuring we make a great end product.


The team operates a pull-based workflow where we set a limit on our work in progress (WIP). This limit is set to four tasks, with each tester supporting two tasks.

CRB Kanban board

Developers proactively identify a tester to pair with who has the capacity to support the work moved to in-progress on the Kanban board. This has helped us regulate work better, avoiding bottlenecks and overburdening. It has improved the engagement of testers and has resulted in more testing happening earlier in the workflow. It has also helped to break down the old divides between developer and tester, ensuring we work collaboratively by developing and testing in parallel. Overall, the quality and the flow of work to "ready to deploy" have improved.

We use Behaviour Driven Development (BDD) tools like Cucumber to help us organise and automate our acceptance criteria. Testable acceptance criteria are created collaboratively for the feature to be developed; issues can be detected and even dealt with at this early stage. It also helps ensure that everyone has a clear view of what is required and, importantly, that the product developed and tested relates directly to what the Product Owner wanted in the first place!
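As an illustration, acceptance criteria written this way end up in a Cucumber feature file. The feature and steps below are invented for this post, not taken from the actual CRB test suite:

```gherkin
Feature: Programme carousel
  As a viewer
  I want to browse featured programmes
  So that I can quickly find something to watch

  Scenario: Viewer opens the carousel
    Given the Connected Red Button application is loaded
    When I press the red button on my remote
    Then the programme carousel is displayed
    And the first programme is highlighted
```

Because each step maps to a Ruby step definition, the same plain-English criteria the Product Owner agreed to become the automated checks run against the build.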

We use feature toggling as a way of letting us ship releasable software at any time, even if we are in the middle of building a new feature that isn't ready for users to see. Whenever we develop a new feature it is 'feature toggled' on or off according to its readiness, and then a build is created for testing.
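The idea behind a feature toggle can be sketched in a few lines of Ruby. This is a minimal illustration, assuming a simple registry of named flags; the class and feature names are invented, not the actual CRB implementation:

```ruby
# Minimal feature-toggle sketch: a registry of named flags where a
# feature stays off until it is explicitly enabled for a build.
class FeatureToggles
  def initialize(toggles = {})
    @toggles = toggles
  end

  # A feature defaults to off until it is explicitly enabled.
  def enabled?(name)
    @toggles.fetch(name, false)
  end

  def enable(name)
    @toggles[name] = true
  end
end

toggles = FeatureToggles.new("programme_carousel" => true)
toggles.enabled?("programme_carousel")      # => true
toggles.enabled?("unfinished_new_feature")  # => false
```

A half-built feature can ship in a releasable build with its toggle off, then be switched on for a test build without any code change.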

We initially test the feature running on a VM (Virtual Machine), and if we find any easily fixable bugs, they are fixed immediately and released in a new build on the VM. We monitor the health of builds using build verification tests running in a browser, as code is continually integrated into trunk and builds are created periodically.

CRB build monitor

If a feature has issues that cannot be fixed immediately, they are triaged and added to the product backlog. When a viable VM-tested build is ready, it is released onto our TEST environment. Here we fully test the new feature on a TV device and carry out a regression test of major existing features in the application, using a combination of automated and manual testing. This is how we work now, but it is constantly evolving as we try to continuously improve.

Automated Testing

Automated tests are created as features are developed, using a Cucumber and Ruby framework which includes a message queue that sets up a communication channel between the test framework and a browser or device. These message queues are created on the fly whenever they are needed, and messages and responses are sent and received using simple HTTP requests. The test results from automated execution on devices are pushed automatically into our test case management tool, and from that we can produce a combined, dynamic view of product coverage and automated device execution in our test dashboard.

CRB Test dashboard
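The on-the-fly message queue idea can be sketched as follows. In the real framework the messages travel over HTTP between the tests and a device or browser; here the transport is simplified to an in-memory store so the shape of the idea is easy to see. The class and method names are assumptions for illustration:

```ruby
# Sketch of named message queues created lazily on first use, so a
# test and a device (or browser) can exchange commands and responses.
class MessageBroker
  def initialize
    # A new empty queue is created the first time a name is used.
    @queues = Hash.new { |queues, name| queues[name] = [] }
  end

  def send_message(queue_name, message)
    @queues[queue_name] << message
  end

  # Returns the oldest message, or nil if the queue is empty.
  def receive_message(queue_name)
    @queues[queue_name].shift
  end
end

broker = MessageBroker.new
broker.send_message("device-123", { command: "press", key: "RED" })
broker.receive_message("device-123")  # => { command: "press", key: "RED" }
```

In the real setup, sending and receiving would each be a simple HTTP request to the queue endpoint rather than a direct method call.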

Our automated tests use a combination of live data and canned data from a static data provider. Canned data is very useful during the development of new features, when the service layer code that supplies the data in normal operation is still being built.
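Falling back to canned data might look like the following sketch. The provider shape and the data itself are invented for illustration; the point is simply that tests keep running against static data while the live service layer is unfinished:

```ruby
# Sketch: use live data when the service layer responds, otherwise
# fall back to canned data from a static provider.
CANNED_SCHEDULE = [
  { title: "Wimbledon Highlights", channel: "BBC One" }
].freeze

def fetch_schedule(live_provider)
  live_provider.call
rescue StandardError
  # Service layer unavailable or still under development.
  CANNED_SCHEDULE
end

# Live provider not implemented yet: the canned data is used instead.
broken_live = -> { raise "service layer not implemented" }
fetch_schedule(broken_live)  # => canned schedule
```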

Regression and Risk based testing

Regression testing of a product is a combination of automated and manual testing. Product coverage is set by prioritising areas of change in the code and key existing features that would have the greatest impact on the product if broken.

Using a risk-based test scope we can reduce our overall product regression effort whilst managing the risk that significant application issues could be introduced. The supported device list keeps growing, so we also take a risk-based approach to our selection of devices, testing representative models from each manufacturer.
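One simple way to pick representative devices is to score each model by risk (for example, by install base or history of device-specific defects) and take the highest-risk model per manufacturer. This sketch and its data are invented for illustration, not the actual CRB selection process:

```ruby
# Sketch: select one representative (highest-risk) model per
# manufacturer instead of regression-testing every supported device.
DEVICES = [
  { manufacturer: "Samsung",   model: "A", risk: 9 },
  { manufacturer: "Samsung",   model: "B", risk: 4 },
  { manufacturer: "LG",        model: "C", risk: 7 },
  { manufacturer: "Panasonic", model: "D", risk: 5 },
  { manufacturer: "Panasonic", model: "E", risk: 8 },
].freeze

def representative_devices(devices)
  devices
    .group_by { |d| d[:manufacturer] }
    .map { |_, models| models.max_by { |d| d[:risk] } }
end

representative_devices(DEVICES).map { |d| d[:model] }  # => ["A", "C", "E"]
```

Three devices are tested instead of five, while every manufacturer's highest-risk model stays in scope.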


Previously, when we had only a small amount of automation, a lot of time was spent manually testing builds on a device, and sometimes this testing had to be repeated once a build was found to have bugs. With more automation, both in validating developed builds and in testing on devices, we can find bugs faster and spend less effort and time on manual testing. Additionally, process experiments across the whole development team have found ways to improve the flow of work, making it possible to increase the number of supported devices and still introduce new features into CRB quickly.

Krishnan Sambasivan is a Junior Developer-in-Test in Platform Test, BBC Future Media


