Friday 15 August 2014, 08:56
Legacy features vs. new features
Native BBC iPlayer applications on mobile have been around since 2010. As with any tool, we inevitably had a smaller range of automation options available to us in the early days. This means we have a large suite of manual regression tests that are executed prior to deploying new application versions.
The team agreed that going back and retrofitting automated tests to legacy features would be very time-consuming, and that the effort wouldn't benefit the features currently being built. We therefore decided to build automated tests only for features under active development.
To help address the backlog of manual tests, the DiTs (Developers in Test) on the team pair up with Test Engineers when they're free and identify which legacy areas of the app would benefit from automated tests. Using this approach we are slowly building up an automated regression suite that runs each night against our latest development build.
At present the automated tests are executed only on a small handful of iOS and Android devices plugged directly into our Continuous Integration (CI) server while we stabilise our build processes and reduce false positives. The long-term ambition is to run these tests on as many real, physical devices as we can. We're working closely with our Test Tools team, who are currently developing a device testing platform for mobile, tablet and even smart TVs that will help us scale our testing efforts (look out for a blog post on this soon!)
With the team now writing feature files collaboratively and automating features as they are built, we hope to release new versions of the app more quickly and with greater confidence in their stability and quality.
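As an illustration of what a collaboratively written feature file looks like, here is a hedged sketch in Gherkin, the format Calabash consumes (the feature, scenario and step wording are hypothetical, not taken from the actual iPlayer suite):

```gherkin
Feature: Downloading programmes
  As an iPlayer user
  I want to add programmes to my download queue
  So that I can watch them offline

  Scenario: Adding an episode to the download queue
    Given I am viewing an episode page
    When I tap the "Download" button
    Then the episode appears in my download queue
```

Because feature files are plain English, developers, testers and business analysts can all review and contribute to them before a line of step-definition code is written.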
Since we started automating more of our testing, it has raised some questions about its effect on the development process and whether it really is the right way forward. What do you think? How would you move forward?
50-70 tests are quite easy to manage but what happens when you start to get to 100, 200 or even 500 tests?
UI tests tend to be more flaky than other types of automated testing because they exercise the system as a whole rather than in focused units. This creates a greater number of points of failure, so how do you limit the number of places the tests can fail?
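One common way to limit timing-related failures is the explicit-wait pattern that frameworks such as Calabash provide (via its `wait_for` helper): poll a condition until it holds rather than asserting immediately or sleeping for a fixed time. A minimal sketch in Ruby, with illustrative timeout values:

```ruby
# Poll the given block until it returns true, raising if the
# deadline passes. This avoids two flaky extremes: asserting
# before the UI has settled, and sleeping longer than needed.
def wait_until(timeout: 10, interval: 0.5)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise "condition not met within #{timeout}s" if Time.now >= deadline
    sleep interval
  end
end
```

In a Calabash step this might wrap a UI query, e.g. `wait_until(timeout: 5) { query("* marked:'Download'").any? }` (the selector here is illustrative), so the test waits for the app to catch up rather than failing on a slow animation or network call.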
Our tests currently use a cross-platform framework (Calabash), but would a more platform-specific tool be better, e.g. iOS's Instruments or Android's Espresso? The DiT team intends to spend some time evaluating the alternatives.
UI automation tests only tell you that a specific action still works, e.g. you can still add an item to your download queue. They will not tell you that the item added is correct (perhaps the episode of EastEnders you selected turns out to be Top Gear when you play it back). This is something an automated test could never verify, and it shows where manual testing effort continues to be required and to add value.
Automated tests don't prevent bugs; they just tell you that a bug exists (admittedly sooner than typical manual testing would). We consider them a small part of the bigger picture of software development, which should also include other best practices: unit testing, test-driven development, pair programming and excellent manual testing techniques.
What would you do? How would you move forward with automated testing? Comment below and we can explore your ideas in future blog posts.
We've learned a lot and still have a lot more to learn, but I will be posting again with lessons learned and best practices for automating mobile testing.
If there are other testing topics that interest you, do let me know in the comments!
Jitesh Gosai is Senior Developer in Test, BBC Future Media