Test Automation with PhantomJS, Grunt and Friends
Note: Code Snippets for this post are available on GitHub.
I am a senior developer in Test for BBC /programmes.
This post is the latest in a regular series about the work of the BBC's testing team. It's about how we used PhantomJS to test /programmes pages. We have used PhantomJS extensively for almost every kind of automated testing, including web acceptance, performance and accessibility testing.
Headless Web Acceptance Testing
Toolkit: Ruby, Cucumber, Capybara, poltergeist
We use the Ruby-Cucumber framework to automate acceptance tests, with Poltergeist as a PhantomJS-based Capybara driver to run scenarios in the headless browser. Poltergeist is very flexible to set up, and its benefits include:
• Poltergeist can take screenshots to debug failing scenarios.
• Poltergeist doesn't need additional setup such as Xvfb or Xvnc to run scenarios on Continuous Integration servers; it only needs PhantomJS to be installed.
• Poltergeist can capture network traffic with details of all the resources loaded during a page visit. This feature helped us automate our iStats checks.
• Poltergeist can be used to set cookies and request headers.
• Poltergeist lets us set many PhantomJS options before the page loads, e.g. disabling image loading, ignoring SSL errors and more.
Poltergeist can be easily configured with Capybara with this code snippet in the ‘support/env.rb’ file. We run all the Poltergeist regression tests after every check-in through Hudson, and the reports are logged in TestRail. On average we run more than 50,000 scenarios per month with Poltergeist.
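The configuration could look something like the sketch below. This is a minimal illustration of registering Poltergeist with Capybara, not our exact env.rb (which is on GitHub); the option values shown here are illustrative.

```ruby
# support/env.rb sketch — register Poltergeist as the Capybara driver.
# The option values below are illustrative, not our production settings.
require 'capybara/cucumber'
require 'capybara/poltergeist'

Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(
    app,
    js_errors: false,                          # don't fail scenarios on page JS errors
    timeout:   60,                             # seconds to wait for server responses
    phantomjs_options: ['--load-images=no',    # skip images for faster runs
                        '--ignore-ssl-errors=yes']
  )
end

Capybara.javascript_driver = :poltergeist
Capybara.default_driver    = :poltergeist
```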
The BBC /programmes site test execution on TestRail
Poltergeist has also been used to check the response codes of all the /programmes page types as part of our smoke tests. The smoke test builds visit every /programmes page type and ensure that the HTTP response code is 200.
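The core of such a smoke check can be sketched in plain Ruby with Net::HTTP. This is a simplified illustration rather than our actual build script, and the URL list is a hypothetical sample, not the full set of /programmes page types:

```ruby
# Minimal smoke-check sketch: fetch each page type and verify a 200 response.
# The URL list below is illustrative, not our full set of /programmes pages.
require 'net/http'
require 'uri'

PAGES = %w[
  http://www.bbc.co.uk/programmes/b006q2x0
  http://www.bbc.co.uk/radio4/programmes/schedules/fm
]

def smoke_ok?(status_code)
  status_code == 200               # the smoke build only accepts a plain 200
end

def check(url)
  response = Net::HTTP.get_response(URI(url))
  smoke_ok?(response.code.to_i)
end

# Uncomment to run against the live pages:
# PAGES.each { |url| puts "#{url}: #{check(url) ? 'PASS' : 'FAIL'}" }
```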
Responsive Test Automation
Toolkit: PhantomJS, responsive_screens.js, Oily PNG, Chunky PNG
BBC /programmes has gone through a responsive retrofit for almost all page types. All page types, including brand, series, episode, clips, galleries and ancillary pages, are now responsive. We have set an option in Poltergeist so that our responsive Cucumber scenarios can run with different viewports, controlled by the environment variable DEVICE with values of 320, 600, 770 or 1026 for the screen size. You can find the code snippet for this hook on GitHub; we add it as a hook.rb file in Cucumber's ‘support’ directory. Now scenarios can be run like this:
$ DEVICE=320 bundle exec cucumber
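The idea behind the hook can be sketched as below. This is a simplified version for illustration; the actual hook is on GitHub, and the width-to-height mapping here is an assumption made for the example:

```ruby
# support/hook.rb sketch: resize the Poltergeist window per the DEVICE variable.
# The width→height pairs below are assumed for illustration.
VIEWPORTS = {
  '320'  => [320, 480],    # small phone
  '600'  => [600, 800],    # large phone / small tablet
  '770'  => [770, 1024],   # tablet
  '1026' => [1026, 768]    # desktop
}.freeze

def viewport_for(device)
  VIEWPORTS.fetch(device, [1026, 768])   # default to desktop when DEVICE is unset
end

# In Cucumber this would run before every scenario:
# Before do
#   width, height = viewport_for(ENV['DEVICE'].to_s)
#   page.driver.resize(width, height)    # Poltergeist's window resize method
# end
```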
PhantomJS has also been used to take responsive screenshots at the different viewports and compare them across environments. The differences are highlighted in a diff image file. We used 'responsive_screens.js' to capture screenshots of the different viewports and a 'compare.rb' Ruby script to compare them, with the help of the Oily PNG and Chunky PNG Ruby gems.
The bash script ‘compare_screen’ takes care of capturing the screenshots with PhantomJS, comparing them and creating the diff file. The script can be run like this:
$ ./compare_screen url1 url2 screensize
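At the heart of such a comparison is a per-pixel diff. ChunkyPNG exposes pixels as integers, so the core logic reduces to comparing two pixel grids; the sketch below shows that idea over plain arrays so it runs gem-free — compare.rb itself works on ChunkyPNG/Oily PNG images, so this is an illustration of the technique, not the actual script:

```ruby
# Core of an image diff: count and locate pixels that differ between two
# equally-sized images, represented here as rows of integer pixel values.
def diff_pixels(image_a, image_b)
  diffs = []
  image_a.each_with_index do |row, y|
    row.each_with_index do |pixel, x|
      diffs << [x, y] unless pixel == image_b[y][x]
    end
  end
  diffs
end

def similarity(image_a, image_b)
  total   = image_a.length * image_a.first.length
  changed = diff_pixels(image_a, image_b).length
  100.0 * (total - changed) / total    # percentage of matching pixels
end
```

In the real script, the differing pixel coordinates would then be painted in a highlight colour and saved as the diff PNG.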
Performance Test Automation
Toolkit: Phantomas, grunt-phantomas, yslow.js, confess.js
PhantomJS can also be used to check the performance of web pages. We have used yslow.js and confess.js to check the performance of selected /programmes pages. The yslow.js and confess.js scripts are downloaded into the 'lib' directory. YSlow analyses the performance of web pages based on Yahoo's rules for high-performance websites, while Confess is a library which analyses web pages headlessly for performance. We got the yslow script here to run on Hudson and check whether the overall score of the page is acceptable. We check the performance of the pages by passing the report location, threshold and URLs to the script. In the Hudson job below we are checking the scores of the Radio 4 schedule, brand and episode pages.
The script can be executed locally:
$ ./script/yslow report/ D http://www.bbc.co.uk/radio4/programmes/schedules/fm
The Hudson job looks like this:
YSlow performance Hudson build with test status
We have certain CDN-related failures, but our overall score has always passed.
In order to use Confess, we need to create a ‘config.json’ file and run the confess script using this config file. Confess can tell you the load time, and the slowest and fastest resources, in the form of a waterfall diagram.
$ phantomjs /path_to/confess.js http://www.bbc.co.uk/programmes/b006q2x0 performance path_to/config.json
‘grunt-phantomas’ is another tool, a Grunt wrapper around 'phantomas'. Phantomas is a PhantomJS-based web performance metrics collector. We have a package.json file with all the npm dependencies, including 'grunt-phantomas'. In the Gruntfile we have configured the 'phantomas' task with all the metrics settings. Now we can run the phantomas task with Grunt like this:
$ npm install
$ ./node_modules/.bin/grunt phantomas
This will create a 'phantomas' directory and an index.html file with cool results charts.
grunt phantomas execution in local machine
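A Gruntfile configuration for the phantomas task could look like the fragment below. This is a sketch following grunt-phantomas' documented option shape; the URL, run count and assertion thresholds are illustrative, not our exact settings:

```javascript
// Gruntfile.js sketch: configuring the 'phantomas' task.
// The URL and threshold values here are illustrative assumptions.
module.exports = function (grunt) {
  grunt.initConfig({
    phantomas: {
      programmes: {
        options: {
          indexPath: './phantomas/',                        // where index.html lands
          url: 'http://www.bbc.co.uk/programmes/b006q2x0',
          numberOfRuns: 5,                                  // average over several runs
          assertions: {
            requests: 100,                                  // fail above 100 requests
            timeToFirstByte: 1000                           // milliseconds
          }
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-phantomas');
};
```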
Accessibility Test Automation
Toolkit: PhantomJS, Grunt, grunt-accessibility
Automated accessibility testing can be achieved with PhantomJS by using the ‘grunt-accessibility’ plugin. It grades pages against the different levels of the WCAG guidelines. This doesn’t replace manual accessibility testing, but it can detect some HTML issues with respect to accessibility. In the Gruntfile, we have an accessibility task which tests HTML code located in the ‘html’ directory. We have written a little script to get the HTML code of the page, put it inside the ‘html’ directory and execute the Grunt accessibility task. The code snippet for the script ‘grunt_accessibility.sh’ is available on GitHub.
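The accessibility task configuration could look like the fragment below. This is a sketch following grunt-accessibility's documented option shape; the WCAG level and file glob are illustrative, not necessarily our exact settings:

```javascript
// Gruntfile.js sketch: configuring the 'accessibility' task.
// The level and file pattern below are illustrative assumptions.
module.exports = function (grunt) {
  grunt.initConfig({
    accessibility: {
      options: {
        accessibilityLevel: 'WCAG2A'   // grade against WCAG 2.0 level A
      },
      test: {
        src: ['html/*.html']           // the HTML files fetched by our script
      }
    }
  });

  grunt.loadNpmTasks('grunt-accessibility');
};
```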
We can run this script by passing the URL of the page to check:
$ ./grunt_accessibility http://www.test.bbc.co.uk/programmes/b006q2x0
grunt-accessibility execution in local machine
We have created a Hudson job which displays the Warnings and Notices on the job dashboard, and fails the build for major errors.
Accessibility warnings on Hudson
PhantomJS can be used for web acceptance testing, web performance checking and accessibility checking, and testing of a web application can be extended further by taking advantage of the extensibility of PhantomJS and the related open-source tools.
Shashikant is a Senior Developer-in-Test in Platform Test, BBC Future Media