QA Testing at the UK Government

In their service manual, the UK Government sets out the essential role QA testing plays in delivering high-quality digital services.

They state that regular QA testing is needed to ensure the service:

  • is easy to use for anyone who needs to use it, regardless of the device they’re using
  • is stable, secure and works quickly regardless of how many people need to use it
  • can be iterated quickly to meet changes to user needs or the political environment

They identify that automated testing should be used as much as possible, and that you should run your test suite as part of continuous integration (where your tests form part of your codebase). By testing your code automatically every time you make a change, you’ll be able to find defects more quickly.
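As a sketch of what tests that “form part of your codebase” look like in practice (GOV.UK applications are mostly Ruby on Rails, so Ruby is used here), a continuous integration pipeline would run a test file like this on every change. The `format_reference` helper is hypothetical, invented only to give the test something to check:

```ruby
# A minimal unit test of the kind a CI pipeline runs automatically on
# every change. Minitest ships with Ruby's standard library.
require "minitest/autorun"

# Hypothetical helper under test: normalises a user-entered reference.
def format_reference(ref)
  ref.to_s.strip.upcase
end

class FormatReferenceTest < Minitest::Test
  def test_strips_whitespace_and_upcases
    assert_equal "ABC123", format_reference(" abc123 ")
  end

  def test_handles_nil_input
    assert_equal "", format_reference(nil)
  end
end
```

Because the test lives alongside the code, any change that breaks the expected behaviour fails the build immediately rather than surfacing later in manual QA.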

Types of testing

They advise running different types of tests depending on what you need to check.

They also suggest it’s useful to hire experts from outside your team for some types of testing, such as penetration testing. They state that ‘You should test the usability of your service as well as the technical parts’, highlighting the importance of testing to ensure online services meet accessibility requirements.

In this video they demonstrate their User Research Lab, where they employ a sophisticated array of tools and practices to assess how users interact with their digital services.

Testing for GOV.UK

In this blog they examine how these practices are applied, providing a detailed analysis of how they test services being deployed to GOV.UK.

They highlight the importance they place on testing and the rigour they apply, writing tests for any change before it is implemented (Test-Driven Development). They also walk through the challenges they’ve experienced using manual and automated testing practices across such a complex estate: GOV.UK is an ecosystem of 70 applications and components, mostly written in Ruby on Rails.

Addressing this challenge was key to their switch to Continuous Deployment. To achieve this they adopted a novel approach, applying the same methodology used for security testing but swapping the concept of an ‘attack surface’ for a ‘bug surface’: protecting against ‘attacks’ by bugs rather than hackers.

The surface areas they identified include server-side code, API adapters, client-side code, production-specific code, and manual and periodic tasks. For each, they defined an associated best-practice testing regime, giving them a comprehensive framework that maps the complexity of the environment onto a testing strategy.
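As an illustration only (not GOV.UK’s actual tooling), mapping each ‘bug surface’ to its expected testing regime can be sketched as a simple lookup, with an audit reducing to finding surfaces that have no coverage:

```ruby
# Hypothetical sketch: each 'bug surface' named in the blog mapped to
# the kinds of testing that would be expected to cover it. The regime
# names are illustrative assumptions, not GOV.UK's published standard.
BUG_SURFACES = {
  "server-side code"          => ["unit tests", "integration tests"],
  "API adapters"              => ["contract tests"],
  "client-side code"          => ["unit tests", "end-to-end tests"],
  "production-specific code"  => ["smoke tests in production"],
  "manual and periodic tasks" => ["documented runbooks", "scheduled checks"],
}

# An audit, in this sketch, simply reports the surfaces for which an
# application has no tests at all.
def uncovered(surfaces, coverage)
  surfaces.keys.reject { |surface| coverage.fetch(surface, []).any? }
end
```

Running `uncovered(BUG_SURFACES, { "server-side code" => ["unit tests"] })` would then list the four remaining surfaces as gaps, mirroring the per-app audit described below for Whitehall.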

They then audited each app against this new standard, citing a case study for Whitehall (GOV.UK’s oldest app) that details the audit and the remedial actions taken to put in place the testing identified as missing. Their goal is to roll out this standard as the default best practice for all digital service development.

Job Roles

The job roles that implement these types of practices are defined in their professional skills guide, notably a QAT Analyst and a Test Engineer, with activities and responsibilities including:

QAT analyst

A QAT analyst will undertake and execute appropriate test design and perform exploratory testing. At this level, you will:

  • collaborate with delivery teams and determine the testability of functional and non-functional requirements.
  • have domain and business knowledge.
  • take a business and operational view when analysing the system under test.

Test engineer

A test engineer is responsible for writing, debugging and refactoring test code. At this level, you will:

  • work closely with software developers to reach a common understanding of the code base and test coverage at unit level.
  • collaborate with analysts to make sure the required business scenarios are covered in the acceptance test scripts.
  • work on both functional and non-functional areas of an application.
  • coach and mentor testers.

Skills needed for these roles

  • Functional testing. You can design and execute test cases using standard testing techniques. You can come up with different business scenarios for a feature, working with others in the team. (Relevant skill level: working)
  • Non-functional testing. You can design and execute non-functional test cases using standard testing techniques, in instructed environments. You can come up with different business scenarios for a feature, working with others in the team. (Relevant skill level: working)
  • Technical breadth. You can use a range of technologies for testing. You know how to use one type of tool to write test scripts. You may use technologies to design and execute test cases under guidance. (Relevant skill level: working)
  • Test analysis. You can identify simple patterns and trends. You know how to investigate problems and opportunities in existing processes and can contribute to recommending solutions to these. You know how to work with stakeholders to identify objectives and potential benefits. (Relevant skill level: working)