Learning from an Alpha service assessment

Today, our digital exemplars underwent an assessment by an external panel, convened to review the work to date and recommend whether it should continue to private beta.

We adopted the digital service standard in November, shortly after beginning to design our first two digital exemplars. At the end of discovery, the work was assessed by two leading councillors in the county. It was important to get their input, not least because we’d only just adopted the standard. But whilst it was really good to share the prototypes, and useful to discuss our understanding of user need, it wasn’t reasonable to expect them to give a thorough examination of each of the 18 points in the standard.

For Alpha, we convened a panel of peers, including Phil Rumens, Vice Chair of Local Government Digital, and Toby Price, a strategy consultant with a background in Agile and Oracle. We showed them the prototype, and then gave a presentation broken into four parts:

  1. Our understanding of user need (points 1, 2, 12, 13, 14)
  2. Agile, multi-disciplinary team (points 3, 4, 5)
  3. Technology and data (points 6, 7, 8, 9, 10, 11)
  4. Performance measurement (points 15, 16, 17, 18)

Their assessment will be reported to our Board, which will then decide whether to allow the project to proceed to beta. (Which is a diplomatic way of saying I’ll be sharing it with them first.)

In the meantime, the exercise taught me four things about the service assessment:

  1. Building to the standard provides a framework to judge your work. Preparing for the assessment was as valuable as the assessment itself
  2. Service assessments require time and expertise. Phil and Toby gave two hours of their day to scrutinise the work, and we could probably have used even more time. Both raised useful observations based on their wider understanding of user needs and their professional experience of digital development. As more local authorities work to the service standard, we’ll need to figure out the best way of resourcing this
  3. The standard can be applied in different ways. At discovery, it felt like a check on whether we had found sufficient evidence to continue. At Alpha it felt like quality assurance and an early warning mechanism: if we’d missed anything, we could pick it up in Beta. I’m sure in Beta it’ll feel more like a ‘pass/fail’ test
  4. It moves the conversation beyond subjective discussions about look and feel. Whilst the user experience is the critical theme in the standard, the assessment judges the quality of the output, not the colour of the headings. That’s incredibly valuable, and perhaps more important than it should be!

We’ll need to consider the approach again for Beta, but we’re learning all the time about how to meet the standard, and how to ensure it’s a valuable part of making services so good that people prefer to use them.
