Case study: Design sprint
To kick off a new phase of work, the entire product team was brought together for a week-long Design Sprint. The group was made up of people from across the digital department: UX, product, engineering and marketing.
Day 1: Discovery (research)
Evaluative testing of the current Reviews and Advice journey was done in the lab with new customers (non-members). A large number of insights were gathered while observing.
Day 2: Analysis & Prioritisation
Working with the user research lead, I sorted, analysed and grouped the insights into themes (site-wide bugs and usability issues were separated out). Key problems and unmet user needs were identified:
- Which? Arrogance
- Navigating content
- Personalisation and customisation to find the best
- Understanding content
Day 3: Sketch
The entire product team took part in a dot vote to prioritise which problems would have the most impact on users. Smaller, cross-functional teams were each given a problem to focus on, sketching ideas to solve or improve on it.
After spending the morning working on ideas and solutions in smaller teams, everyone came back together to collect risks and assumptions before moving forward.
Day 4: Prototyping
Another round of dot voting narrowed down the three ideas each team would take into the lab for testing:
- Hub page: A landing-page-style entry point that would give new users a sample of the breadth of the site's content. This was to tackle the problems we saw with overcomplicated navigation. It would also be a more appropriate point in the journey to inform users about the site's paywall, and would address the insight that users felt deceived by hidden content and sign-up banners that looked like external adverts.
- Review page update: A new narrative for this page, providing as much value as possible with our free content and taking a more upfront approach to the paywall. This would also aim to give users quicker answers, with key content made scannable and more findable on mobile.
- Scale/graph: A new component that would show the range of product scores from lowest to highest. The aim was to better inform new users about Which? test scores, explain why they differ from customer reviews, and clarify "is this a good score?" for existing (logged-in) users.
Day 5: User testing
The prototype, a low-fidelity Axure wireframe, was put in front of a new group of users. The same journey was replicated and users were given the same tasks to complete, to see if the experience had improved.
After testing the prototype, further analysis of the insights identified the biggest successes and improvements for users. I independently carried out an audit (similar to a heuristic evaluation) of the entire UI and visual design of the current journey, recording any issues and grading the severity of their impact on the user. These findings were combined in a final prioritisation session, mapping delivery effort against value delivered to customers.
It was agreed to focus on an improved design narrative and hierarchy for the logged-out Reviews journey, starting with the review page itself, as this was a key point of frustration for users.