Assessing our work and approach
Weeknote, w/c 5 January 2026
We're currently wrapping up our alpha looking at how to use native code to improve the design of the NHS App, and this past week we put the work through our internal design assurance process – imagine something like the mid-point between a standard design crit and an alpha service assessment, and we always include a clinician as part of the conversation. (There is more to say about this process, and why we have it, but that is a story for another day.) Normally, I am part of the panel reviewing the work, but this time I was on the other side of the table, being reviewed. It was a lot of work to prepare for (something I need to reflect on so I can try to lower the burden for our teams), but the review itself was quite fun.
On Friday, I spent a bit of time with Dave and Graham talking about how to translate everything we've done into a set of public design history posts. I would guess that there will be five or six of them covering the experiments we ran and the functional areas we explored. Across the various experiments, a few high-level themes emerged that we'll try to elaborate on:
- Platform vs public sector conventions (this is going to be very tricky to balance)
- Trust (it doesn't take much to impart NHS-ness and we might be anchoring to the wrong design paradigm)
- Place-making (the app isn't a service, it is an environment; it needs texture and landmarks)
I hope what we've learned will be useful to other teams, so I'm keen to get the material out into the world (plus it would make Frankie happy). Before we get to what we learned about the App itself, I thought it would help to describe how we learned about the app, because we have worked ourselves into a strange corner.
All of the user research we ran during this phase of work was in-person. We were working exclusively with natively coded prototypes, and that introduces a few logistical challenges:
- It is much harder to get the prototype onto a research participant's device
- If the participant has an older device, they can be excluded completely
- Observing how a research participant is using the prototype is complicated and has hard limits if you aren't sitting right next to them (e.g. in remote sessions, there is no pointer to observe)
There are ways to put an app onto someone's phone for testing purposes (TestFlight for iOS and Google Play Console for Android), but the process is a clumsy faff that requires more time and effort than most people are willing to put up with. For this recent work, in which we were testing ideas quickly and moving on just as fast, we found the best approach was to bring along our own devices with the prototypes pre-installed. That means that remote and unmoderated research are off the table, adding to the planning overhead and cost associated with any given round of research.
We used a mixture of pop-in sessions around London and lab sessions in Leeds, running them on alternating weeks. Setting up pop-in sessions requires you to already have a network of organisations to work with, or some means of developing one. Doing research in that kind of setting also tends to turn you into ad hoc tech support for a few hours each time, but that feels like a fair price for people's time. Setting up lab sessions requires a fair amount of advance planning: you have to book the lab and arrange participants through a recruiter. To simplify this work, we planned to use the lab every other week and gave the recruiter a brief to fulfil before we knew what we'd be testing. Between the two approaches, we committed in advance to running research every week, come what may.
Planning and running in-person research every week for a few months is a ton of work, but the upside is that it creates a very rich research setting. You can watch for subtle cues in body language, facial expressions, and tone of voice. You can examine how people use their hands to interact with the device. A common occurrence that felt like an aha! moment happened when we spoke with people who use the NHS App a lot. We'd have a few introductory questions to get to know them, ask them about their current App usage, and then show them our prototypes. Often, you could see their shoulders relax a little shortly after they started poking around. The small changes to visual design, the addition of gestural controls, and the presence of little bits of animation seem to signal something like "this won't be hard to use", and people instinctively relax a little. You don't need people to tell you the thing is easier to use – you can see it in their body language, and those moments make all the extra setup worth it.
The added cost inherent in the approach we've taken has a second dimension. We bang on about designers prototyping in code, but until very recently that meant web code – HTML, CSS, and a little JavaScript. Those skills are moderately common across the sector. So far as I can tell, native code skills are much less common. When I've told friends and colleagues how we were working, I've gotten some raised eyebrows. No one I've spoken to has seen all of the designers on a large app prototyping directly in native code.
My worry is that this approach won't scale beyond the current team, in which we have two exceptionally talented and curious designers who were provided ample time to learn new programming languages outside of the normal delivery grind. Once designers know how to work this way, I am convinced it produces better results, but I'm wobbly on how we get people to that point. If anyone out there has experience of this, I'd love to hear about it.