Patient safety is user-centred design is patient safety
On attempting to develop a conjoined approach
Most designers don’t spend much time thinking about how their app might kill someone. The vast majority of designers work on projects that aim to attract more users or generate revenue. If they’re lucky, they might get a chance to make people’s lives a little better, too. My daily work is very different.
Working on the NHS App, we are constantly confronted with all of the ways in which bad design decisions might bring harm to a user. For instance, if a user ignores a push notification because the content is too vague, they might not turn up to an appointment. This in turn might mean they are excluded from a route to accessing care. On the other hand, if the content of the push notification is too specific and the user is in a coercive relationship, we might put them into a dangerous position. That kind of equation is something the NHS App team is constantly attempting to balance. It is complicated work.
Mirrors
As a methodology, user-centred design is a way to first ensure that we understand what the end user needs and then design whatever is required to provide that. What they need: not what they say they want. Not what we think is cool. Not what our bosses claim is important. What is it that users really need to happen in their life? For the most part, you can’t just ask people directly. User needs are a slippery idea that brings together problems, opportunities, contexts, behaviours, emotions, tasks, and capabilities. They can be derived from research and data, but on their own they don’t tell you what to do next. The synthesis required to figure that out takes time and, often, iteration. Designers don’t always get it right on the first try, so we make sure to measure our progress and then adjust the plan if we need to. Test and learn, as they say.
If you work in health or social care, that might sound familiar, and for good reason: this is also basically what doctors, nurses, and social workers do.
This first became apparent to me when I was working with Hackney Council to develop a case management system for social care. We were (eventually) able to develop an approach in which we embedded social workers in our teams. We came to understand how they approach their job at a very deep level, and what we saw was surprising. The basic approach to doing social care looks something like this:
- Find out about a problem (e.g. a safeguarding concern is raised to the council)
- Investigate the problem (e.g. interview the family)
- Document what you’ve found (e.g. fill out an assessment)
- Develop an idea about what might alleviate the problem (e.g. write a care plan)
- Deploy the idea (e.g. get the family to attend regular counselling)
- Observe the results and check whether the right thing happened (e.g. do a site visit with the family)
- If needed, revise the plan and try again (e.g. complete a reassessment and adjust the care plan)
That list could be from Lean UX, This is Service Design Doing, or any number of other books about design processes. It is the same process. By the time we were six months into the work with Hackney Council, we’d come to the conclusion that social workers do user-centred design.
Slippage
Today, in the teams that are part of the NHS App, we are trying to braid clinical safety and design together. Given that the two disciplines are methodologically aligned, you’d think that this would be easy. It is not.
Some of the challenges are boringly bureaucratic. Embedding clinicians in all of our teams costs money (all staffing choices do), and we can’t always prioritise this level of involvement. We’d need to hire more clinicians, which is challenging when your organisation keeps morphing into new, unpredictable forms. If we don’t have enough staff to embed a clinician into every team full-time, we need to grapple with complicated schedules and competing interests, of which there are many.
Things get trickier when we start trying to align how people speak. Under the hood, I am convinced that the two disciplines intend to accomplish the same thing, but some of the language they each use to describe what they’re doing is different in unhelpful ways. These differences appear in what feels like a semi-random manner, making it hard to track down where the misunderstandings are and identify how to correct them.
Finally, there are questions of methodology. Randomised controlled trials look incredibly heavy-handed and complicated when compared with most user research approaches. Medical research is literal science, whereas user research comes out of anthropology, sociology, and psychology. The seriousness of a double-blind trial can make running usability testing with seven people appear flimsy, and yet we have plenty of data showing that this is enough evidence to make the kinds of decisions that user research about software and services focusses on. Questions of scale and depth aside, the goals of the two approaches are essentially the same – the point is not the research itself, but what you can do with what you find.
The work we’re doing now is about calibration. We’re attempting to help both groups understand where the other is coming from, what each values, and why each is integral to delivering good work. From there, we can figure out how to blend their activities in multi-disciplinary teams.
Standards
Fortunately, the NHS Service Standard already connects the disciplines of user-centred design and clinical safety. Point 16 of the standard is “Make your service clinically safe”. Rather straightforward. The website elaborates on this point:
Clinical risk management is key to creating safe digital services. Work with your clinical safety officer to consider what could go wrong, how serious it could be, and how likely it is, so that you can minimise the risk of harm.
That is the same basic activity any designer would be engaged in for all of their work, except with a clinical perspective. All design work involves evaluating where things might go wrong (“pain points”) or where a user might get a less-than-ideal outcome (“unhappy paths”), and then trying to ensure the design accounts for this and provides the best way around possible issues.
The service standard extends this idea with point 15: “Support a culture of care”. This directive goes beyond a clinically oriented approach to require designers to consider how users feel. The standard describes it as follows:
Digital services must meet the NHS commitment to care and compassion. Small things make a difference when people are, for example, sick or stressed, grieving or dying.
We can improve people’s experience of care by being inclusive and treating them with respect.
Here we connect service design to the emotional dimension of taking care of people. It is an important idea to emphasise, but I struggle with how the notes on making your service clinically safe and supporting a culture of care are separated from the earlier points of the service standard, such as “understand users and their needs in the context of health and care” and “work towards solving a whole problem for users”. In the context of health and care, wouldn’t solving a whole problem that meets real user needs be clinically safe by definition? I suspect that we have made a problem for ourselves by listing the elements that are particular to our work as separate items, rather than rewriting the core elements of the standard.
Incomplete services
The single most challenging aspect of our work on the NHS App stems from the pre-defined limitations of being an app team. We work on a single digital channel, but health experiences unfold over time and typically involve contact with multiple people in different care settings. We need to design for all of the diverse scenarios and unexpected complications that this can bring, but most of the time our team don’t have control of, or influence over, the things that sit outside of our little software bubble. Heck, we don’t even control much of what sits inside our software bubble!
If we are going to develop and enact a design approach that has patient safety at its heart, we need to consider whole people and whole services. We need to understand how social factors affect medical issues. We need to operate across the full spectrum of how care is provided. We need to think systemically and act holistically. In short, we need to do actual service design. This is where we are most constrained right now.
We are forever trying to work with partner teams who own specific domains of health, but there are many gaps and ownership can be mysterious. I don’t think anyone believes this is the correct situation, but changing it has proven extremely difficult. Fortunately, there are some green shoots of a better approach sprouting into view. The grand hope here is that by defining a view of design as being part and parcel of creating a safe environment for patients, we can shape a narrative that changes how the whole system behaves. Wish us luck!
Stakes
It can be scary to talk about all of the ways that the thing you are working on might harm another human being. It can sound hyperbolic to say that we shouldn’t launch a product because people might die, even if the likelihood is vanishingly small. If we were talking about a food delivery app, these would be surprising topics to be talking about at work. However, if you are working on software for health, the possibility of patient death is a very real dimension of your work, every day.
The stakes of the work are high, but that’s the job. I’d worry about anyone doing this job who didn’t have serious doubts about their ability to measure up. But with a high bar comes the possibility of doing great things. We can fuss with how the interface looks, and we can worry about how fast screens load, but ultimately the design principle we care most about is “it does not distress, endanger, re-traumatise, or harm the user”.
Thanks to Lia Ali for reading and commenting on early drafts of this text.