When I joined a health-tech startup as a UX designer, I believed something simple: If a product is intuitive and clearly valuable, people will use it.
From day one, I saw the potential of the web-based app I was hired to design. It could support faster and more accurate diagnosis and treatment for patients with epilepsy. The benefit felt obvious to me, and I assumed that if something clearly improved clinical decision-making, adoption would follow naturally, and fast.
So, I started to design for optimization: fewer clicks, cleaner layouts, faster task completion. I focused on clarity, hierarchy, and flow efficiency. I did my research, defined personas and user stories, created wireframes and high-fidelity prototypes.
Later in product development, regulatory constraints entered the picture. As part of MDR compliance and the FDA 510(k) process, I had to read IEC and ISO standards and integrate them into the design workflow. That’s when my understanding of “traditional UX” started to shift.
Soon, the interface stopped being buttons, menus, and navigation, and turned into a risk control method.
Every hazard-related use scenario had to be documented, along with its potential use errors, severity of harm, root causes, verification of effectiveness, and, at the same time, be traceable to a user story, a requirement, and a validation activity.
If the severity of harm could reasonably fall between two levels, the higher one had to be assigned. If a hazard-related use scenario had more than one risk with different root causes, each had to be documented independently, because each would require a different design iteration as part of the risk mitigation.
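The documentation rules above can be sketched as a small data structure. This is only an illustrative model, not the actual system we used: the severity scale, field names, and IDs are all hypothetical, and real severity levels would come from the project's ISO 14971 risk policy.

```python
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    # Hypothetical severity-of-harm scale; real levels are defined
    # by the organization's risk management policy.
    NEGLIGIBLE = 1
    MINOR = 2
    SERIOUS = 3
    CRITICAL = 4


def assign_severity(candidates: list[Severity]) -> Severity:
    # Rule: if severity could reasonably fall between levels,
    # the higher one must be assigned.
    return max(candidates)


@dataclass
class UseRisk:
    # One risk entry per root cause: a scenario with several root
    # causes gets several entries, since each may require a
    # different design mitigation.
    scenario_id: str       # hazard-related use scenario
    use_error: str
    root_cause: str
    severity: Severity
    user_story_id: str     # traceable to a user story...
    requirement_id: str    # ...a requirement...
    validation_id: str     # ...and a validation activity
    verification: str = "pending"


# Example: a misread value whose harm could plausibly be MINOR or SERIOUS.
risk = UseRisk(
    scenario_id="HRS-012",
    use_error="Clinician misreads a highlighted value under time pressure",
    root_cause="Ambiguous color coding",
    severity=assign_severity([Severity.MINOR, Severity.SERIOUS]),
    user_story_id="US-34",
    requirement_id="REQ-101",
    validation_id="VAL-17",
)
assert risk.severity is Severity.SERIOUS  # the higher level wins
```

The point of the structure is traceability: every entry links a use error back to a user story, a requirement, and a validation activity, so the record holds up under audit, not just in daily use.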
It wasn’t enough that the system worked; it had to hold up under scrutiny. That was the moment I realized the design wasn’t about making a beautiful user interface – it was about preventing real harm to real people.
The system’s main feature was a dynamic interface designed as a decision-support tool for the clinician. It provided detailed, interactive information intended to help them make faster and more precise treatment decisions. It allowed deeper inspection and supported nuance.
Alongside it, we provided a report, which was nothing more than a concise summary in electronic format. The report existed largely because many hospital environments relied on established workflows where printed documentation played a central role.
In my head, the report was a supplementary feature.
I received feedback that shed light on a disturbing reality: in one hospital, staff would print the report, scan it, and upload it into their internal system so doctors could access it there. There were surely operational reasons behind this behavior, but it revealed something important: integration into existing workflows often outweighs design elegance, even when the value of the design is as clear and beneficial as it can be.
Usability testing later confirmed something even more unsettling. Some clinicians relied solely on the report to make a diagnosis instead of interacting with the dynamic tool. It was not because the interface was confusing or because they couldn’t complete the task. It was because their own habits kept them from following the intended use without realizing it.
The real risk wasn’t a dysfunctional interface. It was unpredictability.
I had assumed that if the tool was intuitive enough and clearly beneficial, clinicians would explore it, adopt it, and trust it immediately. I underestimated how deeply embedded workflows shape decision-making.
From that point onward, my relationship with risk changed.
I began to see potential misuse everywhere. Not because the interface was complex, but because I could no longer assume it would be used as intended. I started mentally simulating edge cases. What happens if the terminology is not clear? What if they misinterpret a highlighted value? What if time pressure shortens the interaction path?
When potential harm enters the equation, priorities change.
Naturally, my risk sensitivity increased dramatically. I forced myself to slow down and review user flows not just for efficiency, but for interpretive ambiguity. At one point, that heightened awareness bordered on paranoia, because the possibility of harm felt immediate and personal. It took me time to recalibrate.
What helped me was evidence: structured usability tests, direct discussions with clinicians, and observing real interactions rather than imagined ones. I learned to distinguish plausible harm from theoretical harm. I started to ask myself: is this truly a scenario that can happen in real life?
To address the gap between the dynamic interface and the report, there was a proposal to align them more closely in terms of information structure. If clinicians were going to rely on the summary, then the summary needed to reflect the intended interpretive path of the interactive tool.
The idea never became a priority because other features carried higher urgency, as they often do in a fast-paced startup.
Risk analysis was performed and training of the clinicians was selected as the primary mitigation strategy. The residual risk was considered acceptable and the matter was closed.
I understood the trade-off. Organizations must balance solutions with operational realities. But something fundamental had changed in me.
I used to design for what I thought was “proper use”. I assumed curiosity, exploration, and adoption were enough to create a useful and desirable product. Now, after all these years in health-tech, I design for any foreseeable deviation.
Where usability asks, “Can the user complete the task?”, human factors asks, “What are the consequences if they don’t?”
The hardest risk to design for isn’t confusion. It’s unpredictability.
Once you recognize that behavior does not always align with intention, you stop designing for the “ideal” user and start designing for habits, constraints, institutional workflows, and human systems that are rarely clean, but entirely real.
This article is part of the series I’m calling Designing in the Unknown, where I reflect on the realities of designing in a complex, high-stakes environment. In the series, I unpack moments, decisions, and tensions that shaped my work over time in the health-tech domain.