When Good Design Breaks
Part 1 of 3: Human-Centered Design in Product Development
This is the first article in a three-part series on human-centered design and how we use it to guide good product development in medical devices. To see the second installment published 12/3/2025, please go here.
A while back I was working at a Fortune 500 MedTech company on a product line that included both implantable devices and the surgical delivery systems used to place them. One of the products that I stepped in to manage was an improved lead introducer system to address problems surgeons were experiencing with existing options on the market. Our design aimed to make the implantation procedure easier and more reliable.
The team was in the final stages of product development, almost at product launch. We’d completed design verification. We’d navigated regulatory requirements, submitted a 510(k), and recently gained clearance. We’d transitioned to mass production and had our initial production run. (That’s the first time the device is made using the full-scale production processes and tooling established for mass manufacturing.) We were running final human factors validation testing for our ISO documentation - the formal testing required to prove that users could operate the device safely and effectively.
That’s when the testing revealed the problem.
Quick note for anyone who is confused as to why this was happening AFTER FDA clearance:
We don’t wait until our final manufactured product is made before we submit to the FDA. Companies generally make an FDA submission with all of their safety data, clinical data, etc. based on usage of a small batch of prototype products, which typically then get retooled for larger-scale production while you wait to hear back from the FDA – since that usually takes a minimum of three quarters. We’re able to do this because retooling or slightly changed IFU wording can be handled without an additional 510(k) or PMA submission; it goes in a Letter to File instead. That’s purely internal documentation that lives in your Design History File as part of your ISO quality system. The FDA only sees it if they come inspect you and review your change control records.
So what was the problem? While surgeons were attempting to use the newly manufactured introducer, a needle could get stuck in a groove of the introducer if it was inserted at a sub-optimal angle. Not necessarily dangerous or a safety issue - the surgeon could dislodge it and try again. But it was annoying, it was happening with well-instructed surgeons, and it happened more than once. This was supposed to be the better version of the device, and we had introduced a new issue!
So here’s the question every MedTech executive eventually faces: Do you release it? In a massive, publicly traded, Fortune 500 company, a lot is riding on hitting the timeline you committed to. How do you decide what to do when something fails human factors testing right before the product launch finish line?
Building the right network of manufacturing partners before you need them matters. MedTechVendors helps medical device teams quickly find, evaluate, and engage specialized suppliers across the product lifecycle. No more endless Googling - connect with qualified CDMOs in minutes. www.MedTechVendors.com
Why Human Factors Testing Catches What Engineering Misses
The FDA requires human factors validation testing under 21 CFR 820.30 (Design Controls) for a critical reason: one-third of medical device incidents involve user error, and more than half of device recalls for design problems involve the user interface.1 These aren’t theoretical risks. They’re patterns that emerge from real-world device failures causing real patient harm.
Human factors testing validates that intended users can operate the device safely and effectively under actual or simulated use conditions.2 The testing must use production-equivalent devices, include at least 15 representatives from each distinct user group, and cover all critical tasks (those that, if performed incorrectly, could lead to harm or compromised medical care).3
When you do good product development, human factors testing is sprinkled in throughout the development stages to double-check that you’re on the right path, and there’s always one last validation at the very end of the development process precisely because it needs to test the final design with real manufacturing tolerances, the real IFU, real packaging, and real users.
We do this because human factors testing catches problems that engineering analysis misses. On paper, our tolerance bands were fine. In CAD, the design worked perfectly. In bench testing, the device performed as intended. But when actual users handled actual manufactured units under realistic conditions, the issue revealed itself.
This is human factors testing doing exactly what it’s supposed to do: catching problems before they reach patients. The challenge is that when it works, you’re staring down a painful reality at the worst possible moment.
When Problems Surface at the Finish Line
We’d already manufactured the entire first batch. The human factors testing was supposed to be a formality. The final validation for ISO documentation before we started shipping to customers. Instead, it revealed a problem that nobody had predicted: tolerance stacking. The groove width was within spec. The needle diameter was within spec. But when manufacturing produced parts at opposite ends of their acceptable ranges, the combination created just enough gap for the needle to catch at certain angles.
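Tolerance stacking like this is easy to miss because each part passes inspection on its own; only the worst-case combination fails. Here’s a minimal sketch of the arithmetic, using made-up numbers (the real specs aren’t mine to share):

```python
# Worst-case tolerance stack: every part is within its own spec, but the
# extremes combine into an unintended clearance. All numbers here are
# hypothetical, purely for illustration.

groove_width = (1.50, 0.05)  # (nominal mm, +/- tolerance) -- assumed values
needle_dia   = (1.40, 0.05)  # (nominal mm, +/- tolerance) -- assumed values

def nominal_gap(groove, needle):
    """Clearance if both parts come out exactly at nominal."""
    return groove[0] - needle[0]

def worst_case_gap(groove, needle):
    """Largest clearance: widest-allowed groove meets thinnest-allowed needle."""
    (g_nom, g_tol), (n_nom, n_tol) = groove, needle
    return (g_nom + g_tol) - (n_nom - n_tol)

print(round(nominal_gap(groove_width, needle_dia), 2))     # 0.1 mm on paper
print(round(worst_case_gap(groove_width, needle_dia), 2))  # 0.2 mm at the extremes
```

Both parts are in spec in both cases; the doubled clearance only appears when the tolerances stack, which is exactly the condition that bench testing at nominal dimensions never sees.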
The manufacturing team hadn’t flagged this up to design because they’d followed the specifications exactly. Everything was within the tolerance bands we’d given them. They had no reason to think there was a problem.
This is the nightmare scenario: you’ve invested millions, made commitments to hospital systems, built financial projections around the launch timeline, and now users are telling you there’s an issue with a device you thought was done.
The pressure to rationalize becomes enormous. “It’s not really a safety issue.” “Surgeons can work around it.” “We can address it in training.” “The next manufacturing run will fix it.” All of these things might be true, but they’re also the exact rationalizations that lead to poor decisions.
Revenue targets don’t care about tolerance stacks, and missing the launch window can have massive implications. So how do you make sure your decision to go back to the drawing board or continue with the launch is properly thought through and not full of bias that could come back to bite you in the end?
Are we making a sound strategic decision to launch with a manageable issue that we can address through training and documentation? Or are we making excuses for releasing a subpar product because we’re terrified of delaying the launch? The first scenario is legitimate business judgment. The second is how you end up in the FDA’s MAUDE database.
It’s hard for teams to tell the difference in the moment because the organizational incentives all point toward launch. The people in the room have been working on this device for years. Their performance reviews depend on hitting milestones. The company’s quarterly numbers depend on this revenue. The pressure to see what you want to see rather than what’s actually there becomes overwhelming.
When Multiple Heads Make Sense
Giant MedTech companies have cross-functional product teams that seem ridiculously huge and at times unbearable. They slow down the process and can cause all sorts of strife when handled incorrectly or operating in poor work environments. But in late-stage crisis moments? All those heads in the room become your greatest asset if you manage it correctly. (And if you have a startup, this isn’t the time to decide on your own! Spend a bit of time and money pulling your experts back into the room to help with this analysis.)
Here’s the thing about cross-functional teams: some people are inclined to see every issue as a crisis. Others are dangerously nonchalant about everything. In a room with 15 people from different functions, you’re guaranteed to have both types.
You need both. The crisis people force you to take the issue seriously and consider worst-case scenarios. The nonchalant people prevent you from overreacting to manageable problems. The key is creating enough psychological safety that people can be intellectually honest about what they’re seeing rather than defending predetermined positions.
The failure mode is when this becomes a finger-pointing exercise (“Why didn’t manufacturing catch this?”) or groupthink takes over (“Everyone agrees we should launch, right?”). The success mode is when you use those diverse perspectives to stress-test your assumptions and force honest risk assessment.
The Framework You Need
The good news is you should have a Standard Operating Procedure (SOP) in place to walk through this assessment. ISO 14971 requires manufacturers to create their own risk management process to identify and control risks, including those from reasonably foreseeable misuse. This process includes analyzing hazards, evaluating risks, and implementing controls, which often involves usability engineering and post-market surveillance to identify and mitigate user-related issues.
The problem is that ISO does NOT provide a standard for evaluating user error, and sometimes we don’t realize that our internally-developed SOP isn’t actually that helpful until we try to use it! You need to make sure that as you go through this analysis you have a way to hear all voices in the room, weigh them not by the loudest person but by a logical matrix that evaluates different aspects, and collect the proper sign-offs from the right experts for the situation. And you should be able to address the following things:
1. Recognizing system signals versus crisis - Is this issue revealing a fundamental design flaw, or is it an edge case that emerged under specific conditions?
2. Structuring the decision space - What are the actual options, and what are the implications of each? This requires input from every function: manufacturing on how quickly tolerances can be adjusted, clinical on whether workarounds are trainable, regulatory on notification requirements.
3. Leading the team psychologically - How do you get people to objectively assess the situation rather than defending their predetermined positions?
4. Designing organizational buffer - How do you build programs that have the flexibility to make these calls without panic?
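To make the “logical matrix” idea concrete: one common shape for it is a weighted scoring matrix, where each function scores each option against agreed criteria and the weights, not the loudest voice, drive the comparison. This is an illustrative sketch only - the criteria, weights, and scores below are invented for the example, not a standard:

```python
# Illustrative weighted decision matrix. Criteria, weights, and scores are
# all hypothetical -- your SOP should define its own.

criteria = {                      # weights sum to 1.0
    "patient_safety_impact":   0.40,
    "workaround_trainability": 0.25,
    "regulatory_exposure":     0.20,
    "launch_timeline":         0.15,
}

# Each option scored 1 (worst) to 5 (best) on each criterion, filled in by
# the relevant experts -- quality, clinical, regulatory, manufacturing.
options = {
    "launch_with_training": {
        "patient_safety_impact": 4, "workaround_trainability": 5,
        "regulatory_exposure": 4, "launch_timeline": 5,
    },
    "delay_and_retool": {
        "patient_safety_impact": 5, "workaround_trainability": 3,
        "regulatory_exposure": 5, "launch_timeline": 1,
    },
}

def weighted_score(scores):
    """Sum of weight * score over all criteria."""
    return sum(criteria[c] * scores[c] for c in criteria)

for name, scores in options.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```

The point isn’t the arithmetic - it’s that writing the weights down before scoring forces the room to argue about what matters rather than about who is loudest, and the filled-in matrix becomes part of your sign-off record.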
[Once again – If you would like to see a complete decision-making framework with detailed criteria and decision tree, check out our paid subscription options.]
How We Actually Made the Call
We spent days in cross-functional reviews. Quality ran risk assessments. Regulatory mapped out our options. Clinical consulted with surgeons about the workaround. Manufacturing calculated how quickly they could adjust tolerances for the next production run.
Here’s what tipped us toward launching:
The issue was annoying, not dangerous. If the needle got stuck, the surgeon would need to dislodge it and reposition. No harm to the patient. No identified compromise to the procedure outcome. Just frustration.
The workaround was genuinely trainable. It required pulling the needle at a specific angle, which surgeons could learn in a single case. We weren’t asking them to develop some elaborate technique. We’re talking about 30 seconds of instruction.
Our sales reps were always in the OR for these procedures. This was standard practice for our implantable devices - you have company reps present to provide technical support. They could ensure proper technique during the learning curve.
The solution: We launched with enhanced training protocols and updated instructions for use. The IFU explicitly detailed the proper needle angle and included troubleshooting guidance if the needle caught in the groove. We documented everything through a Letter to File - the internal change control documentation showing we’d properly assessed the risk.
We immediately started work on the manufacturing adjustments. A few months later, the next production run had the corrected tolerances. The issue never caused a significant problem in the field because we’d properly assessed that it was manageable with training and documentation.
What Customer Discovery Actually Does
I know in this case we were lucky, and I truly believe we weren’t just deceiving ourselves about the level of risk. I’m also lucky that across my career of over 25 product launches and re-launches, I have never had to face a more significant hurdle that close to the finish line.
Human factors testing caught a problem that every other validation step had missed. The design was sound. The engineering analysis was correct. Manufacturing followed specifications. But real users handling real devices under realistic conditions revealed an issue that existed only in the intersection of manufacturing variance and use patterns.
This is why human factors testing isn’t just a regulatory checkbox. It’s the last line of defense against launching products that don’t work the way you hope they do.
You can’t predict all issues, but you can try to make sure that any issues that pop up can be handled with minor edits so that you can keep on track without major impact to the bottom line.
I think that, generally, companies can avoid major hurdles at the finish line if they iteratively use good customer feedback throughout the process. If you skimp on that or get annoyed at the time, money, and effort it takes to do these checks during your product development process, you open yourself up to risk of greater issues in the final human factors testing.
Thank you for reading The Device Files!
Blythe Karow is a strategic management consultant specializing in corporate strategy, product portfolio management, commercialization, and upstream product development for medical device companies. She has previously held roles at Fortune 500 MedTech companies, management consulting companies, startups, and even led her own medical device startup.
Footnotes
1. Carstensen, P. “Human Factors and Medical Devices.” FDA presentations on human factors engineering, 2005. Available at: https://www.fda.gov/medical-devices/human-factors-and-medical-devices
2. 21 CFR 820.30(g) Design Validation. “Design validation shall include testing of production units under actual or simulated use conditions.”
3. FDA. “Applying Human Factors and Usability Engineering to Medical Devices - Guidance for Industry and FDA Staff.” 2016. Available at: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/applying-human-factors-and-usability-engineering-medical-devices







