Usability engineering is one of the most commonly cited sources of deficiencies in FDA 510(k) submissions. Teams test their device, observe errors, and file the results — without the documented iterative process, risk-linked scenarios, and complete Usability Engineering File that regulators actually require. The test report is the last page of the story, not the story itself.
This guide covers what IEC 62366-1:2015 (including the 2020 amendment) actually requires — and where it differs from FDA Human Factors guidance — so you can build a compliant process rather than a compliant-looking test report.
Does IEC 62366 apply to your device?
IEC 62366-1 applies to any medical device with a user interface. The standard’s scope is intentionally broad: if a person interacts with the device to achieve its intended use, usability engineering applies. That includes physical interfaces (buttons, dials, displays), software user interfaces (screens, menus, alerts), and companion mobile applications that form part of the device system.
What about IEC 62366-1 vs. IEC 62366-2?
IEC 62366-1:2015 is the normative standard — the document that notified bodies assess and that submissions reference. IEC/TR 62366-2:2016 is a Technical Report: informative guidance with worked examples and supplementary methods, but no requirements. Part 2 is a useful resource for implementation. It is not what your submission is assessed against. Any claim that “Part 2 specifies your testing protocol” misreads the document structure.
What changed with Amendment 1:2020?
Amendment 1 (published 2020) updated the ISO 14971 reference to the 2019 edition and introduced several substantive changes. The most significant:
Bidirectional exchange with ISO 14971. Before the amendment, information flowed one direction — from usability engineering into risk management. The 2020 version requires bidirectional exchange: usability engineering inputs to risk management, and risk management outputs back to usability engineering. Parallel-but-separate processes now fail audit.
“Use difficulty” added alongside use errors. The amendment requires capturing not only actual use errors but also “use difficulties” — close calls where a problem was observed but no use error was committed. These near-misses indicate latent design weaknesses that may produce errors in actual use. If your observation protocol only records completed errors, it does not satisfy the post-2020 requirement.
“Physical mismatch” replaces “action error.” The 2020 amendment replaced “action error” with “physical mismatch” to encourage analysis of the full range of use problems, not just unintended actions.
Training as a third-priority risk control. The amendment added training as a recognized risk control measure (alongside design and information for safety), but explicitly at third priority. Training cannot be the first or only mitigation for a known hazard-related use scenario.
Usability engineering vs. UX design
Usability engineering under IEC 62366-1 is a safety process, not a design improvement process. The standard’s scope is identifying, analyzing, and eliminating or reducing use-related hazards — not making the device easier or more pleasant to use (though those outcomes often follow).
The distinction matters for audit. A UX portfolio showing iterative design improvements and user satisfaction scores does not satisfy IEC 62366-1. An auditor reviewing your Usability Engineering File is looking for documented hazard scenarios, evidence of systematic evaluation against those scenarios, and traceability from observed use errors back to risk management decisions. If what you have is a design history and a test report, you have a gap.
The clearest statement of this distinction: human error is not the root cause of harm — it is the consequence of a design that fails to account for the user’s real capabilities and limitations. That framing drives everything in IEC 62366-1. The question the standard asks is not “can users operate this device?” but “what happens when they interact with it in ways the design team didn’t anticipate?”
Use error vs. abnormal use — why the distinction matters
IEC 62366-1 distinguishes between use errors and abnormal use. A use error is a user action or omission that differs from what the manufacturer expected and that did or could cause harm — regardless of whether the user meant to take that action. The user tries to do the right thing; the interface leads them wrong.
Abnormal use is a conscious decision by the user to take an action outside the intended use or contrary to instructions. The user knows they are deviating. Manufacturers are expected to design against foreseeable use errors. They are not expected to design against all forms of abnormal use, though obvious abnormal use patterns should be documented in the risk management file.
This matters practically: if a patient overrides a safety interlock by pressing the same button three times in quick succession, that is likely abnormal use. If a clinician misreads a displayed value because the font weight and background color are indistinguishable under OR lighting, that is a use error. The first may fall outside the design team’s responsibility; the second is their gap to fix.
Defining intended users and use environment (Clause 5.1)
Clause 5.1 requires a Use Specification describing the intended users, intended use environment, and user interface characteristics relevant to safety. This document is the foundation of the entire usability engineering process — everything downstream depends on how clearly you define who your users are and where they operate.
What level of user-profile specificity does FDA expect?
“Healthcare practitioners” or “clinicians” are not user profiles — they are market segments. FDA reviewers frequently flag Refuse-to-Accept (RTA) deficiencies when user profiles are defined at that level of generality. Your Use Specification needs physical and cognitive characteristics:
Before (not acceptable): “Intended users are healthcare professionals with medical device training.”
After (acceptable): “Intended users are ICU nurses in US acute-care settings with at least two years of clinical experience. Users operate under high cognitive load and frequent interruptions. A subset of users may have visual acuity limitations; the device interface must be legible under variable OR lighting conditions. Users may not have received device-specific training prior to first use — initial use without training should be considered a foreseeable scenario.”
The 2020 amendment expanded the environment characterization to include social factors: who else is in the room, what time pressure exists, what cognitive demands exist from parallel tasks. For SaMD, document the hardware environment (device type, screen size, connectivity), the clinical workflow context (interruptions, time pressure, co-use with EHR or other systems), and any scenarios where the software is used simultaneously with other devices.
Multiple user groups and what it means for testing
If your device has genuinely distinct user groups — for example, the clinician who programs a device and the patient who operates it at home — each group requires its own section of the Use Specification, its own set of hazard-related use scenarios, and separate summative evaluation coverage. FDA human factors guidance calls for a minimum of 15 participants per distinct user group. Two user groups means 30 participants minimum. Budget for this before your summative evaluation timeline.
Hazard-related use scenarios (Clause 5.4)
A hazard-related use scenario describes a situation in which an interaction with the user interface could lead to a hazardous situation and potential harm. It is not a feature story or a test case — it is a description of the dangerous version of an interaction.
Hazard-related use scenarios vs. user stories
Agile SaMD teams frequently conflate hazard-related use scenarios with user stories. They are structurally opposite:
User story (not what IEC 62366 requires): “As a clinician, I want to see my patient’s blood glucose reading so I can make a treatment decision.” This describes the intended interaction — the happy path.
Hazard-related use scenario (what IEC 62366 requires): “A clinician operating under time pressure misreads a displayed value of 150 mg/dL as 50 mg/dL due to font rendering on a low-brightness screen in a high-ambient-light environment. The clinician administers insulin appropriate for a hypoglycemic reading, causing hypoglycemic harm.” This describes the worst plausible interaction — the failure path.
A practical generation method: for each user task, ask “What is the most dangerous way a user could complete — or fail to complete — this task?” That question is the unit of scenario generation.
Linking scenarios to the ISO 14971 Risk Table
Every hazard-related use scenario must be linked to a corresponding entry in your ISO 14971 Risk Table. The 2020 amendment made this requirement bidirectional: not only do usability scenarios feed into the risk table, but new hazards identified in risk management analysis must feed back into your usability engineering process to ensure coverage. An auditor will trace a scenario from the Hazard-Related Use Scenarios list to a Risk Table row and then to a summative evaluation test case. If that chain is broken at any point, it is a gap.
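The audit trace described above can be checked mechanically before a reviewer ever sees the file. A minimal sketch — the IDs, field names, and three collections are invented illustrations, not a prescribed format:

```python
# Hypothetical illustration: verify the scenario -> Risk Table -> summative
# test case chain is unbroken. All IDs and data structures are invented.

scenarios = {
    "HRS-001": {"risk_id": "RT-014", "test_case": "SUM-TC-07"},
    "HRS-002": {"risk_id": "RT-015", "test_case": None},  # broken link
}
risk_table = {"RT-014", "RT-015"}
summative_test_cases = {"SUM-TC-07"}

def trace_gaps(scenarios, risk_table, test_cases):
    """Return scenario IDs whose Risk Table or test-case link is missing."""
    gaps = []
    for sid, links in scenarios.items():
        if links["risk_id"] not in risk_table:
            gaps.append((sid, "no Risk Table entry"))
        if links["test_case"] not in test_cases:
            gaps.append((sid, "no summative test case"))
    return gaps

print(trace_gaps(scenarios, risk_table, summative_test_cases))
# HRS-002 surfaces because its summative test-case link is missing
```

Running a check like this before finalizing the Usability Engineering File surfaces exactly the broken chains an auditor would find.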
Both single-use errors and sequences of use errors leading to harm must be covered. Cognitive errors and misinterpretation of displayed information are common sources of missed scenarios — teams identify obvious physical errors (wrong button, wrong dose entry) but miss cognitive failures like negative transfer (a user experienced with a previous device version applies the old interaction model to the new interface with dangerous results).
Capturing use difficulties (post-2020 requirement)
The 2020 amendment introduced the concept of “use difficulty” — a situation where a user experienced a problem during interaction but no use error occurred. Think of these as near-misses: the user hesitated, backtracked, expressed confusion, or recovered from a near-error. These are not errors to record in the pass/fail column, but they are signals of latent design weakness. Your observation protocol must be structured to capture them. If your data collection only records completed errors and task outcomes, you are missing the post-2020 requirement.
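One way to make near-misses first-class data in the session log is to record difficulties in their own field rather than forcing every observation into a pass/fail outcome. A sketch under assumed field names and an assumed difficulty taxonomy — nothing here is mandated by the standard:

```python
# Sketch of an observation record that captures use difficulties
# (hesitation, backtracking, expressed confusion) alongside use errors.
# Field names and categories are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Observation:
    task_id: str
    outcome: str  # "success", "use_error", or "failure"
    use_difficulties: list = field(default_factory=list)  # near-misses

session = [
    Observation("T1-dose-entry", "success",
                use_difficulties=["hesitated at unit selector"]),
    Observation("T2-alarm-ack", "use_error"),
]

errors = [o for o in session if o.outcome == "use_error"]
difficulties = [d for o in session for d in o.use_difficulties]

# A protocol that only counted completed errors would report one finding;
# counting difficulties as well surfaces two signals from this session.
print(len(errors), len(difficulties))
```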
Formative evaluation during development (Clauses 5.7 and 5.8)
Formative evaluation is conducted during development with the intent to identify weaknesses in the user interface design and drive improvements — not to validate the final design. IEC 62366-1 does not prescribe a single method; common approaches include expert heuristic reviews, cognitive walkthroughs, think-aloud sessions with representative users, and contextual interviews.
What formative evaluation does and doesn’t prove
Formative evaluation demonstrates that your design improved over the development cycle. It does not demonstrate that the final design is safe to use — that is the job of summative evaluation. The risk of confusing the two is real: teams that run multiple rounds of formative evaluation sometimes conclude that validation is complete. It is not. A device that has been iterated extensively based on formative feedback still requires summative evaluation against the final device before commercial release.
Clause 5.7 requires a Usability Evaluation Plan before evaluations begin. The plan should schedule formative evaluation checkpoints at defined development milestones — prototype, alpha, beta — with the method and intended outcomes for each. Formative findings drive UI improvements before summative evaluation. An undocumented round of user feedback sessions that led to UI changes is not formative evaluation for compliance purposes.
When can summative scope be reduced?
Clause 5.9 permits manufacturers to use data from summative evaluations of products with an equivalent user interface to reduce summative evaluation scope, but only with documented technical rationale for how the prior data applies to the current device. The rationale must be specific — not “the device is similar to the previous version” but “Task X was validated in the prior summative evaluation; the only change to the UI for Task X is a label text update; the task structure, workflow, and safety-critical elements are unchanged.” Without that specificity, summative evaluation for the affected tasks is required.
Summative evaluation — the step you cannot skip (Clause 5.9)
Summative evaluation is conducted on the final or near-final device to confirm that intended users can operate it safely and effectively without correction. It is not a usability test — it is a design validation activity. Every hazard-related use scenario identified in Clause 5.4 must be covered by a summative evaluation test case.
Sample size: what FDA actually requires
FDA human factors guidance (Appendix B) cites 15 as “a practical minimum number of participants for human factors validation testing,” citing research (Faulkner 2003) indicating that 15 users detect a minimum of 90% and an average of 97% of known usability problems. Critically, that 15-participant floor is per distinct user group. If your device has two user groups, you need a minimum of 15 from each.
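The diminishing returns behind that floor can be illustrated with the common binomial simplification — an assumption on my part here, since FDA's Appendix B figures come from Faulkner's empirical resampling study, not from this formula: the chance that at least one of n participants encounters a problem with per-user detection probability p is 1 − (1 − p)^n.

```python
# Binomial detection model (a simplification, not FDA's methodology):
# probability that at least one of n participants hits a usability
# problem that each user has probability p of encountering.

def detection_chance(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# For a problem with a 25% per-user detection rate, watch the curve
# flatten as participants are added.
for n in (5, 10, 15, 20):
    print(n, round(detection_chance(0.25, n), 3))
```

Under this model a 25%-detectable problem is found with roughly 99% probability at 15 participants, which is consistent in spirit with the guidance's rationale, though the guidance itself rests on empirical data.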
The same guidance is explicit that human factors validation testing is “primarily a qualitative rather than a quantitative exercise.” Use errors are recorded, but the purpose is not to quantify the frequency of a particular use error against a statistical acceptance criterion. The goal is to observe what problems exist and analyze their safety implications — not to achieve a pass rate.
Use-Related Risk Analysis (URRA) — what FDA expects
FDA expects a Use-Related Risk Analysis in a specific format: use errors mapped to hazardous situations mapped to harms. Raw FMEA output — structured around failure modes and effects for component failures — does not satisfy this expectation. The URRA uses a different lens: for each use error observed or foreseeable, what is the specific patient harm pathway?
A minimal URRA row contains: the use error or foreseeable use difficulty, the hazardous situation it creates, the potential harm, the probability and severity (feeding to the ISO 14971 Risk Table), and the risk control measure applied. Use errors observed during summative evaluation that were not pre-identified in the URRA must be added and analyzed before the Usability Evaluation Report is finalized.
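The minimal URRA row just described can be expressed as a simple record type. A sketch — the field names and example content are illustrative assumptions, not an FDA template:

```python
# Sketch of a minimal URRA row per the description above.
# Field names and example content are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class URRARow:
    use_error: str            # observed or foreseeable use error/difficulty
    hazardous_situation: str  # the dangerous state it creates
    harm: str                 # the specific patient harm pathway
    severity: str             # feeds the ISO 14971 Risk Table
    probability: str
    risk_control: str         # design / information for safety / training

row = URRARow(
    use_error="Clinician misreads 150 mg/dL as 50 mg/dL on dim display",
    hazardous_situation="Insulin dosed against a falsely low reading",
    harm="Severe hypoglycemia",
    severity="Critical",
    probability="Occasional",
    risk_control="Design: high-contrast numeric display",
)
print(row.harm)
```

Note the lens: the row starts from a use error and traces forward to harm, which is exactly what raw component-failure FMEA output does not give you.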
Root-cause analysis structure: two complementary frameworks
When a use error occurs during summative evaluation, the test data needs to be analyzed to determine how the interaction led to the error. FDA Appendix C organizes this analysis around four factors: device design, use environment, user characteristics, and instructions for use. For each observed error resulting in potential serious harm, document which of these factors was the primary driver.
A complementary framework for the same analysis is the Perception-Cognition-Action (PCA) model: did the user fail to perceive the relevant information (display too small, contrast too low, alarm masked by noise)? Did they perceive it but fail to understand it (ambiguous label, unfamiliar symbol, unexpected mapping)? Or did they understand but fail to act correctly (physical mismatch, control too stiff, confirmation step too fast)? Answering the PCA question for each error gives you the design improvement target. The FDA four-factor structure and PCA are parallel frameworks, not alternatives — applying both produces a richer root-cause analysis.
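Tagging each observed error with its PCA stage points directly at the class of design change needed. A sketch — the enum values and the mapping to design targets are my illustrative assumptions, not language from either framework:

```python
# Sketch: classifying an observed use error by PCA stage to locate
# the design fix. Enum values and mappings are illustrative assumptions.
from enum import Enum

class PCAStage(Enum):
    PERCEPTION = "user never received the information"
    COGNITION = "user received it but misunderstood it"
    ACTION = "user understood but could not act correctly"

def design_target(stage: PCAStage) -> str:
    # Each failure stage implies a different class of UI change.
    return {
        PCAStage.PERCEPTION: "salience: size, contrast, alarm audibility",
        PCAStage.COGNITION: "meaning: labels, symbols, mappings",
        PCAStage.ACTION: "mechanics: control force, timing, reach",
    }[stage]

print(design_target(PCAStage.COGNITION))
```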
What happens when you find errors during summative evaluation?
Any use error observed during summative evaluation must be assessed for risk impact: does it represent a new hazard not in the Risk Table? Does it indicate an existing risk control is ineffective? If either is true, update the Risk Table before finalizing the Usability Evaluation Report. Post-summative UI changes require either a new summative evaluation or documented rationale for why the change does not affect validated critical tasks. There is no shortcut here — a summative evaluation that triggers zero Risk Table updates, with no explanation of why, is itself an audit signal.
If your product fails summative evaluation, the evaluation becomes a formative round — the finding drives a design change, and a new summative evaluation is required on the revised design. This is not a failure of the process; it is the process working. Teams that reschedule around a failed summative evaluation to avoid the additional cycle create the risk of shipping a device with unresolved use-related hazards.
Building the Usability Engineering File (Clause 4.2)
Clause 4.2 requires a Usability Engineering File — a complete, traceable record of all usability engineering activities and outputs. Three documents form the core:
Usability Evaluation Plan. Describes the evaluation approach, scope, participant criteria, and schedule for both formative and summative activities. The plan must exist before evaluations begin. Include a traceability matrix confirming that all hazard-related use scenarios are covered by summative test cases — build this before you start recruiting, not after.
Hazard-Related Use Scenarios list. A standalone document listing each scenario with its description (user, action, environment, hazardous situation, potential harm), the linked Risk Table entry, and the summative test case covering it. This is the cross-reference document that makes the Usability Engineering File traceable.
Usability Evaluation Report. Captures the results of all formative and summative evaluations: participant demographics, test conditions, observed use errors and use difficulties, root-cause analysis for errors with safety implications, Risk Table updates triggered, and the conclusion on whether the device can be used safely and effectively by intended users.
What FDA’s HFE/UE Report structure requires
FDA Appendix A (Table A-1 of the Human Factors guidance) specifies an 8-section structure for the HFE/UE Report submitted with a 510(k): Conclusion; Intended users, uses, environments, and training; Device UI description; Known use problems; Hazards and risks analysis; Summary of preliminary analyses; Critical task identification and categorization; and Validation testing details. Your Usability Evaluation Report must cover all eight sections to be submission-ready. A document that covers only testing details without the critical-task identification and hazards analysis sections is incomplete.
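A draft report can be screened against those eight sections before submission. A sketch — section titles follow the list above; the draft contents are an invented example:

```python
# Sketch: checking a draft HFE/UE Report against the 8 sections of
# FDA Table A-1 (titles paraphrased from the guidance text above).
# The draft set is an invented example.
REQUIRED_SECTIONS = [
    "Conclusion",
    "Intended users, uses, environments, and training",
    "Device UI description",
    "Known use problems",
    "Hazards and risks analysis",
    "Summary of preliminary analyses",
    "Critical task identification and categorization",
    "Validation testing details",
]

# A "testing report only" submission covers just two of the eight.
draft = {"Conclusion", "Validation testing details"}

missing = [s for s in REQUIRED_SECTIONS if s not in draft]
print(f"{len(missing)} sections missing")
```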
A common filing error is submitting only the summative testing report. FDA expects the full Usability Engineering File — including the Use-Related Risk Analysis, formative evaluation records, and the iterative design process that led to the final design. The test report is evidence that the process concluded. It is not the process.
Mapping IEC 62366 to ISO 14971, IEC 62304, and ISO 13485
IEC 62366-1 does not operate in isolation. Each clause cross-references the broader quality system, and auditors trace those connections during review. The table below maps six core IEC 62366-1 clauses against their ISO 13485 and FDA QMSR counterparts.
| IEC 62366-1 CLAUSE | REQUIREMENT | ISO 13485 / FDA QMSR |
|---|---|---|
| 4.1 (Usability Engineering Process) | Documented UE process integrated into device development; UI risk controls linked to risk management file | ISO 13485 §7.3.2 / FDA QMSR §820.30(b) |
| 4.2 (Usability Engineering File) | UE File collecting all outputs: Plan, Hazard-Related Use Scenarios list, Evaluation Report | ISO 13485 §4.2.4 / FDA QMSR §820.40 |
| 5.1 (Use Specification) | Documented intended users, intended use environment, and safety-relevant UI characteristics | ISO 13485 §7.3.3 / FDA QMSR §820.30(c) / ISO 14971 §5.2 |
| 5.4 (Identify and describe Hazard-Related Use Scenarios) | UI hazards and hazardous situations from use errors and environment; linked to ISO 14971 Risk Table | ISO 13485 §7.1 / FDA QMSR §820.30(g) / ISO 14971 §5.4 |
| 5.7 (Usability Evaluation Planning) | Documented plan for formative and summative evaluations; all hazard-related use scenarios covered by test cases | ISO 13485 §7.3.6 / FDA QMSR §820.30(f) |
| 5.9 (Summative Usability Evaluation) | Summative evaluation on final/near-final device; participants match use specification; results documented in Usability Evaluation Report | ISO 13485 §7.3.7 / FDA QMSR §820.30(g) |
The ISO 14971 integration point
The 2020 amendment made the ISO 14971 integration bidirectional. Before the amendment, hazard-related use scenarios fed into the Risk Table — one direction. The post-2020 requirement is that risk management outputs also feed back into usability engineering: new hazards identified through risk analysis must be checked against the Hazard-Related Use Scenarios list to ensure coverage. If your risk management team identifies a hazard that isn’t represented in the usability scenarios, that gap must be resolved.
In practice, this means your usability engineering lead and your risk management lead need a defined interface and a regular touchpoint during development. Two separate documents with no traceability between them is the most common form of this gap.
The IEC 62304 integration point
IEC 62304 governs the software development lifecycle, including software requirements and software verification. IEC 62366-1 Clause 5.6 requires a User Interface Specification — this document is also a software requirements input under IEC 62304. Formative evaluation findings that identify UI design deficiencies may generate IEC 62304 anomaly reports. Your software development SOP should reference both standards and define how usability findings flow into the software anomaly management process.
FDA Human Factors guidance — where it differs from IEC 62366-1
FDA’s Applying Human Factors and Usability Engineering to Medical Devices (February 3, 2016) and IEC 62366-1 cover the same domain but differ in terminology, methodology, and geographic scope. The table below maps the key terms:
| FDA TERM | IEC 62366-1 TERM | NOTE |
|---|---|---|
| Human Factors Engineering (HFE) | Usability Engineering (UE) | FDA footnote: 'the two terms are considered interchangeable for purposes of this document' |
| Critical Task | Hazard-Related Use Scenario | FDA critical task: a task that, if performed incorrectly or not at all, could cause serious harm |
| Human Factors Validation Testing | Summative Evaluation | Both are confirmatory; FDA explicitly calls this 'one portion of design validation' |
| Use Error | Use Error | Same term; FDA definition §3.9 matches IEC 62366-1 usage |
| HFE/UE Report | Usability Evaluation Report (part of UE File) | FDA Appendix A specifies an 8-section report structure for 510(k) submissions |
Three areas where FDA guidance goes further
US-user recruitment requirement. FDA guidance states that professional users must work within the US healthcare system relevant to the intended use. Recruiting internationally — or with users who have no clinical workflow experience comparable to US practice — produces results that FDA reviewers will challenge. IEC 62366-1 is less geographically prescriptive.
Critical task methodology. FDA guidance (Sections 6.1 and 6.3) expects specific methods for identifying critical tasks: FMEA, fault tree analysis, and task analysis. IEC 62366-1 accepts a wider range of methods for hazard-related use scenario identification. For US 510(k) submissions, document the specific method used to identify critical tasks, not just the resulting list.
HFE/UE Report structure. FDA Appendix A specifies an 8-section report structure. IEC 62366-1 defines the content of the Usability Engineering File but does not prescribe a specific report format. For 510(k) submissions, structure your Usability Evaluation Report to the FDA 8-section template regardless of what IEC 62366-1 requires, to avoid deficiency letters on format.
User Interface of Unknown Provenance (UOUP, Clause 5.10)
Clause 5.10 of IEC 62366-1:2015 addresses User Interfaces of Unknown Provenance — defined in the standard as a user interface or part of a user interface of a medical device previously developed for which adequate records of the usability engineering process are not available. The key word is “records”: it is not about the UI’s age or origin but about whether documented usability engineering evidence exists.
When a UI qualifies as UOUP, Clause 5.10 permits manufacturers to substitute the full Clauses 5.1 through 5.9 process with the streamlined evaluation path described in normative Annex C. This is not an exemption from usability engineering — it is an alternative evaluation method with its own requirements.
What qualifies as UOUP?
The practical question is whether adequate usability engineering records exist for the UI under evaluation. Common SaMD scenarios where UOUP status arises:
Third-party component libraries and UI frameworks. A commercial charting library or data-grid component embedded in your SaMD may have been developed without IEC 62366 process documentation. If users interact directly with that component to perform safety-critical tasks, it may qualify as UOUP.
Licensed or acquired UI modules. UI code licensed from another company — a workflow screen, a dosage calculator, a sensor visualization widget — may carry no usability engineering file. If the licensor cannot supply documented evaluation evidence, the component is UOUP.
Legacy internal UI elements. An internal UI module developed before your organization adopted IEC 62366-compliant practices may have no process records. If the records cannot be reconstructed, UOUP status applies.
The Annex C evaluation path
Normative Annex C of IEC 62366-1:2015 defines a focused five-requirement evaluation path for UOUP:
C.2.1 — Use Specification. Document the intended users, intended use environment, and the use tasks performed via the UOUP element.
C.2.2 — Post-Production review. Review existing post-market data — complaint records, adverse event reports, known use problems — related to the same or similar UI in prior or comparable uses.
C.2.3 — Hazards and Hazardous Situations. Identify foreseeable hazards and hazardous situations arising from use of the UOUP element, drawing on the use specification and post-production data.
C.2.4 — Risk Control. Implement and verify risk controls for identified hazards, and ensure the UOUP element meets the use specification under the intended use conditions.
C.2.5 — Residual Risk evaluation. Evaluate residual risks for acceptability per ISO 14971 before incorporating the UOUP element into the device.
The Annex C path is substantially narrower than the full Clauses 5.1–5.9 process, but it still requires documented outputs for each step. “We reviewed it and it looked fine” is not an Annex C evaluation.
The modification boundary — when UOUP treatment ends
Clause 5.10 and Annex C contain an explicit boundary condition: if any modifications are made to the user interface or its parts, only the unchanged parts remain UOUP. The changed parts are subject to the full Clauses 5.1 through 5.9 evaluation process. This matters in practice — SaMD teams that customize a licensed UI component (reskin it, add controls, change interaction flows) may inadvertently exit UOUP treatment for the modified elements and must apply the full usability engineering process to those changes.
UOUP and IEC 62304 SOUP — a parallel concept
UOUP is structurally parallel to Software of Unknown Provenance (SOUP) under IEC 62304. Both concepts address the integration of externally developed components where the original development records are unavailable. Where IEC 62304 SOUP triggers functional verification requirements at the software level, UOUP triggers the Annex C usability evaluation at the user interface level. For SaMD that incorporates both SOUP libraries and their associated UI components, both evaluations are typically required and should be coordinated — the same third-party charting library may generate IEC 62304 SOUP anomaly tracking and IEC 62366 UOUP risk documentation simultaneously.
Frequently asked questions
- Does IEC 62366 apply to a mobile app that connects to a medical device?
- Yes. IEC 62366-1 applies to any user interface that is part of a medical device, including software user interfaces and companion apps that are considered part of the device system. If your mobile app is the primary means by which users interact with the device — initiating measurements, reading results, or acting on clinical outputs — then the app's UI is in scope. For SaMD (Software as a Medical Device), the software itself is the device, and usability engineering applies in full. Document your reasoning in your Intended Use and Use Specification.
- Can formative evaluation replace summative evaluation?
- No. Formative evaluation is developmental — it tells you where your design is weak during development so you can fix it. Summative evaluation is confirmatory — it demonstrates that the final device can be used safely and effectively by intended users without correction. IEC 62366-1 requires both. The standard recognizes one narrow path to reducing summative scope: Clause 5.9 permits use of data obtained from summative evaluations of products with an equivalent user interface, together with a documented technical rationale for how the prior data applies to the current device. The rationale must be specific to the tasks and user-interface elements you are claiming equivalence on; 'we did a lot of formative testing' is not a substitute.
- What counts as a 'representative user' for summative evaluation?
- Your participants must match the intended user profile defined in your Use Specification — not the user who is easiest to recruit. FDA human factors guidance is specific: professional users must work within the healthcare system relevant to your intended use. If your device targets ICU nurses in US acute-care settings, testing with German nursing students or your own employees produces results FDA will challenge. Physical and cognitive characteristics matter: if your Use Specification describes users with high cognitive load and time pressure, your test should simulate those conditions. Testing immediately after training also inflates performance — consider a 24–48 hour washout between training and the test session to better simulate real-world conditions.
- We tested with employees. Is that acceptable?
- Generally no, and the FDA is explicit about it. Employees familiar with the device have knowledge and opinions that make their performance 'misleading or incomplete' as evidence for intended-user behavior. The only acceptable exception is if employees truly represent the intended user population — for example, a device designed for use by lab technicians, tested with lab technicians at a different facility who have not been involved in device development. Document your recruitment rationale and have it reviewed before your summative evaluation, not after.
- How does IEC 62366 interact with IEC 62304 for software devices?
- IEC 62304 governs the software development lifecycle — architecture, unit tests, verification, and anomaly resolution. IEC 62366 governs the user interface design and evaluation process. They overlap at the user interface software layer: Clause 5.6 of IEC 62366 requires a User Interface Specification, and IEC 62304 requires that software requirements cover the user interface. In practice, your software development SOP should reference both standards: 62304 drives technical implementation and testing; 62366 drives hazard-based evaluation of whether users can operate the software safely. Usability findings from formative evaluation may generate IEC 62304 anomaly reports for UI defects.
- How do you handle usability engineering for AI/ML outputs?
- AI and ML create use-error categories that don't appear in traditional hardware HF assessments: automation bias (users over-trust a confident prediction without checking the input), alert fatigue (users stop responding to warnings that fire too often), and silent failure modes (the model degrades or produces out-of-distribution outputs without visual indication). Your Hazard-Related Use Scenario analysis under Clause 5.4 should include these explicitly. For each AI output your users act on, ask: 'What is the most dangerous way a user could misinterpret this output?' That question is the unit of scenario generation for AI interfaces.
- What is the difference between Part 1 and Part 2 of IEC 62366?
- IEC 62366-1:2015 (with Amendment 1:2020) is the normative standard — the requirements document that notified bodies assess and that regulatory submissions reference. IEC/TR 62366-2:2016 is a Technical Report, meaning it is informative guidance only, not requirements. TR 62366-2 contains worked examples of hazard-related use scenarios, methods for formative evaluation, and supplementary implementation guidance. It is a useful practical reference but does not impose requirements. Any claim that 'IEC 62366-2 specifies the testing protocol' misreads the document structure. When your submission references IEC 62366 compliance, it references Part 1.
- What is the minimum number of summative evaluation participants?
- FDA human factors guidance specifies 15 participants as a practical minimum number — but that is 15 per distinct user group. If your device has two user groups (for example, surgeons and operating room nurses), FDA expects a minimum of 15 from each group, 30 total. Research cited in FDA Appendix B (Faulkner 2003) indicates that 15 participants detect a minimum of 90% and an average of 97% of known usability problems. IEC 62366-1 does not state a specific minimum — it requires 'representative and sufficient' participants. For FDA 510(k) submissions, 15 per user group is the floor you need to be able to justify.
Further reading
Regulatory guidance
FDA Applying Human Factors and Usability Engineering to Medical Devices (February 3, 2016; Docket FDA-2011-D-0469). Appendix A provides the 8-section HFE/UE Report structure; Appendix B covers sample size rationale; Appendix C describes use error root-cause analysis methodology.
FDA Human Factors Studies and Related Clinical Study Considerations in Combination Product Design and Development (February 2016). Relevant for combination products with overlapping usability and clinical study requirements.
Standards (paywalled; consult your standards library or notified body)
IEC 62366-1:2015 — Medical devices — Part 1: Application of usability engineering to medical devices (including Amendment 1:2020). This is the normative standard. IEC/TR 62366-2:2016 — Medical devices — Part 2: Guidance on the application of usability engineering to medical devices. This is informative guidance only. ANSI/AAMI HE75:2009/(R)2018 — Human factors engineering — Design of medical devices. A comprehensive human factors engineering reference used by many FDA reviewers as a methodology resource.
Related guides on this site
IEC 81001-5-1 + FDA 2023 Cybersecurity Guide — covers the intersection of security risk management and safety risk management, including how security threats can interact with usability-related hazards.
Standards Crosswalk — maps IEC 62366-1 clause requirements to ISO 13485, FDA QMSR, and ISO 14971 equivalents.