Since March 29, 2023, FDA has had statutory authority under Section 524B of the FD&C Act (codified at 21 U.S.C. §360n-2) to require cybersecurity documentation for any premarket submission of a “cyber device.” A transitional policy delayed enforcement until October 1, 2023. Since then, submissions missing the required cybersecurity documentation receive Refuse to Accept (RTA) letters at the 15-day acceptance-review mark and do not move to substantive review.
This guide exists because no one publishes the direct mapping between what IEC 81001-5-1 requires, what ANSI/AAMI SW96:2023 requires, and what the FDA cybersecurity guidance (current edition February 3, 2026) expects in a premarket submission — so quality and regulatory leads spend days resolving the same question: “Am I doing this twice?” The answer is no, if you understand which standard does what work.
What follows covers both the FDA path and the IEC/AAMI standards path in parallel: how to determine whether your device is in scope, what the two central standards require and how they divide the work, how to build the required artifacts without a dedicated security team, and how to respond if you already have an RTA letter in your inbox.
Is your product a “Cyber Device”?
The FDA guidance is reasonably clear here, and two screening questions cover most cases: does the device have a network interface, and does it accept (or work with) external files? If the answer to both is yes, you are in scope for the cybersecurity requirements under Section 524B of the FD&C Act, and your submission will require a Software Bill of Materials, a threat model, security risk management documentation, and a post-market vulnerability management plan.
What does the 524B statutory definition actually cover?
The statutory definition of a “cyber device” under Section 524B (21 U.S.C. §360n-2(c)) has three tests that all must be met: the device (1) includes software validated, installed, or authorized by the sponsor as a device or in a device; (2) has the ability to connect to the internet; and (3) contains any such technological characteristics validated, installed, or authorized by the sponsor that could be vulnerable to cybersecurity threats.
In practice, a cloud-connected app with an API backend clearly qualifies. A device with only Bluetooth connectivity to a phone is a borderline case — the statute’s “ability to connect to the internet” is read broadly by FDA reviewers and typically includes indirect connectivity through a paired phone or gateway. An entirely offline device with a USB port that accepts diagnostic files does not strictly meet the statutory definition (no internet connectivity), but FDA still recommends cybersecurity documentation under Sections IV–VI of the 2026 guidance because software combined with vulnerable technological characteristics creates risk regardless of connectivity. If you are unsure whether your device crosses the 524B line, assume it does and document your reasoning — FDA reviewers err toward inclusion for borderline devices.
What changes for Class I vs. Class III devices?
Most medical devices fall into Class II (moderate risk; 510(k) pathway). Class I devices (lowest risk) have lighter cybersecurity expectations. A Class I device with no network capability and no ability to accept external files is not a cyber device and the Section 524B requirements do not apply. A Class I device that does have network connectivity is still subject to the statutory requirements, though FDA’s proportionality principle means some elements — independent penetration testing, for example — may be scaled back. Document your reasoning either way.
Class III devices (highest risk; PMA pathway) carry the heaviest expectations. For a Class III cyber device, plan for independently verified threat models, third-party penetration testing reports, and a more rigorous post-market surveillance plan. FDA’s Refuse to File (RTF) review for PMA submissions is the analog to the 510(k) RTA, and the cybersecurity expectations are higher across the board.
This guide focuses on the Class II 510(k) reality. Adjust upward for Class III, and note that Class I devices may satisfy lighter versions of the same artifacts.
What if my device doesn’t meet both criteria?
If your device meets only one criterion — say, it connects to the Internet but does not accept external files — the statutory cyber device definition may not apply, but FDA still expects you to document that analysis. Write a brief memorandum describing how you evaluated the device against the Section 524B definition and reached your conclusion. That memo becomes part of your design control file. If your device never connects to any network and has no mechanism to accept external software or files, cybersecurity requirements under Section 524B do not apply; the FDA’s quality system requirements (QMSR, 21 CFR Part 820) still apply, and baseline security hygiene is still good design practice.
The two standards you need: IEC 81001-5-1 and ANSI/AAMI SW96:2023
Two standards form the technical backbone of a cybersecurity submission: IEC 81001-5-1:2021 covers how you build software securely across the life cycle; ANSI/AAMI SW96:2023 covers how you identify, assess, and treat security risks as a parallel discipline to ISO 14971 safety risk management. They are complementary, not redundant.
What does IEC 81001-5-1:2021 cover?
IEC 81001-5-1 is a process standard published December 2021 by the International Electrotechnical Commission. It covers secure software development across the entire life cycle: design, implementation, verification, release, and post-market maintenance. The FDA premarket cybersecurity guidance (from its September 2023 edition onward) explicitly references IEC 81001-5-1 as a viable Secure Product Development Framework (SPDF). It is harmonized in the EU under the Medical Device Regulation (MDR) and is the standard most widely expected by EU notified bodies for cybersecurity-related SDLC evidence.
The standard’s operative clauses run from Clause 4 (general requirements, including security management) through Clause 9 (software problem resolution), with post-market activities spread across Clauses 6 and 9. Clause 7 covers security risk management, but at a level of detail that practitioners describe as “laconic” — ANSI/AAMI SW96:2023 provides the prescriptive treatment of that portion.
What does ANSI/AAMI SW96:2023 add?
ANSI/AAMI SW96:2023 is a companion standard, formally recognized by FDA on November 7, 2023. Published by the Association for the Advancement of Medical Instrumentation (AAMI), SW96 focuses specifically on security risk management as a parallel discipline to ISO 14971 safety risk management. It prescribes the threat modeling, vulnerability identification, security assessment, incident response, and coordinated disclosure practices that IEC 81001-5-1 covers but does not prescribe in depth.
SW96 is not an SPDF — it does not replace IEC 81001-5-1. The two standards address different questions: IEC 81001-5-1 answers “how do we build software securely?” and SW96 answers “how do we identify and manage security risks across the device’s life cycle?” They overlap at the security risk management seam, and SW96 provides the more detailed treatment of that seam.
SW96 also explicitly contextualizes security risk alongside safety risk, making it easier for teams already running a 14971 process to plug in without building a parallel universe of documentation.
Which standard applies for a US submission vs. EU submission?
For a US-only submission, both standards work together: IEC 81001-5-1 covers secure SDLC evidence, and SW96 covers security risk management artifacts. For EU-only submissions, IEC 81001-5-1 is the primary requirement (it is harmonized; SW96 is US-origin and not a harmonized EU standard). For US + EU, adopt both — the combined coverage satisfies both regulatory environments and the incremental work over using 81001-5-1 alone is modest.
AAMI TIR57:2016/(R)2023 (Principles for medical device security — Risk management, FDA recognition no. 13-83) and AAMI TIR97:2019/(R)2023 (Principles for medical device security — Postmarket risk management for device manufacturers) are the predecessor technical information reports that SW96 consolidates and upgrades into a normative standard. Per AAMI Standards Program policies, TIRs are recommendation documents (“should”) whereas SW96 is a consensus standard with requirements (“shall”). TIR57 and TIR97 remain useful as further reading and are commonly referenced in audit contexts; your notified body or FDA reviewer may expect familiarity with TIR57 terminology.
SPDF for small teams
At its core, an SPDF for a small team means following modern software development and security best practices — including working with certified third-party partners — and being able to document that you did so. Many modern development tools and platforms publish their certifications and let you save or export logs of the configurations and activities performed in them. Make the best practices habitual, and regularly take a snapshot or download of the logs and work you have done on each platform; that habit will go a long way toward establishing an SPDF without a mountain of extra work.
In practice, this means: (1) use GitHub, GitLab, or Azure DevOps with branch protection, signed commits, and audit logs enabled; (2) enable static application security testing (SAST) in your CI/CD pipeline — most platforms offer this natively (GitHub Advanced Security, GitLab SAST); (3) use a dependency scanner (npm audit, Dependabot, Snyk) configured to block on critical vulnerabilities; (4) document your architecture decisions and threat models in version-controlled markdown or a design control document; (5) keep a security incident log (even if empty most of the time — the log itself demonstrates you are monitoring). Save screenshots or exports of your CI/CD logs, SAST reports, and dependency scans quarterly or before major releases. These snapshots become your evidence of following secure practices continuously, rather than as a one-time event.
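The snapshot habit can be made verifiable with very little tooling. The sketch below (file names and contents are hypothetical; it assumes your SAST and dependency-scan reports are exported as files) hashes each evidence artifact into a dated manifest, so a reviewer can later confirm the artifacts have not changed since collection:

```python
import hashlib
import json

def snapshot_manifest(evidence: dict[str, bytes], snapshot_date: str) -> dict:
    """Build a dated manifest of evidence artifacts (SAST reports,
    dependency scans, CI logs) with SHA-256 digests, suitable for
    filing alongside the design history file."""
    return {
        "snapshot_date": snapshot_date,
        "artifacts": {
            name: hashlib.sha256(blob).hexdigest()
            for name, blob in sorted(evidence.items())
        },
    }

# Hypothetical quarterly snapshot: in practice these bytes would be
# read from exported report files, not inlined.
manifest = snapshot_manifest(
    {
        "sast-report-q3.json": b'{"findings": []}',
        "dependency-scan-q3.txt": b"0 critical vulnerabilities",
    },
    snapshot_date="2025-09-30",
)
print(json.dumps(manifest, indent=2))
```

Committing the manifest (not necessarily the raw reports) to version control gives you a tamper-evident trail with near-zero ongoing effort.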
A single engineer wearing a “security discipline” hat, using existing platform features, can demonstrate SPDF compliance. You do not need a separate security team or a dedicated Chief Information Security Officer (CISO). IEC 81001-5-1 §4.1.4 (Security expertise) requires designated security expertise with documented competence, but does not require a full-time dedicated role — an engineering or quality lead with documented security training and responsibility is sufficient when the scope of work justifies it.
Using cloud infrastructure or HTTPS encryption does not, by itself, discharge the security requirement. You must document how your application logic validates data integrity, enforces authorization, and resists tampering. A team that says “we use AWS” without explaining how their application enforces role-based access control, input validation, or database encryption will not pass the acceptance review.
Integrating SPDF into existing QMS procedures
You do not need to rewrite your design-control procedure or risk-management process. Add a cybersecurity SOP that references IEC 81001-5-1 Clauses 4 through 9 (Clause 4 general requirements, Clause 5 software development, Clause 6 software maintenance, Clause 7 security risk management, Clause 8 software configuration management, Clause 9 software problem resolution) and maps each clause to the evidence you collect: design reviews, code reviews, SAST reports, testing reports, release checklists. Your QMS already collects this evidence for functional safety under IEC 62304; cybersecurity reuses it, filtered for security-specific concerns. The link to IEC 62304 matters: cybersecurity testing and verification activities (SAST, DAST, security testing for release) complement IEC 62304’s software verification and validation requirements and should be referenced together in your SOP, not maintained as separate procedures.
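One way to keep that clause-to-evidence mapping auditable is to hold it as data rather than prose. A minimal sketch — the evidence names are illustrative examples of what a QMS typically already collects, not a normative mapping:

```python
# Illustrative mapping of IEC 81001-5-1 operative clauses to evidence
# an existing QMS collects; the evidence names are hypothetical.
CLAUSE_EVIDENCE = {
    "Clause 4 (general requirements)": ["quality manual", "security training records"],
    "Clause 5 (software development)": ["design reviews", "code reviews", "SAST reports"],
    "Clause 6 (software maintenance)": ["patch release checklists"],
    "Clause 7 (security risk management)": ["threat model", "security risk file"],
    "Clause 8 (configuration management)": ["SBOM", "release tags"],
    "Clause 9 (problem resolution)": ["incident log", "CAPA records"],
}

def uncovered_clauses(mapping: dict[str, list[str]]) -> list[str]:
    """Return clauses with no evidence mapped — gaps to close before audit."""
    return [clause for clause, evidence in mapping.items() if not evidence]

gaps = uncovered_clauses(CLAUSE_EVIDENCE)
```

A script like this run in CI turns “is our SOP mapping complete?” into a check rather than a periodic manual review.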
Managing security risks (SW96 artifacts)
ANSI/AAMI SW96:2023 requires a set of artifacts that sit alongside your ISO 14971 safety risk management file. These are not new documents — they are extensions of risk-management thinking into the security domain.
What artifacts does SW96 require?
Security Risk Management Plan. The security equivalent of your ISO 14971 risk management plan. It describes your process for identifying, analyzing, evaluating, and treating security risks across the device’s life cycle. It should name the threat-modeling methodology you use (e.g., STRIDE), the team responsible for threat modeling, how often you update the threat model, and how you integrate security findings back into design decisions. A 5-page document is sufficient.
Security Risk Management File. A table or matrix documenting each threat identified, the mitigation you implemented, the residual risk level, and whether the risk is acceptable. This mirrors ISO 14971’s risk register but focuses on malicious threats rather than failure modes. If you already have a 14971 risk management file, add a “threat source” column (malicious actor, supply chain, environmental) to distinguish security risks from safety risks. One document, shared columns.
Threat Model. A documented analysis of your system’s data flows, trust boundaries, and potential attacker paths. At minimum: an architecture diagram, a list of “what if” scenarios (e.g., “What if an attacker intercepts network traffic between the device and cloud backend?”), and how each scenario is mitigated. Section 4 of this guide covers the artifact format in detail.
Coordinated Vulnerability Disclosure (CVD) Policy. A public statement of how external researchers can report security vulnerabilities to you responsibly. It should include a security contact email, a response-time commitment (typically: acknowledge within 48 hours, provide a fix or workaround within 30–60 days for critical issues), and a statement that you will not pursue legal action against good-faith reporters. This is required by Section 524B of the FD&C Act and expected by SW96. Sample CVD policies are available from CERT/CC and CISA; adapt one to your organization.
Incident Response Plan. Describes how you detect, respond to, and communicate security incidents. It should include: detection mechanisms (log monitoring, user reports), escalation paths (who owns the decision to issue a patch or recall), communication plans (how you notify users and FDA), and timelines (e.g., critical vulnerabilities assessed within 24 hours, patch released within 30 days or documented justification). This ties directly to your post-market surveillance plan.
How do SW96 artifacts integrate with existing ISO 14971 documentation?
Each artifact integrates into your QMS. The security risk management plan and file live in your design-control documentation alongside your ISO 14971 file. The threat model is a design artifact. The CVD policy is published on your website. The incident response plan is part of your post-market procedures.
The integration point with ISO 14971 matters: SW96 explicitly operates in the context of the ISO 14971:2019 safety risk management process. Security risks (harm from malicious actor) and safety risks (harm from device failure) are distinct categories that may interact — a cyberattack that corrupts a diagnostic output is simultaneously a security event and a safety hazard. Your combined risk management approach should make this connection explicit. NIST SP 800-30 (Guide for Conducting Risk Assessments) provides a useful methodology reference for the threat and vulnerability assessment portion, particularly for teams that need a framework for quantifying likelihood and impact in a way that maps to 14971’s risk acceptability criteria.
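For teams that want the NIST-style likelihood × impact assessment to feed 14971-style acceptability criteria, a minimal sketch follows. The level names, combination rule, and acceptability threshold are assumptions a team would calibrate against its own risk acceptability criteria, not values prescribed by NIST SP 800-30 or ISO 14971:

```python
# Illustrative 3x3 likelihood x impact combination; thresholds are
# assumptions to calibrate against your own acceptability criteria.
LEVELS = ["low", "medium", "high"]

def security_risk_level(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact into a single risk level."""
    score = LEVELS.index(likelihood) + LEVELS.index(impact)
    if score >= 3:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

def acceptable(risk_level: str) -> bool:
    """14971-style acceptability decision: 'high' residual security
    risk requires further mitigation before release."""
    return risk_level != "high"
```

The point of encoding the rule is consistency: every threat-model entry gets scored the same way, and the acceptability decision is traceable to a documented criterion.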
Threat modeling without the jargon
For a small SaMD team, an adequate threat model is a documented, structured conversation about what could go wrong with your software’s specific data flows, mapped to a realistic impact on patient safety.
It is not a comprehensive security audit, and it should not require a security engineering degree.
The “Good Enough” definition
An adequate threat model for a 10–100 person team consists of:
1. Architecture/Data Flow Diagram (DFD). A simple visual (whiteboard sketch is fine) showing where patient/device data enters, where it’s stored, and where it leaves (e.g., to an API or cloud backend).
2. Threat Analysis. A prioritized list of “what if” scenarios based on the DFD. If your product is a Class II SaMD, focus on Integrity (can the data be silently corrupted?) and Availability (can the doctor/patient access it when needed?).
3. Mitigation Mapping. A summary of how your existing controls (e.g., HTTPS, OAuth, encrypted databases) address those scenarios.
Expected timeframe
First pass: 4–8 hours of focused, cross-functional time (Engineering lead + Product/Quality lead). Preparation: 2 hours to get the DFD on a whiteboard. Workshop: 2–4 hours to walk the DFD and ask, “What if…?” Documentation: 2 hours to turn the notes into a living record.
Smallest credible artifact
Avoid complex tools; favor human-readable, version-controlled text.
1. The lighter choice: a Threat-Scenario Matrix (Markdown or spreadsheet). Columns: Component (from DFD) | Potential Threat | Potential Impact (Safety) | Existing Control | Residual Risk (Low/Med/High). Why: STRIDE is a great mental checklist for the facilitator, but forcing engineers to fill out a STRIDE table for every entry creates “jargon fatigue.”
2. The visual choice: a simple Data Flow Diagram annotated with Trust Boundaries (where data moves from user-controlled to server-controlled).
Start with a DFD + a simple Markdown table. It’s enough to satisfy an auditor (showing you considered the risks) and enough to guide engineering priorities (showing you fixed the obvious holes). Anything more, and you are doing “security theater” rather than building a safer product.
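The matrix itself can live as version-controlled data and be rendered to Markdown for the design file. A sketch with one hypothetical row — the scenario, control, and risk rating are illustrative, not findings from any real device:

```python
# Columns follow the Threat-Scenario Matrix described above; the row
# content is a hypothetical example.
COLUMNS = ["Component", "Potential Threat", "Potential Impact (Safety)",
           "Existing Control", "Residual Risk"]

ROWS = [
    ["Device-to-cloud API", "Attacker intercepts network traffic",
     "Clinician sees tampered diagnostic result",
     "TLS 1.2+ with certificate pinning", "Low"],
]

def to_markdown(columns: list[str], rows: list[list[str]]) -> str:
    """Render the threat-scenario matrix as a Markdown table."""
    lines = ["| " + " | ".join(columns) + " |",
             "|" + "---|" * len(columns)]
    lines += ["| " + " | ".join(row) + " |" for row in rows]
    return "\n".join(lines)

table = to_markdown(COLUMNS, ROWS)
print(table)
```

Keeping the rows as data means the same file can be diffed in code review when a threat or control changes — the living record the workshop notes feed into.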
The MDIC Threat Modeling Playbook for Medical Devices (co-authored with FDA, MITRE, and Adam Shostack) is the gold-standard tactical reference for threat modeling in a medical device context. It is dense (~80 pages) and was written assuming a facilitated workshop setting, but the worked examples are worth reviewing before your first session. The DFD + Threat-Scenario Matrix approach described here is a lean distillation of the Playbook’s methodology.
Threat modeling is not an IT exercise — it is a safety document. Every threat you identify should map to potential patient harm (wrong data, unavailable data, data tampering). If your threat model lists “network eavesdropping” as a concern but your system transmits only diagnostic results (not treatment instructions), the risk level is lower because the harm is lower. Conversely, if an attacker can change a dosage recommendation or alter a diagnostic algorithm output, the harm is severe. Frame your threat analysis in terms of ISO 14971 hazards: what patient outcome does each threat enable? This connection is the primary difference between submissions that pass and those that trigger RTA deficiency letters.
Mapping IEC 81001-5-1 and SW96 to the FDA guidance
Most quality managers face this question: “Which IEC 81001-5-1 clause does the FDA actually require?” The answer is distributed across the FDA’s cybersecurity guidance, most recently revised February 3, 2026 (Cybersecurity in Medical Devices: Quality Management System Considerations and Content of Premarket Submissions). The current version supersedes the original September 27, 2023 guidance and the June 27, 2025 revision. The February 2026 revision added Section VII, specifically addressing the §524B Cyber Device documentation requirements — if your device is a Cyber Device under the statute, Section VII is where FDA expects your 524B compliance artifacts to be organized. The table below maps the operative clauses of IEC 81001-5-1 (verified against the standard’s TOC) to the corresponding sections of the current FDA guidance.
Nobody publishes this table publicly. We do because it saves a regulatory lead roughly 20 hours of cross-referencing the two documents. The mapping is approximate — FDA often bundles what the standard splits, and notified body expectations for EU submissions may differ from FDA expectations on the same clause.
| IEC 81001-5-1 CLAUSE | REQUIREMENT | FDA 2026 GUIDANCE SECTION |
|---|---|---|
| 4.1.4 (Security expertise) | Designate security expertise; competence documented | IV.A (Cybersecurity as Part of Device Safety and the QMSR) |
| 4.2 (Security risk management framework) | Establish security risk management process as part of QMS | IV.A.1 (SPDF may satisfy the QMSR) |
| 5.1 (Software development planning) | Plan secure development activities across the life cycle | IV.A.1 (SPDF); VII.C.2 (for Cyber Devices under §524B(b)(2)) |
| 5.1.3 / 5.5.1 (Secure coding standards) | Define and apply secure coding standards | Appendix 1.D (Code, Data, and Execution Integrity) |
| 5.2.1 (Health software security requirements) | Derive security requirements from threat model | V.B.1 (Implementation of Security Controls) |
| 5.3 (Software architectural design — defense in depth) | Document architecture with trust boundaries | V.B, V.B.2 (Security Architecture and Views) |
| 5.7.2 (Threat mitigation testing) | Test that controls address identified threats | V.C (Cybersecurity Testing) |
| 5.7.3 (Vulnerability testing) | SAST, DAST, fuzz testing | V.C (Cybersecurity Testing) |
| 5.7.4 (Penetration testing) | Independent penetration testing | V.C (Cybersecurity Testing) |
| 5.8 (Software release) | Security sign-off and release documentation | V.C; Appendix 4 (Premarket Submission Documentation) |
| 6.1.1 / 6.2 / 6.3 (Timely security updates; problem and modification analysis; modification implementation) | Deliver security patches; manage software modifications across life cycle | Appendix 1.H (Firmware and Software Updates); VI.B (Cybersecurity Management Plans); VII.D (Modifications — for Cyber Devices) |
| 7.2 (Identify vulnerabilities, threats, adverse impacts) | Threat and vulnerability identification | V.A.1 (Threat Modeling) |
| 7.3 (Estimate and evaluate security risk) | Use exploitability (not probability) for security risk | V.A.2 (Cybersecurity Risk Assessment) |
| 7.4 (Controlling security risks) | Map risk controls to identified threats | V.B.1 (Implementation of Security Controls) |
| 7.5 (Monitoring effectiveness of risk controls) | Ongoing post-market monitoring | V.A.6 (TPLC Security Risk Management); VI.B; VII.E (Reasonable Assurance of Cybersecurity) |
| 8 (Software configuration management) | Machine-readable SBOM and component tracking | V.A.4 (Third-Party Software Components); Appendix 4; VII.C.3 (for Cyber Devices under §524B(b)(3)) |
| 9.2 (Receiving vulnerability notifications) | CVD intake process | VI.B (Cybersecurity Management Plans); VII.C.1 (for Cyber Devices under §524B(b)(1)) |
| 9.5 (Addressing security-related issues) | Incident response and coordinated disclosure | VI.B; VII.C.1; ANSI/AAMI SW96:2023 Clause 10 (Production and Post-production Activities) |
Which clauses satisfy which FDA sections?
Every operative IEC 81001-5-1 clause has a counterpart in the FDA 2026 guidance, though FDA often bundles what the standard splits. The FDA guidance uses Roman numerals with letter subsections (I–VII) plus five Appendices. Section V (“Using an SPDF to Manage Cybersecurity Risks”) contains most of the general technical expectations; Section VII (“Cyber Devices”) adds the §524B-specific documentation requirements that apply to any device meeting the statutory Cyber Device definition. If your device is a Cyber Device, expect FDA reviewers to look for both sets of artifacts. Your premarket submission should cross-reference explicitly: “Clause 7.2 (Identification of vulnerabilities, threats and associated adverse impacts) is satisfied by Exhibit 14, Section 3 (Threat Model), which aligns with FDA guidance Section V.A.1 (Threat Modeling).” This explicit traceability prevents reviewers from concluding that a requirement is missing.
What if I already have ISO 14971 and IEC 62304 procedures in place?
You do not need to rewrite them. Create a “cybersecurity extensions” section in your design-control SOP and risk-management SOP. Reference IEC 81001-5-1 and SW96 explicitly, clarify which controls address security vs. safety, and ensure your design reviews and risk assessments include security-specific criteria (authentication controls, input validation, data integrity). This incremental approach costs far less than a full rewrite and is commonly accepted by FDA reviewers. IEC 62304’s software verification and validation activities form the foundation; IEC 81001-5-1’s security-specific testing requirements are additive, not duplicative.
SBOMs and supply chain
An SBOM is a machine-readable inventory of the software components — commercial and open-source — that make up your product. Section 524B(b)(3) of the FD&C Act requires an SBOM in a standardized format (SPDX or CycloneDX), covering all components, and listing the version, license, and end-of-support date for each.
SPDX or CycloneDX — which format should I choose?
Both are acceptable to FDA. SPDX (Software Package Data Exchange) is maintained by the Linux Foundation and is the older standard; SPDX JSON is widely supported. CycloneDX is maintained by the OWASP Foundation and was built specifically for security supply-chain use cases; it includes VEX (Vulnerability Exploitability Exchange) support for explaining why a vulnerability in a component does not affect your product. Pick whichever your build tooling supports natively. Most teams choose based on their CI/CD platform: GitHub users often gravitate toward SPDX (via Syft); Node.js/JavaScript shops often use CycloneDX tools (npm sbom, cyclonedx-npm) because they integrate tightly. Either format must be machine-readable — a PDF table of components is not an SBOM.
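To make the format concrete, here is a minimal CycloneDX-shaped JSON document built in Python. The component entry is hypothetical, and a real submission SBOM should come from generator tooling rather than hand assembly — this sketch only illustrates what “machine-readable” means in practice:

```python
import json

# Minimal CycloneDX-shaped SBOM; the component entry is hypothetical.
# Real SBOMs should be produced by tooling (e.g., Syft, cyclonedx-npm),
# not written by hand.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        },
    ],
}

document = json.dumps(sbom, indent=2)
print(document)
```

Because the document is structured JSON, FDA reviewers (and your own CI checks) can parse it programmatically — exactly what a PDF table cannot offer.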
What constitutes “all components”?
Your SBOM must include: (1) every commercial library you license and include in your binary, (2) every open-source library your build pulls in, including transitive dependencies, and (3) every piece of software your device relies on at runtime (embedded Linux kernel, OpenSSL, PHP runtime, etc.). A common RTA trigger is an SBOM that lists only major libraries (e.g., “OpenSSL”) without naming transitive dependencies or version specificity. Most modern dependency-management systems (npm, Maven, Python pip) can auto-generate SBOMs — use that. Manual SBOMs are a source of gaps.
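A cheap guard against the “major libraries only, no versions” RTA trigger is a completeness check over the generated SBOM in CI. A sketch, assuming a CycloneDX-style component list (the component data is hypothetical):

```python
def incomplete_components(components: list[dict]) -> list[str]:
    """Return names of SBOM components missing version specificity —
    a common RTA trigger when an SBOM lists only major libraries."""
    return [
        c.get("name", "<unnamed>")
        for c in components
        if not c.get("version")
    ]

# Hypothetical CycloneDX-style component list for illustration.
components = [
    {"name": "openssl", "version": "3.0.13"},
    {"name": "zlib"},  # missing version: flagged below
]
flagged = incomplete_components(components)
```

Failing the build when `flagged` is non-empty keeps the gap from reaching a submission.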
How do I maintain the SBOM after release?
An SBOM is not a one-time submission artifact. Per Section 524B(b)(3), keep the SBOM current as your software evolves. In practice: every time you update a dependency or push a patch, regenerate your SBOM (your CI/CD pipeline should do this automatically) and attach it to the release tag. If a component reaches end-of-support while devices are still in the field, document your mitigation — either you have compensating controls in place, or you are planning a mandatory software update. Do not let your SBOM drift stale post-market.
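End-of-support tracking can also be automated against the component inventory. A sketch, assuming each record carries a team-maintained end-of-support date — that field is not emitted automatically by SBOM generators, so someone must curate it:

```python
from datetime import date

def past_end_of_support(components: list[dict], today: date) -> list[str]:
    """Flag components whose end-of-support date has passed while
    devices may still be fielded — each flagged component needs a
    documented mitigation or a planned mandatory update."""
    flagged = []
    for c in components:
        eos = c.get("end_of_support")  # ISO date string, team-maintained
        if eos and date.fromisoformat(eos) < today:
            flagged.append(c["name"])
    return flagged

# Hypothetical inventory with curated end-of-support dates.
inventory = [
    {"name": "openssl", "end_of_support": "2026-09-07"},
    {"name": "legacy-parser", "end_of_support": "2024-01-31"},
]
stale = past_end_of_support(inventory, today=date(2025, 6, 1))
```

Run on a schedule, this turns “do not let your SBOM drift stale” from a resolution into an alert.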
The SBOM alone does not satisfy Section 524B. You must also provide a Vulnerability Management Plan that describes how you monitor for new CVEs affecting your components, how you assess whether each CVE affects your device, and how you prioritize fixes. Include specific monitoring sources: NVD (National Vulnerability Database), ICS-CERT, security advisories from your component vendors. Name explicit SLAs: “We assess critical vulnerabilities within 24 hours. If a fix is available, we release a patch within 30 days or provide documented justification for delay.” An SBOM without a vulnerability management plan will be cited as incomplete.
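The SLA language in the plan can be backed by a simple triage rule in your vulnerability-management tooling. A sketch using CVSS v3-style base scores — the thresholds and deadlines mirror the example commitments above and are assumptions to calibrate against your own plan, not regulatory requirements:

```python
from datetime import datetime, timedelta

def triage_sla(cvss_score: float, reported: datetime) -> dict:
    """Derive assessment and patch deadlines from a CVSS base score.
    Severity bands and SLA values are illustrative assumptions."""
    if cvss_score >= 9.0:
        severity, assess_hours, patch_days = "critical", 24, 30
    elif cvss_score >= 7.0:
        severity, assess_hours, patch_days = "high", 72, 60
    else:
        severity, assess_hours, patch_days = "moderate", 168, 90
    return {
        "severity": severity,
        "assess_by": reported + timedelta(hours=assess_hours),
        "patch_by": reported + timedelta(days=patch_days),
    }

plan = triage_sla(9.8, reported=datetime(2025, 3, 1, 9, 0))
```

Encoding the SLAs means every CVE gets a deadline the moment it is logged, which is exactly the specificity (“within 24 hours,” “within 30 days”) FDA expects to see.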
Post-market vigilance
The FDA guidance places significant emphasis on post-market cybersecurity management. Section V.A.6 (Total Product Life Cycle Security Risk Management) and Section VI.B (Cybersecurity Management Plans) together require a plan describing how you will identify, assess, and respond to vulnerabilities discovered after your device is on the market.
What does a post-market cybersecurity plan need to include?
Your post-market plan should address four areas:
(1) Vulnerability detection and monitoring. Who receives security reports? How are reports escalated? A public security contact email or a coordinated disclosure program (see CVD policy in Section 5) allows external researchers to notify you of issues. Name your monitoring sources explicitly: NVD, ICS-CERT, security advisories from your component vendors.
(2) Triage and assessment. Once a vulnerability is reported — internally or externally — how quickly do you assess whether it affects your deployed devices? For critical vulnerabilities, plan for assessment within 24 hours. For high-severity issues, 48–72 hours. FDA expects specific timelines, not intent language.
(3) Patch release and deployment. How do you deliver patches to deployed devices? If your device is cloud-connected, patches may be automatic; if not, you may rely on users downloading an update from your website. Document the mechanism. Plan to release critical patches within 30 days or provide a documented exception with technical justification.
(4) Communication. How do you notify users, healthcare providers, and FDA of vulnerabilities and patches? Mandatory FDA notification for a specific post-market vulnerability is governed by 21 CFR Part 806 (Medical Devices; Reports of Corrections and Removals). Under §806.10, any correction or removal initiated to reduce a “risk to health” (as defined at §806.2(k)) must be reported within 10 working days; records must be maintained for non-reportable corrections as well (§806.20). Section 524B(b)(2) separately requires a plan for proactive security updates, which is a distinct obligation. Your incident response plan should include communication templates and timelines that cover both obligations.
This is often the hardest post-market gap to address: a device architecture that cannot be patched. If your device is a standalone embedded system with no network interface and no storage for signed updates, you cannot patch a critical flaw without a hardware recall. FDA reviewers will ask: How do you mitigate this risk? Possible answers include: (a) you architected the system with hardware-backed secure boot and remote update capability; (b) you designed with defense-in-depth so that if one component is compromised, others compensate; or (c) you have a voluntary recall and user communication protocol planned if a critical vulnerability is discovered. If you have none of these mitigations, your submission will be cited as incomplete. Design your software architecture assuming that you will need to patch a critical flaw post-market.
Responding to an RTA letter
If your submission receives a Refuse to Accept (RTA) letter on the cybersecurity review, you have 180 days to address every deficiency listed in the letter. The clock starts on the RTA notification date; on day 181, your submission is automatically withdrawn unless you have resubmitted a comprehensive response. Piecemeal responses do not work — if you address some deficiencies but not others, FDA issues a second RTA (“RTA2”), and the clock resets.
How does the FDA acceptance review work?
FDA performs a 15-day acceptance review on every 510(k) submission before substantive scientific review begins. This is where cybersecurity RTAs are typically issued. Your submission arrives; FDA’s acceptance team checks a checklist: “Is there a threat model? Is there an SBOM? Is there a post-market plan?” If any artifact is missing or incomplete, FDA issues an RTA at the 15-day mark and your submission never reaches the substantive review queue. Since October 1, 2023, missing cybersecurity artifacts — SBOM, threat model, vulnerability management plan, security risk assessment — are stock RTA triggers. FDA publishes the current Acceptance Checklist items at fda.gov/medical-devices/premarket-notification-510k/acceptance-checklists-510ks; consult it directly for the exact line items applicable to your submission.
For PMA submissions, the analog is a Refuse to File (RTF), issued within 45 days. For De Novo submissions, a similar acceptance review applies.
What are the most common RTA deficiency patterns?
The most frequent RTA deficiencies in cybersecurity submissions fall into five patterns. Understanding which pattern you have helps you respond efficiently.
What response structure works?
The strongest RTA responses follow this structure:
1. Executive summary (1 page): Describe the changes at a high level. “We have completed a detailed threat model using STRIDE, added a formal vulnerability management plan with specific SLAs, regenerated our SBOM using Syft, and engaged an independent penetration tester. All artifacts address the deficiencies noted in the RTA letter of [date].”
2. Deficiency-by-deficiency table (main body): For each deficiency listed in the RTA letter, include three columns: (a) FDA deficiency, (b) where you addressed it (file name, exhibit number, section), (c) evidence attached. Reviewers can triage in minutes. Example: Deficiency: “You did not provide a threat model.” Response: “Threat model provided in Exhibit 14, IEC 81001-5-1 Design Control File, Section 4.2. Includes DFD with trust boundaries, threat-scenario matrix, and traceability to design review meeting notes from [date].”
3. Updated or new exhibits: Attach the revised artifacts. Do not include only a written description — provide the actual documents.
4. Evidence of integration: For each deficiency, show how the fix integrates with your other QMS documents. Reference your design-control procedure, risk-management file, and post-market surveillance plan. Reviewers are trained to trace a threat finding through to the risk file and labeling — if you show that thread, deficiency letters decrease.
5. Tone: Direct and professional. Acknowledge that the original submission was incomplete and describe what you learned and fixed. FDA reviewers respond better to a systematic response than to a defensive explanation.
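The deficiency table in step 2 is easy to generate mechanically from a list of entries. A minimal sketch (the column wording mirrors the three columns described above; the row data is hypothetical):

```python
def deficiency_table(rows):
    """Render a deficiency-by-deficiency response table as Markdown.

    Each row is a (deficiency, location, evidence) tuple; the content
    shown here is illustrative, not FDA wording.
    """
    lines = [
        "| FDA deficiency | Where addressed | Evidence attached |",
        "| --- | --- | --- |",
    ]
    for deficiency, location, evidence in rows:
        lines.append(f"| {deficiency} | {location} | {evidence} |")
    return "\n".join(lines)

rows = [
    ("No threat model provided",
     "Exhibit 14, Section 4.2",
     "DFD, threat-scenario matrix, design-review notes"),
]
print(deficiency_table(rows))
```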
Raw SAST or DAST output is not a submission artifact. If you include a 1,000-line static analysis report with no analysis, FDA will cite it as unintelligible or incomplete. Instead, provide a summary: “We ran SonarQube SAST analysis on [date] and identified 47 issues. Of those, we classified 12 as security-related. Of the 12, we triaged as follows: 3 Critical (affecting patient data integrity — fixed before release), 6 High (affecting application availability — addressed in patch 2.1, scheduled for [date]), 3 Medium (defense-in-depth hardening — planned for next minor release). We have documented the triage and mitigation plan in Exhibit X.” This demonstrates you understand the findings and have made a risk-based decision. Raw output is available upon request — do not include it in the primary submission package.
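The triage summary described above can be produced mechanically from a SAST tool's export. A sketch assuming a generic JSON findings format (the field names are hypothetical; SonarQube, Semgrep, and other tools each use their own schema, so adapt the field access to your tool's actual output):

```python
import json
from collections import Counter

def triage_summary(findings_json: str) -> dict:
    """Bucket security-related SAST findings by severity.

    Assumes each finding carries "severity" and "security_related"
    fields; these names are placeholders for your tool's schema.
    """
    findings = json.loads(findings_json)
    security = [f for f in findings if f.get("security_related")]
    return {
        "total": len(findings),
        "security_related": len(security),
        "by_severity": dict(Counter(f["severity"] for f in security)),
    }

raw = json.dumps([
    {"severity": "critical", "security_related": True},
    {"severity": "high", "security_related": True},
    {"severity": "low", "security_related": False},
])
print(triage_summary(raw))
```

The summary, plus the documented mitigation decision for each bucket, is the submission artifact; the raw report stays in your files, available on request.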
Frequently asked questions
- Is my Class I device subject to these requirements?
- A Class I device with no network capability and no ability to accept external files is not a cyber device — Section 524B requirements do not apply. A Class I device that does have network connectivity and accepts external files is in scope, though under FDA's proportionality principle some elements (third-party penetration testing, for example) may be scaled back for lower-risk devices. Document your analysis of the Section 524B definition either way.
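The statutory three-part test lends itself to a simple, documentable decision record. A sketch (not legal advice; the three booleans paraphrase the tests in 21 U.S.C. §360n-2(c)):

```python
def is_cyber_device(includes_sponsor_software: bool,
                    can_connect_to_internet: bool,
                    has_potentially_vulnerable_tech: bool) -> bool:
    """All three Section 524B tests must be met for a device to be
    a "cyber device"; failing any one takes it out of scope."""
    return (includes_sponsor_software
            and can_connect_to_internet
            and has_potentially_vulnerable_tech)

# A Class I device with no network capability fails test (2):
print(is_cyber_device(True, False, True))  # False: out of scope
```

Keeping the three answers, with your rationale for each, in the design history file is the documented analysis the answer above recommends.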
- What about in vitro diagnostic devices (IVDs)?
- IVDs are subject to the same FDA cybersecurity requirements if they are cyber devices (connected, accepting external files). EU IVD manufacturers should note that the IVDR (In Vitro Diagnostic Regulation) references IEC 81001-5-1 directly. The standards and guidance are the same; the regulatory pathway may differ. Consult your notified body if pursuing EU IVD approval.
- Does EU MDR require the same cybersecurity artifacts?
- IEC 81001-5-1 is a harmonized standard under EU MDR, meaning notified bodies expect compliance with it. The EU MDR does not have an FDA-equivalent cybersecurity guidance document; instead, MDCG 2019-16 Revision 1 (July 2020) provides the current EU-side guidance on cybersecurity for medical devices, available at health.ec.europa.eu. Your artifacts will be similar (threat model, secure development evidence, post-market plan) but organized under the IEC 81001-5-1 structure rather than the FDA section framework. Building your submission to satisfy IEC 81001-5-1 + SW96 will largely cover the EU MDR cybersecurity expectations as well. Verify the specific documentation format with your EU notified body.
- If I use Auth0, Okta, or Stripe, do I inherit their security posture?
- Partially. If you delegate authentication to Auth0, you benefit from their secure design and maintenance — your SBOM and threat model should acknowledge this explicitly. However, FDA still expects you to evaluate risks that remain in your application: proper token handling, logging of authentication failures, rate-limiting on failed attempts. Your threat model should include a row for "third-party identity provider compromise" and your mitigations (e.g., token expiration, refresh-token rotation, detection of impossible-travel scenarios). You inherit the provider's security controls; you do not inherit their audits. You remain responsible for your application's security controls and for monitoring the provider's security advisories.
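Token handling is one risk that stays on your side of the boundary regardless of the provider. A minimal sketch of checking a JWT's `exp` claim before trusting it (payload decoding only; signature verification is deliberately omitted here and is handled by your IdP's SDK, and `make_token` is a hypothetical test helper):

```python
import base64
import json
import time

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT (no signature check here)."""
    payload_b64 = token.split(".")[1]
    # JWTs use unpadded base64url; restore padding before decoding
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def token_expired(token: str, now=None) -> bool:
    """True if the token's exp claim is missing or in the past."""
    now = time.time() if now is None else now
    exp = jwt_payload(token).get("exp")
    return exp is None or exp <= now

def make_token(payload: dict) -> str:
    # Hypothetical helper building an unsigned token for illustration
    seg = base64.urlsafe_b64encode(
        json.dumps(payload).encode()).decode().rstrip("=")
    return f"eyJhbGciOiJub25lIn0.{seg}."

print(token_expired(make_token({"exp": 0})))  # True: exp is in the past
```

Treating a missing `exp` as expired, as above, is the conservative default; your threat model's "third-party identity provider compromise" row should cover the complementary server-side mitigations (rotation, revocation, anomaly detection).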
- How much documentation is enough for a 5-person startup?
- Proportional to risk, not to company size. A 5-person startup building a non-critical Class II SaMD can satisfy all cybersecurity requirements with approximately: (1) a 5-page security management plan, (2) a threat model in a Markdown file with a simple DFD and threat-scenario table, (3) an SBOM auto-generated from your build tooling, (4) a 3-page vulnerability management plan referencing your QMS incident procedures, (5) a CVD policy adapted from CERT/CC, and (6) evidence of SAST integration (CI/CD logs). That is roughly 20 pages of new documentation, not 200. One engineer reading this guide and a focused day of work can produce all of the above.
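Item (3), the auto-generated SBOM, is normally produced by tooling such as Syft or a CycloneDX build plugin. For orientation, a sketch of the minimal CycloneDX JSON shape those tools emit (the component list is a placeholder; real tooling also adds purls, license data, hashes, and metadata):

```python
import json

def minimal_cyclonedx(components):
    """Build a bare-bones CycloneDX 1.5 SBOM document.

    Shows only the top-level shape; generate the real SBOM from your
    build tooling rather than by hand.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

sbom = minimal_cyclonedx([("openssl", "3.0.13"), ("zlib", "1.3.1")])
print(json.dumps(sbom, indent=2))
```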
- What if I discover a critical vulnerability before I ship?
- If discovered during development but before your submission, fix it, document the fix in your design control file, and update your threat model to reflect the control. No RTA trigger. If discovered after your submission but before FDA approval, submit an amendment describing the issue, the fix, updated test evidence, and assurance that the device remains safe and effective. If discovered post-approval (after your 510(k) is cleared), follow your vulnerability management and post-market surveillance procedures. Mandatory FDA notification is governed by 21 CFR Part 806 (Medical Devices; Reports of Corrections and Removals) — §806.10 requires reporting any correction or removal initiated to reduce a "risk to health" (defined at §806.2(k)) within 10 working days. Prepare a patch and communicate with users.
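The Part 806 clock runs in working days, so the deadline is worth computing rather than eyeballing. A sketch counting weekdays only (federal holidays are omitted for brevity; a production calendar must exclude them too):

```python
from datetime import date, timedelta

def working_day_deadline(start: date, working_days: int = 10) -> date:
    """Advance `working_days` weekdays from `start`, skipping weekends.

    Federal holidays are NOT handled here; add them before relying
    on this for an actual regulatory deadline.
    """
    current = start
    remaining = working_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return current

# A correction initiated on a Monday is reportable two weeks later
print(working_day_deadline(date(2024, 1, 1)))  # 2024-01-15
```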
- Can I use open-source components, or does FDA prefer commercial licenses?
- FDA accepts open-source components, provided you: (1) list them in your SBOM with license information, (2) comply with the license terms (copyleft licenses may require disclosure or source code sharing), (3) monitor them for vulnerabilities via your vulnerability management process, and (4) have a plan for end-of-life or unmaintained components. Many Class II SaMD devices rely heavily on open-source (Linux, OpenSSL, Node.js) and pass FDA review. Unmaintained or abandoned open-source components with no monitoring or replacement plan are a deficiency trigger.
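Condition (2), license compliance, can be screened directly from the SBOM. A sketch flagging common copyleft SPDX identifiers (the list is illustrative, not exhaustive, and license analysis ultimately needs legal review):

```python
# SPDX identifiers commonly treated as copyleft; illustrative, not exhaustive
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only", "LGPL-3.0-only"}

def flag_copyleft(components):
    """Return names of components whose declared license needs
    copyleft review before distribution."""
    return [c["name"] for c in components if c.get("license") in COPYLEFT]

components = [
    {"name": "openssl", "license": "Apache-2.0"},
    {"name": "readline", "license": "GPL-3.0-only"},
]
print(flag_copyleft(components))  # ['readline']
```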
- What about devices with industrial control system (ICS) or operational technology (OT) components?
- Devices that incorporate ICS or OT elements — process controllers, industrial protocols, SCADA interfaces — should also consult IEC 62443 (Industrial Automation and Control Systems Security) alongside IEC 81001-5-1. IEC 62443 addresses security requirements at the component, system, and operational levels for industrial environments, and FDA reviewers familiar with connected infrastructure devices may expect its methodology referenced in your threat model. The interaction between 81001-5-1 (health software SDLC) and 62443 (industrial systems security) is a specialist area; if your device crosses this boundary, engage a regulatory consultant with dual expertise.
- Does handling protected health information (PHI) add requirements beyond cybersecurity?
- If your device handles PHI, the HIPAA Security Rule (45 CFR Part 164, Subpart C) applies in addition to FDA cybersecurity requirements. HIPAA and FDA cybersecurity have overlapping controls (access control, audit logging, transmission security) but different regulatory frameworks and enforcement bodies (OCR vs. FDA). A SaMD that collects diagnostic data and transmits it to a covered entity's EHR will need to satisfy both. The technical safeguards under HIPAA (45 CFR §164.312) map reasonably well to the authentication, encryption, and audit-log controls your FDA cybersecurity submission requires — document the overlap explicitly rather than treating them as separate compliance programs.
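The audit-control overlap under §164.312(b) is concrete: one structured log record can serve both frameworks. A sketch of the fields such a record might carry (the field names are an assumption for illustration, not a regulatory schema):

```python
import json
from datetime import datetime, timezone

def audit_record(user_id, action, resource, outcome):
    """Build one structured audit-log entry covering the overlapping
    HIPAA (45 CFR 164.312(b)) and FDA audit-logging expectations.
    Field names here are illustrative, not a mandated schema."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,    # who acted
        "action": action,      # what they did (e.g., "read", "export")
        "resource": resource,  # which PHI record or device function
        "outcome": outcome,    # "success" or "failure"
    })

print(audit_record("clinician-042", "read", "patient/123/labs", "success"))
```

Documenting that one log format satisfies both rule sets, as the answer above suggests, is simpler to defend than maintaining two parallel logging programs.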
Further reading
Regulatory guidance
FDA Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions (September 27, 2023; updated June 27, 2025; current edition February 3, 2026).
MDIC Threat Modeling Playbook for Medical Devices.
MDCG 2019-16 Revision 1 (July 2020) — Guidance on Cybersecurity for Medical Devices (EU).
FDA Acceptance Checklists for 510(k)s.
Innolitics — How to Avoid 14 Common FDA Cybersecurity Deficiencies.
Standards (paywalled; consult your standards library or notified body)
IEC 81001-5-1:2021 — Health software and health IT systems security — Secure Product Development Framework (SPDF).
ANSI/AAMI SW96:2023 — Standard for medical device security — Security risk management for device manufacturers.
AAMI TIR57:2016/(R)2023 — Principles for medical device security — Risk management (FDA recognition no. 13-83).
AAMI TIR97:2019/(R)2023 — Principles for medical device security — Postmarket risk management for device manufacturers.
Reference and tooling
NIST SP 800-30 — Guide for Conducting Risk Assessments (methodology reference for threat and vulnerability assessment).
NTIA Software Component Transparency: Framing Guidance.
OpenRegulatory IEC 81001-5-1 resources.
Standards placement: This guide references specific sections of IEC 81001-5-1 and ANSI/AAMI SW96:2023. Both standards are available from IEC (iec.ch), AAMI (aami.org), and national standards bodies. Costs: approximately $200–300 USD per standard. Many notified bodies and regulatory consultants have copies available for reference.