Breaking Down the FDA’s Draft Guidance on Artificial Intelligence-Enabled Device Software Functions

📄 Based on FDA Draft Guidance (January 2025): Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations

Disclaimer: This article summarizes publicly available draft guidance from the FDA and does not constitute regulatory advice. Hattrick IT is not a regulatory agency, and we recommend consulting qualified regulatory professionals for compliance guidance.

At Hattrick IT, we’re a software development company specializing in digital health and medical device solutions. While we do not provide regulatory consulting or legal guidance, we work closely with clients building FDA-regulated software, and we follow these developments closely to build with regulatory awareness from the very start.

In this article, we summarize key points from the draft and reflect on what it may mean for development teams building AI-enabled health software.

Why This Guidance Matters Now

Artificial intelligence is reshaping the future of healthcare. From clinical decision support tools to diagnostic software and real-time monitoring platforms, AI is helping providers personalize care, automate analysis, and scale delivery in ways previously unimaginable.

But with power comes complexity and risk. AI-enabled software functions present unique challenges around reliability, transparency, bias, validation, and ongoing performance. To help the industry navigate this evolving space, the FDA released a draft guidance in January 2025 focused specifically on AI-enabled medical device software.

This draft document is a step toward providing clarity in a field where regulation is still catching up to the pace of innovation. It outlines regulatory expectations for how AI-based device software should be developed, monitored, updated, and submitted for review. It also offers insight into the agency’s thinking around change control, transparency, and the use of real-world data.

A Lifecycle Approach to AI in Medical Devices

One of the most important messages in this draft guidance is the need for a Total Product Lifecycle (TPLC) approach when building AI-based software. Unlike static devices, AI-based tools often evolve over time through model updates, retraining, or adaptation to new data. The FDA wants developers to think about the entire life of the product, not just the point of submission.

This includes:

  • How models are trained, validated, and tested

  • What kind of data is used—including real-world or synthetic

  • What happens after deployment (monitoring for drift or degradation)

  • How updates will be managed without compromising safety or performance

The TPLC framework urges proactive planning. The more predictable and well-documented your development and update processes are, the smoother your path through regulatory review.
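
To make this concrete, here is a minimal sketch of the kind of lifecycle record a team might keep alongside each released model version. The structure and field names are our own illustration, not anything the FDA prescribes:

```python
from dataclasses import dataclass


@dataclass
class ModelLifecycleRecord:
    """Illustrative TPLC record kept alongside one released model version."""
    model_version: str
    training_data_sources: list[str]      # real-world, synthetic, or both
    validation_metrics: dict[str, float]  # results from pre-release testing
    monitoring_plan: str                  # how drift/degradation is tracked
    update_policy: str                    # how post-launch changes are managed


record = ModelLifecycleRecord(
    model_version="1.2.0",
    training_data_sources=["site_a_ehr_2023", "synthetic_augmentation_v2"],
    validation_metrics={"sensitivity": 0.94, "specificity": 0.91},
    monitoring_plan="Monthly drift check against the validation distribution",
    update_policy="Retrain only under the authorized PCCP modification protocol",
)
```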

Predetermined Change Control Plans (PCCPs)

A major innovation in the FDA’s draft guidance on AI-enabled medical device software is its reference to Predetermined Change Control Plans (PCCPs) as a framework for enabling updates to learning models post-authorization. The draft builds on concepts developed more fully in a separate FDA guidance, “Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions” (December 2024).

According to that guidance, a PCCP must include three interconnected components, each illustrated in the code sketch after this list:

  • Description of Modifications: A detailed list of proposed future changes to the AI model, with clear rationales for each one. These changes should be specific, well-defined, and verifiable. The FDA expects transparency regarding whether modifications will be automated or manual, and whether they will apply globally (all devices) or locally (adapted to specific users or conditions).
  • Modification Protocol: A documented method for validating and implementing each proposed change, including data handling, re-training procedures, performance evaluation metrics, and update procedures. These must align with the manufacturer's quality system and ensure continued safety and effectiveness of the device.
  • Impact Assessment: An evaluation of the cumulative benefits and risks of implementing the planned modifications. This includes analyzing potential bias, unintended harms, and interactions between different changes. It ensures that the combined effect of updates won’t compromise patient safety or device performance.
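
To make the three components above concrete, here is a hypothetical sketch of how a team might represent a PCCP’s structure internally. The class and field names are ours, for illustration only; the guidance does not prescribe any particular format:

```python
from dataclasses import dataclass


@dataclass
class Modification:
    """One proposed future change (Description of Modifications)."""
    description: str
    rationale: str
    automated: bool   # implemented automatically or manually
    scope: str        # "global" (all devices) or "local" (specific sites/users)


@dataclass
class ModificationProtocol:
    """How each change is validated and rolled out (Modification Protocol)."""
    data_management: str
    retraining_procedure: str
    performance_metrics: list[str]
    update_procedure: str


@dataclass
class ImpactAssessment:
    """Cumulative benefit/risk evaluation of the planned changes."""
    benefits: str
    risks: str
    interaction_analysis: str   # how the changes may interact with one another


@dataclass
class PCCP:
    modifications: list[Modification]
    protocol: ModificationProtocol
    impact_assessment: ImpactAssessment
```

In practice this information lives in submission documents rather than code, but structuring it early keeps day-to-day development aligned with the authorized plan.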

By using PCCPs, manufacturers can make certain updates more efficiently—without the delays of submitting a new application for each one. However, the FDA emphasizes that PCCPs must be reviewed and authorized during the initial marketing submission of the device. Any changes made outside the approved PCCP may still trigger the need for a new submission.

This approach acknowledges the iterative nature of AI development while upholding the FDA’s commitment to safety, transparency, and accountability.

Transparency: Model Cards and Beyond

A major theme throughout the draft guidance is transparency. The FDA wants AI-enabled devices to be explainable and interpretable to all relevant stakeholders—regulators, clinicians, and even patients.

One way to achieve this is through the use of Model Cards, which provide standardized information about an AI model’s:

  • Intended use and users

  • Training data characteristics

  • Performance metrics

  • Limitations and risks

Model Cards offer a clear and consistent way to communicate what an AI model does, its strengths and limitations, and its appropriate use cases. While not mandatory, they align with AI ethics principles like accountability and fairness, and they give regulators and users a compact summary of an AI-enabled device’s key characteristics, performance, and constraints.
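
As an illustration, here is what a minimal model card might look like serialized as a Python dictionary (YAML or JSON would work just as well). Every value below is hypothetical:

```python
model_card = {
    "model_name": "retina-screen",   # hypothetical device software function
    "intended_use": "Adjunct screening for diabetic retinopathy in adults",
    "intended_users": ["ophthalmologists", "primary care clinicians"],
    "training_data": {
        "sources": ["de-identified fundus images from three clinical sites"],
        "size": 85_000,
        "demographics_notes": "See bias analysis for age/sex/ethnicity breakdown",
    },
    "performance": {"sensitivity": 0.94, "specificity": 0.91, "auroc": 0.97},
    "limitations": [
        "Not validated on pediatric patients",
        "Performance degrades on low-quality images",
    ],
}
```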

Postmarket Management: Device Performance Monitoring

Work on AI software doesn’t end when the product ships. The FDA expects sponsors to have systems in place to monitor model performance in the real world. This includes:

  • Tracking for performance drift

  • Monitoring inputs and outcomes

  • Logging decisions for auditability

These activities should be risk-based and connected to a broader risk management framework (aligned with standards like ISO 14971). It’s about ensuring that software remains safe and effective as it interacts with new environments, new data, and new use cases.
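
As one illustration of drift tracking (our example, not an FDA prescription), a team might compare the distribution of a live input feature against a reference sample from validation using the Population Stability Index:

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (validation) sample and live inputs.

    Common rule of thumb: PSI < 0.1 is stable; > 0.25 warrants investigation.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


# Example: flag drift on one input feature collected in production.
reference = np.random.default_rng(0).normal(0.0, 1.0, 5_000)
live = np.random.default_rng(1).normal(0.3, 1.2, 5_000)  # shifted distribution
if population_stability_index(reference, live) > 0.25:
    print("Drift detected: trigger risk review per the monitoring plan")
```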

Cybersecurity and AI-specific Threats

This FDA draft guidance also touches on cybersecurity, which is especially important for connected devices using AI. Risks include:

  • Data Poisoning: Attackers may inject falsified or manipulated data into training datasets, potentially compromising critical outcomes such as medical diagnoses.

  • Model Inversion and Stealing: Adversaries might use crafted inputs to extract sensitive training data or replicate proprietary models, risking privacy breaches and intellectual property theft.

  • Model Evasion: Inputs designed to mislead AI models can result in incorrect outputs or classifications, threatening the reliability of the device.

  • Data Leakage: Weaknesses in security architecture can allow unauthorized access to sensitive training or inference data.

  • Overfitting: Deliberate overfitting of models can make them brittle and prone to failure in real-world conditions, opening doors for adversarial exploitation.

  • Model Bias: Manipulated or biased training data may embed systemic inequities in the model’s behavior. Attackers could exploit these biases or introduce new ones through backdoor attacks or skewed fine-tuning processes.

  • Performance Drift: Gradual changes to the data distribution—whether intentional or environmental—can degrade model performance over time, increasing the risk of inaccurate predictions and vulnerability to attacks.

To manage these vulnerabilities, developers are expected to incorporate AI-specific threat modeling, cybersecurity risk assessments, and appropriate mitigation strategies. The FDA reinforces that these expectations are consistent with the 2023 Premarket Cybersecurity Guidance, which outlines requirements such as fuzz testing, penetration testing, and safeguards against data leakage through access controls, encryption, and data anonymization.
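
Mitigations will vary by product, but as one small example of tamper detection: a cryptographic manifest of training data hashes can help surface the kind of dataset manipulation involved in data poisoning. The function names below are our own sketch, not a prescribed control:

```python
import hashlib
import json
from pathlib import Path


def fingerprint_dataset(data_dir: Path) -> dict[str, str]:
    """SHA-256 of every file in the training set: a tamper-evident manifest."""
    return {
        str(p.relative_to(data_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(data_dir.rglob("*"))
        if p.is_file()
    }


def verify_dataset(data_dir: Path, manifest_path: Path) -> bool:
    """Re-hash the dataset and compare against the manifest saved at training time."""
    recorded = json.loads(manifest_path.read_text())
    return fingerprint_dataset(data_dir) == recorded
```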

The Role of Early Engagement

As with many novel technologies, the FDA encourages developers to use the Q-Submission Program to initiate early discussions. This can help:

  • Clarify regulatory expectations

  • Discuss the appropriateness of a PCCP

  • Align on validation strategies or risk frameworks

Early engagement is especially important when you’re dealing with novel data sources, learning models, or new approaches to transparency. It can save time and reduce surprises later in the process.

Implications for Digital Health Product Teams

For startups and innovators building AI-powered health products, this guidance matters. It shows that the FDA is embracing adaptive, intelligent software—but also setting boundaries to ensure patient safety.

It’s not enough to "move fast and break things" in healthcare. Software teams must build with traceability, documentation, and lifecycle risk in mind. That doesn’t mean slowing down—it means building smarter from the start.

Some practical considerations:

  • Document your data sources early

  • Plan for post-launch performance monitoring

  • Use version control and traceability tools (see the logging sketch after this list)

  • Build Model Cards, explainability information, and visualizations into your UI/UX from day one

  • Work with teams that understand both agility and compliance
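
As a sketch of the traceability point (the identifiers and file names here are hypothetical), each inference can be logged with the model version and input provenance so that every output remains auditable later:

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("inference_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("inference_audit.jsonl"))


def log_inference(model_version: str, input_id: str, output: dict) -> None:
    """Append one traceable, audit-ready record per model decision."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the output to a released model
        "input_id": input_id,            # ties the output to a stored input
        "output": output,
    }))


log_inference("1.2.0", "study-2025-00042", {"finding": "referable", "score": 0.93})
```

Stored with appropriate access controls and retention policies, records like these double as audit evidence.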

How We Approach This at Hattrick

At Hattrick, we build high-quality software for the healthcare and medical device industries, with a deep awareness of the regulatory landscape. We follow best practices from:

  • IEC 62304 (software lifecycle processes)

  • ISO 14971 (risk management)

  • AAMI TIR45 (agile practices in regulated environments)

Our development process is agile, but never careless. We focus on:

  • Living documentation

  • Built-in traceability

  • Modular architectures ready for change control

  • Continuous testing and risk-based QA

If you're developing an AI-enabled device, we can help you build the foundation for a safe, scalable, and submission-ready product.

Final Thoughts

The FDA’s draft guidance on AI-enabled device software functions is a thoughtful, forward-looking document. It reflects the agency’s recognition that AI is different—but manageable—within a regulated framework.

For developers and innovators, the key takeaway is this: compliance and innovation are not mutually exclusive. With the right development practices, you can build products that are both agile and auditable, adaptive and trustworthy.

If you'd like to read the full guidance, you can find it here:
🔗 FDA Draft Guidance

Interested in partnering with a development team that understands the regulatory landscape?
Let’s talk: hello@hattrick-it.com