Monitor AI performance 

Why does this matter?

Regularly monitoring the performance of an AI tool is critical for patient safety and quality of care. Healthcare providers should establish a monitoring plan that ensures an AI solution remains effective and meets all the applicable regulatory, liability, and organizational objectives regardless of whether it is internally built or externally procured. Monitoring performance is one aspect of the overall monitoring that is expected to occur when operationalizing an AI tool in the clinic (see how to monitor work environment).

Monitoring an AI solution's performance is necessary to: 

Ensure the AI tool remains effective: Healthcare delivery organizations and patient populations are always changing, and AI solutions that were once effective may become outdated or irrelevant. By monitoring the AI tool’s performance over time, healthcare providers can ensure that it remains effective and adapts to changes in healthcare.  

Identify opportunities to increase efficiencies: Monitoring performance can help improve patient outcomes and identify opportunities to streamline healthcare processes.

How to do this? 

Here is how to monitor AI tools’ performance in healthcare.

Step 1: Develop a monitoring plan

  • Develop a sustainable plan outlining the frequency and methods of evaluating the AI tool’s performance.
  • Include routine checks on data quality, algorithm performance, user satisfaction, and any potential biases or ethical concerns. 
  • Define pre-specified objectives and desired outcomes, taking into account known risks of the specific AI tool, your users and conditions for use, and the context for which you are applying the AI tool.
  • Identify relevant performance metrics to track and assess the tool’s effectiveness in relation to organizational objectives and desired outcomes over time. 
  • For an example of components to include in a monitoring plan, see Appendix A of the FDA guidance for predetermined change control plans.
  • Additional note: Many elements, including performance thresholds, should have been identified when first developing measures of success for the AI product integration (see how to define performance targets and identify and mitigate risks). The plan should sustain any improved outcomes achieved through the use of the AI tool, including the impacted work environment.
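The pre-specified objectives, metrics, and thresholds in the plan can also be captured in machine-readable form so that breaches are caught consistently at each review. Below is a minimal sketch in Python; the metric names, thresholds, and structure are illustrative assumptions, not recommendations for any specific AI tool:

```python
from dataclasses import dataclass

@dataclass
class MetricCheck:
    """One pre-specified performance metric and its acceptance threshold."""
    name: str
    threshold: float
    higher_is_better: bool = True

# Hypothetical monitoring plan: names and thresholds are illustrative only
# and would come from the measures of success defined during integration.
MONITORING_PLAN = [
    MetricCheck("sensitivity", 0.90),
    MetricCheck("positive_predictive_value", 0.70),
    MetricCheck("mean_inference_latency_s", 2.0, higher_is_better=False),
]

def evaluate(observed: dict) -> list:
    """Return the names of metrics that breached their pre-specified threshold."""
    breaches = []
    for check in MONITORING_PLAN:
        value = observed[check.name]
        ok = (value >= check.threshold) if check.higher_is_better else (value <= check.threshold)
        if not ok:
            breaches.append(check.name)
    return breaches
```

Encoding the plan this way makes the review cadence auditable: each scheduled evaluation produces an explicit list of breached metrics (possibly empty) that the monitoring team can act on and archive.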

Step 2: Assign a multidisciplinary team

  • Assign a multidisciplinary team to monitor AI products based on the circumstances and assigned roles and responsibilities (see how to define AI product specification under the procurement phase). The level of expertise represented on the team and the frequency of monitoring activities should be risk-based, with more intense oversight for higher-risk uses of AI. At minimum, the team should include a clinical champion deeply familiar with the context of use; a data scientist or statistician familiar with the AI product; and an administrative leader responsible for quality of care within the implementation context.
  • For a developer, this may include responsible clinical and technical expertise (including clinicians, statisticians, data scientists, engineers, and administrators) with clear roles. 
  • For a user, monitoring may still necessitate the availability of expertise but for the purpose of assessing local performance and/or providing feedback to the developer even if they are only required in particular circumstances. 
  • Encourage users and impacted parties to provide feedback and become part of the broader monitoring team, which can be trained to gather metrics and escalate concerns for review. For example, see FDA's medical device reporting processes.

“In addition to making sure that it is keeping the lights on and still working, as with AI and ML, [the product] is going to change over time. So we’re gonna have to keep revalidating that the model [to make sure it] is not drifting. I think we’re gonna get the data scientists involved.”

Technical Leader
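One concrete way to "keep revalidating the model," as the quote suggests, is to compare the recent distribution of a model input or output score against a baseline captured at deployment. The sketch below computes the Population Stability Index (PSI), a common drift heuristic; the ~0.2 alert level mentioned in the docstring is a widely used rule of thumb, not a regulatory standard, and the binning scheme here is a simplifying assumption:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample ("expected")
    and a recent sample ("actual") of a model input or score.
    Values above ~0.2 are commonly treated as a sign of meaningful drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bin_fractions(data):
        counts = [0] * bins
        for x in data:
            i = min(int((x - lo) / width), bins - 1)  # clip into the last bin
            counts[max(i, 0)] += 1
        # Smooth empty bins so the log ratio stays finite.
        return [(c if c else 0.5) / len(data) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice a data scientist would run a check like this on a schedule defined in the monitoring plan, using baseline data frozen at validation time, and escalate to the full team when the index crosses the agreed alert level.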

Step 3: Review guidelines and regulatory changes

  • Data monitoring techniques evolve. Periodically review current methods, the latest research, guidelines, best practices, and regulatory changes related to AI in healthcare so that teams can make informed decisions as the field advances.
  • Look to regulators to help determine state-of-the-art best practices. For example, in Appendix A of FDA's draft guidance on predetermined change control plans (PCCPs) for AI/ML, FDA recommends specific considerations for data management practices, model re-training, performance evaluation, and update procedures.

Step 4: Execute the monitoring process

  • Executing the monitoring process will enable the continuous collection of feedback from all impacted parties, including physicians, patients, and operational staff using the AI tool.
  • Create processes, including roles, responsibilities, tracking methods, and templates/forms for handling/triaging feedback.
  • Create methods to gather continuous feedback from healthcare professionals and patients using the AI tool to identify areas for improvement and gain insight into potential issues or challenges. Examples of such methods include providing clear contact information, creating surveys, and building feedback features into the product/IT system.
  • Include regular audits in the process. Internal audits of the AI tool's performance, data handling practices, and compliance with healthcare regulations, such as FDA's Quality System Regulation in 21 CFR 820, will help identify potential areas for improvement and prevent issues before they become critical.
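Feedback handling is easier to sustain when each report is captured in a structured form and routed according to a pre-agreed table of roles and responsibilities. The sketch below is hypothetical: the categories and destination roles are placeholders for whatever your organization's responsibility matrix actually defines:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical routing table: categories and roles are illustrative placeholders.
FEEDBACK_ROUTES = {
    "patient_safety": "clinical_champion",
    "performance": "data_scientist",
    "usability": "it_support",
}

@dataclass
class FeedbackItem:
    """A single structured feedback report from a user or impacted party."""
    reporter_role: str
    category: str
    description: str
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def triage(item: FeedbackItem) -> str:
    """Route feedback to the responsible role; unrecognized categories
    fall through to manual review rather than being silently dropped."""
    return FEEDBACK_ROUTES.get(item.category, "manual_review")
```

The key design choice is the fall-through to manual review: feedback that does not fit a known category is escalated rather than lost, which supports the audit trail described above.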

Step 5: Transmit monitoring outcomes

  • Regardless of whether the tool was built internally or externally procured, establish a route for transmitting monitoring outcomes to AI tool developers so they can incorporate updates.
  • Establish a route for transmitting monitoring outcomes to end-users to build trust (see how to determine if updating or decommissioning is necessary).
  • Monitoring will also surface new risks, which should be added to the product monitoring plan.
  • Consider using a system to automate this process to minimize the additional burden of data management on developers and end-users. 
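Automated transmission is simpler when monitoring outcomes are serialized in a structured, machine-readable format that both developers and end-users can consume. A minimal sketch assuming JSON as the exchange format; all field names and values here are illustrative, not a prescribed schema:

```python
import json
from datetime import datetime, timezone

def build_monitoring_report(tool_id, period, metrics, new_risks):
    """Assemble one monitoring report for transmission to the developer
    and to end-users. Field names are illustrative assumptions."""
    return {
        "tool_id": tool_id,
        "reporting_period": period,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "metrics": metrics,          # e.g. observed performance values
        "new_risks": new_risks,      # feeds back into the monitoring plan
    }

# Example usage with made-up values:
report = build_monitoring_report(
    "sepsis-alert-v2", "2024-Q1",
    {"sensitivity": 0.91}, ["alert fatigue on night shifts"],
)
payload = json.dumps(report, indent=2)  # ready to send or archive
```

A fixed schema like this lets the same report serve both audiences: developers can diff metrics across periods, and end-user summaries can be generated from the same payload without extra data entry.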
