Prevent inappropriate use of AI
Why does this matter?
Medical AI products are designed to support healthcare professionals in their provision of care within a specific context of use. However, if these products are used improperly, they can impede care, potentially cause harm to patients, result in reputational and financial loss, and erode trust in the healthcare delivery organization.
How to do this?
Step 1: Develop clear guidelines and standards
- Clearly define and document the appropriate use of an AI product (see the guide on defining the role of AI).
- Develop protocols that document how to identify and deal with inappropriate use of an AI product. This could include when to pause the use of the AI product, how to investigate the cause of the inappropriate use, and how to determine if the AI product needs to be updated or decommissioned (see the guide on determining if updating or decommissioning is necessary).
Step 2: Constrain AI product outputs to the scope of use
- Restrict the use of the AI product outside of the intended context. This can be done by:
- Limiting access to dashboards and visualizations to intended users. Tightly control the process by which new users are added.
- Restricting the communication channel through which AI product outputs are shared. In general, avoid writing model outputs to tables or data sources that are broadly available within the organization.
- Only generating AI product outputs for patients within the target population. This means preventing the product from running on patients outside the scope of use.
- Making AI product labels and documentation easily accessible at all points where frontline workers or leaders interact with AI product outputs. This includes all written material that may be inadvertently shared outside the scope of intended use.
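The output constraints above can be sketched as a gate placed in front of the model call, enforcing both the user allow-list and the target population. The inclusion criteria, user names, and function names below are illustrative assumptions, not part of any specific product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    age: int
    unit: str

# Assumed scope-of-use criteria; real values come from the product label.
TARGET_UNITS = {"ICU", "ED"}
MIN_AGE, MAX_AGE = 18, 89

# Tightly controlled allow-list of intended users (hypothetical names).
AUTHORIZED_USERS = {"dr.lee", "rn.patel"}

def in_scope(patient: Patient) -> bool:
    """True only if the patient falls within the labeled target population."""
    return patient.unit in TARGET_UNITS and MIN_AGE <= patient.age <= MAX_AGE

def run_model(patient: Patient) -> float:
    return 0.42  # stub standing in for the vendor's inference call

def get_risk_score(user: str, patient: Patient) -> Optional[float]:
    """Gate model output behind both user authorization and patient scope."""
    if user not in AUTHORIZED_USERS:
        raise PermissionError(f"{user} is not an intended user of this product")
    if not in_scope(patient):
        return None  # refuse to generate output outside the target population
    return run_model(patient)
```

Returning no output at all for out-of-scope patients, rather than an output with a warning, removes the temptation to act on a score the product was never validated to produce.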
Step 3: Establish a robust training framework and schedule
- Train users on how to use AI products safely and effectively. This may include:
- The proper scope of use of each AI tool
- How to interpret AI-generated recommendations
- How to integrate AI into clinical workflows
- How to manage AI-related risks
- Host regular refresher training sessions to continually educate and maintain awareness about proper AI product use
“Education across the continuum is going to be important because I think the more dialogue that’s being had…that’s going to be helpful. Ultimately, I think it’s going to be grounding things in the real world as applicable and relevant to clinicians if they’re going to effectively use this technology. I think that they’re going to have to be able to appreciate real time how this technology is going to impact their clinical decision making”
– Clinical Leader
Step 4: Consider phased rollout of an AI product
- The incremental rollout of an AI product can improve product integration into existing work.
- Manage changes in the work environment caused by AI product rollout (see the guide on managing changes to the work environment).
- Collect feedback from initial phases of rollout before wide-scale rollout.
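One common way to implement an incremental rollout is a deterministic percentage gate that enables the product for a growing fraction of encounters, expanding only after feedback from each phase. The phase fractions and hashing scheme below are illustrative assumptions:

```python
import hashlib

# Assumed phase fractions: pilot, expansion, full rollout.
ROLLOUT_PHASES = [0.05, 0.25, 1.0]

def rollout_bucket(encounter_id: str) -> float:
    """Deterministically map an encounter identifier into [0, 1)."""
    digest = hashlib.sha256(encounter_id.encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000

def product_enabled(encounter_id: str, phase: int) -> bool:
    """Enable the AI product only for the fraction covered by the current phase."""
    return rollout_bucket(encounter_id) < ROLLOUT_PHASES[phase]
```

Because the bucket is derived from the identifier itself, any encounter enabled in an early phase remains enabled in every later phase, which keeps behavior stable for users as the rollout widens.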
Step 5: Monitor and evaluate AI products in clinical settings
- Monitor and audit the AI product to ensure appropriate use within its defined scope (see the guides under the ‘monitor AI solution’ phase and the guide on how to define the scope of use).
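A minimal audit over a usage log can quantify how often the product is invoked outside its defined scope. The log record shape and the alert threshold below are assumptions for illustration; a real threshold would be set by the organization's governance process:

```python
from collections import Counter

# Assumed alert threshold for the share of out-of-scope requests.
OUT_OF_SCOPE_ALERT_THRESHOLD = 0.02

def audit_usage(log: list) -> dict:
    """Summarize a log of {'user': ..., 'in_scope': bool} request records."""
    counts = Counter(rec["in_scope"] for rec in log)
    total = sum(counts.values())
    rate = counts[False] / total if total else 0.0
    return {
        "total_requests": total,
        "out_of_scope_rate": rate,
        "alert": rate > OUT_OF_SCOPE_ALERT_THRESHOLD,
    }
```

An alert from a check like this would trigger the protocols from Step 1: pausing use, investigating the cause, and deciding whether the product needs to be updated or decommissioned.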
“Say we’re bringing a new robot on board, we bring the team members that would be there for the first case– physicians, anesthesia– we would do a mock run of bringing the patient in the room, where we would position the table, how we would set up the robot, how we would set up the instrumentation, and then go through that and look at any gaps, any opportunities at that point, and then put the project plan in place to say, here’s the guidelines… Here’s how to set up the room. Here’s who was present. Here’s where we’ll station the robot. Here’s what was at the station, and make sure that all the key stakeholders are involved”
– Operational Leader