Identify potential risks
Why does this matter?
Emerging risks of AI products in healthcare settings. AI products pose new risks that may fall outside the considerations healthcare organizations are primed to detect or anticipate. Many AI products are not fixed at the moment of deployment, and some may have approved predetermined change control plans (see recent FDA guidance). Some are designed to incorporate new data that changes their performance (ideally for the better) over time. As software products, they may also be updated by their developers to address bugs, add features, or change licensing terms. New risks may therefore emerge after an AI product is integrated into a healthcare setting, driven not only by changes in the environment surrounding the product but also by changes to the product's own design and performance. Effective monitoring must establish processes to identify and mitigate the emergent risks that follow from these post-integration changes.
How to do this?
Step 1: Maintain communication links with clinical champions and end users
- Establish communication links with clinical champions, end users, and other clinicians affected by the AI product.
- Periodically ask for structured feedback about their experience using the product and any concerns they have about its impact; a minimal feedback-record sketch follows this list.
- In particular, ask about concerns regarding differing impacts across patient groups.
- Refer to the guidance on how to monitor the work environment for advice on gathering feedback.
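To make the feedback loop concrete, here is a minimal sketch of how structured feedback might be recorded so that concerns, including subgroup-specific ones, accumulate in a consistent form over time. The schema, field names, and example values are hypothetical assumptions, not a format prescribed by this guide.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeedbackRecord:
    """Hypothetical structured feedback entry from an end user of an AI product."""
    respondent_role: str                   # e.g., "clinical champion", "nurse"
    collected_on: date                     # when the feedback was gathered
    product_name: str                      # which AI product the feedback concerns
    workflow_impact: str                   # free-text impact on day-to-day work
    subgroup_concerns: list[str] = field(default_factory=list)  # differing impacts across patient groups
    adverse_event_suspected: bool = False  # flag for escalation to the review committee

# Example entry; all values are illustrative.
record = FeedbackRecord(
    respondent_role="clinical champion",
    collected_on=date(2024, 3, 1),
    product_name="sepsis-risk-model",
    workflow_impact="Alerts often fire after morning rounds and are acknowledged late.",
    subgroup_concerns=["Possible higher false-negative rate for non-English-speaking patients"],
)
```

Keeping entries structured like this makes it easier to spot recurring concerns across review cycles rather than treating each round of feedback as one-off anecdotes.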
Step 2: Establish a dedicated multidisciplinary committee of stakeholders
- Establish a dedicated multidisciplinary committee of stakeholders to review the impact of the AI product periodically. In addition to the institutional expertise of hospital administrators, clinicians, and IT professionals, include patient or community representatives, social workers, and professionals with experience in ethics and bias assessments.
- Reviews should also be triggered by new software updates that might change risk factors or the expected workflow for users; a minimal trigger sketch follows this list.
- Review the AI product in the aftermath of an adverse event as well. This requires that communication channels and mechanisms be established earlier in the product lifecycle so frontline clinicians can report adverse events, near misses, and errors.
- This review step is related to the auditing process described earlier.
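As a minimal illustration of the triggers described above, the sketch below encodes the three review triggers (scheduled cadence, software update, adverse event) as a single check. The six-month cadence and the function and parameter names are assumptions for illustration, not part of any specific governance framework.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=180)  # assumed six-month review cadence

def review_due(last_review: date, today: date,
               software_updated: bool, adverse_event_reported: bool) -> bool:
    """Return True if any committee review trigger has fired.

    Triggers mirror the steps above: a reported adverse event, a software
    update that may change risks or workflow, or the scheduled periodic review.
    """
    if adverse_event_reported:
        return True
    if software_updated:
        return True
    return today - last_review >= REVIEW_INTERVAL

# Example: an update shipped since the last review, so a review is due.
print(review_due(date(2024, 1, 10), date(2024, 4, 1),
                 software_updated=True, adverse_event_reported=False))  # True
```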
“How many people need to be hit by a bus in your organization…. before the organization falls down?… Our IT organization is pretty robust, because that’s what they do. They support applications long term. But our AI scientist and data scientist groups [aren’t as robust]. In fact, the factor is one, basically, there’s one person who really has a handle on any given algorithm. And they do one algorithm. Unless you’re XXX, who’s got 15 things flying off his one algorithm, you sort of roll it out, then put it to rest, and you move on to the next one. So that is definitely a downside of bespoke development for procurement. Yeah…it’s perfectly tailored to what we do, but it’s hard to fix if the resources to fix it aren’t always there.”
Technical Expert
Step 3: Monitor AI product reporting on regulatory databases and industry press
- Identify the AI product, and others with overlapping use cases, on regulatory databases. For example, review adverse events reported through the FDA's Medical Device Reporting program, which are collected in the Manufacturer and User Facility Device Experience (MAUDE) database.
- These databases hold information about adverse events and product recalls, which can inform proactive risk mitigations. Similar insights can be gained from periodically searching the academic literature. For example, a recent review summarized the characteristics of 266 safety events reported to MAUDE. A minimal query sketch follows this list.
- External vendors of regulated AI products should do this as part of their post-market surveillance plans.
- External vendors of AI products that are not FDA-regulated devices currently have no reporting requirements. Ask these vendors to facilitate introductions to similar organizations familiar with the use of the procured AI product.
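As one concrete way to perform this check, the sketch below queries openFDA's device adverse event endpoint, which serves MAUDE reports, for a given brand name. The brand name is a placeholder, and search terms, pagination, and rate limits would need adjustment for real use.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

# Placeholder brand name; substitute the procured AI product's device name.
BRAND_NAME = "ExampleAIDevice"

# openFDA serves MAUDE adverse event reports at the device/event endpoint.
query = urllib.parse.urlencode({
    "search": f'device.brand_name:"{BRAND_NAME}"',
    "limit": "5",
})
url = f"https://api.fda.gov/device/event.json?{query}"

try:
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    for report in payload.get("results", []):
        # Print the receipt date and event type (e.g., malfunction, injury).
        print(report.get("date_received"), report.get("event_type"))
except urllib.error.HTTPError as err:
    # openFDA returns HTTP 404 when no reports match the search.
    print(f"No matching reports or request failed: {err}")
```

A scheduled run of a query like this, with results routed to the stakeholder committee, is one lightweight way to operationalize periodic database monitoring.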
Step 4: Regularly assess the availability of alternative, improved AI products
- Healthcare delivery organizations should avoid vendor lock-in throughout the AI product adoption process. Maintain communication channels with multiple vendors in a product category; potential risks or performance shortfalls of an adopted AI product may become apparent when a competing vendor releases an alternative, improved product.