Identify and mitigate risks
Why does this matter?
Before integrating an AI product into a clinical setting, it is important to identify predictable downstream consequences and the potential risks associated with the specific use case and integration context. Risks vary across AI products, and your organization must also prepare for novel risks. Even when risks appear intractable, acknowledging them is crucial to maintaining trust in the AI product and to identifying opportunities to mitigate persistent risks. Engineering and integration best practices are a useful starting point for considering potential risks, and communication among stakeholders is essential to surface scenarios beyond the anticipated consequences and concerns.
How to do this?
Step 1: Identify your organization’s risk management team
- Engage clinicians, AI experts, software developers, regulatory experts, end users, and patient representatives.
- To identify predictable risks, have end users work with technical staff. Frontline clinicians are closest to clinical reality and are often different individuals from those authorizing AI product use.
- Pay attention to potential conflicts of interest that may bias judgment toward positively framing the tool.
- To identify additional risks, consider mechanisms for soliciting external feedback and input.
“…The question is just when and how soon, are you able to identify it and mitigate the negative impacts?”
Community Expert
Step 2: Agree on the scope of use to be evaluated
- Consider whether there are any changes to your organization’s scope of use; even small changes can have a large impact on the risk assessment.
Step 3: Prespecify your risk management plan
- Consider pre-defining the criteria by which your organization will evaluate and mitigate risks throughout the AI product’s lifetime. This plan may include the methods used to assess risks, the acceptance criteria applied, how risk control measures will be implemented, and the monitoring activities that will ensure risks remain acceptable.
- Consider defining “risk” in your plan as a metric that multiplies the severity of a harm by the likelihood that the harm will occur, as in the sketch below.
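As a concrete illustration, here is a minimal sketch of such a metric in Python. The 1–5 ordinal scales and the acceptance threshold of 10 are assumptions chosen for illustration, not values prescribed by this guide.

```python
# Minimal sketch of the risk metric: risk = severity x likelihood.
# The 1-5 ordinal scales and the acceptance threshold are illustrative
# assumptions, not values prescribed by this guide.

def risk_score(severity: int, likelihood: int) -> int:
    """Score a harm rated on 1 (lowest) to 5 (highest) scales."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("severity and likelihood must be rated 1-5")
    return severity * likelihood

ACCEPTANCE_THRESHOLD = 10  # assumed: scores above this require mitigation

# Example: a harm rated severity 4 and likelihood 3 scores 12.
score = risk_score(severity=4, likelihood=3)
print(score, "acceptable" if score <= ACCEPTANCE_THRESHOLD else "needs mitigation")
```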
Step 4: Perform a risk analysis
- Identify potential foreseeable harms and the sources of each harm, specific to the AI product and its use case. Harms may include bias, false positive or false negative results leading to patient harm, privacy breaches, and social harm. Sources of harm may include an unrepresentative training dataset, overreliance on the AI, model drift, model overfitting, inadequate data management and security controls, and poor integration into clinical workflows.
- With the relevant internal stakeholders, discuss all scenarios in which negative consequences could occur if the intended use is not executed as planned.
- Estimate the risk of each harm in accordance with the risk plan by combining the severity of the harm and the probability that it will occur; the risk register sketched below shows one way to record these estimates.
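One way to record these estimates is a simple risk register that pairs each foreseeable harm with its source and its rated severity and likelihood. The entries and ratings in this sketch are hypothetical placeholders, not a complete or recommended analysis.

```python
# Illustrative risk register; the harms, sources, and ratings below are
# hypothetical placeholders rather than a complete or recommended analysis.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    harm: str
    source: str
    severity: int    # 1 (negligible) to 5 (catastrophic), assumed scale
    likelihood: int  # 1 (rare) to 5 (frequent), assumed scale

    @property
    def risk(self) -> int:
        # Same metric as in the risk plan: severity x likelihood.
        return self.severity * self.likelihood

register = [
    RiskEntry("Missed diagnosis from a false negative", "model drift", 5, 2),
    RiskEntry("Biased output for a patient subgroup", "unrepresentative training dataset", 4, 3),
    RiskEntry("Privacy breach", "inadequate security controls", 5, 1),
]

# List the highest-risk harms first.
for entry in sorted(register, key=lambda e: e.risk, reverse=True):
    print(f"{entry.risk:>2}  {entry.harm} (source: {entry.source})")
```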
“They need to consider where biases may arise in the specific use case. That may require a degree of evidence gathering by going into the literature looking at, you know, census data and national data on disadvantaged groups. Then, in a targeted way, looking to see whether an AI tool has been validated in those groups.”
AI Bias and Fairness Expert
Step 5: Evaluate and control risks
- Categorize identified risks by key characteristics, such as urgency, severity, and visibility, to prioritize the available options for triage and action.
- Consider and implement risk control measures to reduce the severity of harm or the likelihood of it occurring to an acceptable level. Risk control measures may include inherent safety by design, protective measures such as a quality check in the software, and providing safety information, such as warnings and training.
- For any residual risk that remains above the threshold predefined in the risk plan, consider a written justification to determine whether the benefits outweigh the risk, as illustrated in the sketch after this list.
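As a sketch of the last two bullets, the example below applies a risk control measure that lowers the estimated likelihood of a harm, then flags any residual risk still above the predefined threshold for a written benefit-risk justification. The scales, threshold, and example values are illustrative assumptions.

```python
# Sketch: apply a control measure, then flag residual risks that remain
# above the threshold predefined in the risk plan. The 1-5 scales, the
# threshold of 10, and the example values are illustrative assumptions.

RESIDUAL_THRESHOLD = 10

def residual_risk(severity: int, likelihood: int, likelihood_reduction: int) -> int:
    """Risk remaining after a control measure reduces likelihood (floor of 1)."""
    return severity * max(1, likelihood - likelihood_reduction)

# Example: a severity-4 harm rated likelihood 4; a protective measure
# (a quality check in the software) is estimated to reduce likelihood by 1.
residual = residual_risk(severity=4, likelihood=4, likelihood_reduction=1)
if residual > RESIDUAL_THRESHOLD:
    print(f"residual risk {residual}: document whether benefits outweigh the risk")
else:
    print(f"residual risk {residual}: acceptable under the risk plan")
```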
Step 6: Monitor risks throughout the lifecycle
- Specific risks, such as drift, bias, and performance degradation, should be monitored over time. See later guides for monitoring the AI product and the affected work environment.
- Establish eligibility criteria for the use of an AI product to prevent its use on subgroups where performance targets are not met. If this approach is taken, monitor care outcomes in both eligible and ineligible populations for any new or exacerbated disparities.
- Establish a priori processes for clinician end users, business unit managers, and technology leaders to submit safety concerns. Establish processes for evaluating safety concerns and suspending the tool. See the guide on determining whether updating or decommissioning is necessary for more on when to update or decommission AI products.
- Establish a priori thresholds for triggering an audit or monitoring exercise; the sketch below illustrates checking such thresholds.
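To illustrate such a priori thresholds, this minimal sketch compares observed monitoring metrics against predefined limits and flags an audit when any limit is breached. The metric names, limits, and observed values are assumptions for illustration, not recommended targets.

```python
# Minimal monitoring sketch: compare observed metrics against a priori
# thresholds and flag an audit when any is breached. The metrics, limits,
# and observed values here are illustrative assumptions.

THRESHOLDS = {
    "sensitivity_min": 0.85,      # assumed performance floor
    "subgroup_gap_max": 0.05,     # assumed limit on the sensitivity gap between subgroups
    "input_drift_psi_max": 0.20,  # assumed population stability index limit for input drift
}

def check_monitoring(observed: dict) -> list[str]:
    """Return the breached thresholds that should trigger an audit."""
    breaches = []
    if observed["sensitivity"] < THRESHOLDS["sensitivity_min"]:
        breaches.append("sensitivity below floor")
    if observed["subgroup_gap"] > THRESHOLDS["subgroup_gap_max"]:
        breaches.append("subgroup performance gap above limit")
    if observed["input_drift_psi"] > THRESHOLDS["input_drift_psi_max"]:
        breaches.append("input drift above limit")
    return breaches

# Example monthly check with hypothetical observed values.
observed = {"sensitivity": 0.82, "subgroup_gap": 0.07, "input_drift_psi": 0.12}
breaches = check_monitoring(observed)
if breaches:
    print("initiate audit:", "; ".join(breaches))
```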