
Assess legal risks 

Why does this matter?

  • To assess risk in a changing legal landscape. Various laws and regulations apply to developing and using medical AI products, and the regulatory landscape for AI in healthcare is constantly evolving. Staying up-to-date with these changes and ensuring compliance with applicable laws and regulations is crucial to mitigating legal risk.
  • To ensure vendors meet legal standards. The potential for vendor non-compliance with applicable rules makes assessing the legal risk associated with AI products challenging and calls for expert consideration.
  • To avoid undesirable downstream outcomes. Besides causing financial and reputational harm, lawsuits and regulatory enforcement resulting from non-compliance with applicable rules can also erode the trust that users of AI products (healthcare professionals and patients) are beginning to build.

How to do this? 

Step 1: Identify relevant laws and regulations

  • Work with legal advisors to assess AI products of interest. These advisors may be in-house or external, and more than one will usually be needed because the legal issues span a number of specialist areas.
  • Identify the relevant laws and regulations that apply to the AI product’s use by the organization.1 For example, is the AI product regulated as a medical device (an instrument, apparatus, machine, or software used for medical purposes, such as diagnosis, treatment, monitoring, or prevention of disease) under the US Federal Food, Drug, and Cosmetic Act? Does it fall under an exclusion, such as FDA’s criteria for clinical decision support? Is it subject to enforcement discretion, or could the product be outside FDA’s purview for practice-of-medicine reasons?
  • Evaluate the AI product and vendor’s ability to comply with applicable laws and regulations.
  • Assess the organization’s readiness to comply with the rules.
  • Ensure thorough due diligence regarding compliance with all legal and regulatory requirements; gaps in compliance can still pose a risk even if all parties appear well-equipped to comply.

Step 2: Assess areas of potential legal risk associated with use of the AI product

  • Understand the AI product, including how it was developed, how it works, and the scope of its intended use.
  • Feed any issues identified in the AI product’s preliminary quality assessment, including the determination of which regulations apply, into the risk assessment.
  • Evaluate the potential for patient harm the AI product introduces and the associated legal risks. Common harms for diagnostic aid AI products include false negatives (potentially resulting in delayed care) and false positives (potentially resulting in unnecessary further testing); a simple way to quantify these precursors is sketched after this list. More commonly, AI products carry a risk of privacy breaches involving the sensitive health data collected, transferred, and stored, which can be anticipated and mitigated.
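Where it is useful to make these harm precursors concrete for the risk rating in Step 3, they can be quantified from a retrospective audit sample. The sketch below is illustrative only: the counts are hypothetical placeholders and the error-rate definitions are standard metrics, not figures from this guide.

```python
# Illustrative sketch: quantifying diagnostic-error precursors of patient harm
# from a hypothetical retrospective audit sample. All counts are placeholders.

def error_rates(true_pos: int, false_pos: int, true_neg: int, false_neg: int):
    """Return (false negative rate, false positive rate).

    False negatives can translate into delayed care; false positives can
    translate into unnecessary further testing.
    """
    fnr = false_neg / (false_neg + true_pos) if (false_neg + true_pos) else 0.0
    fpr = false_pos / (false_pos + true_neg) if (false_pos + true_neg) else 0.0
    return fnr, fpr

# Hypothetical audit counts for a diagnostic aid AI product.
fnr, fpr = error_rates(true_pos=180, false_pos=40, true_neg=750, false_neg=30)
print(f"False negative rate: {fnr:.1%}, false positive rate: {fpr:.1%}")
```

Rates like these can feed the likelihood component of the risk rating described in Step 3.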

Step 3: Create an action plan for managing and mitigating risk

  • Create an action plan for managing and mitigating each risk identified.
  • Consider how precursors of patient harm might be measured and how effective monitoring and reporting systems based on those precursors might be implemented to mitigate the risks.
  • Determine appropriate and effective risk mitigators, such as enhanced explainability, training, labeling, retrospective audits, or design changes.
  • Use a written risk assessment as a helpful tool to record each risk and its action plan. This can be as simple as a table with a column describing the identified risk, a column for a risk rating (e.g. “low”, “medium”, “high”) reflecting the likelihood and severity of the risk, a column for risk mitigators/controls, and a notes column for any additional information (see the illustrative entries below).
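As a minimal, hypothetical illustration of such a written risk assessment, the entries below use the four columns described above; the specific risks, ratings, and mitigators are placeholders, not recommendations.

```python
# Hypothetical entries in a written risk assessment, using the four columns
# described above (risk, rating, mitigators/controls, notes).
risk_register = [
    {
        "risk": "False negatives lead to delayed follow-up care",
        "rating": "high",  # reflects likelihood and severity
        "mitigators": "Clinician-in-the-loop review; retrospective audits",
        "notes": "Re-rate after the first audit cycle",
    },
    {
        "risk": "Privacy breach of collected or transferred health data",
        "rating": "medium",
        "mitigators": "Encryption in transit and at rest; access logging",
        "notes": "Confirm the vendor's breach-notification obligations",
    },
]

for entry in risk_register:
    print(f"[{entry['rating'].upper()}] {entry['risk']} -> {entry['mitigators']}")
```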

“Well, if sensitive health data gets out there and it was covered by HIPAA, potentially it’s $1,000 per record. So…, you can kind of come to a rough number. You can understand there are general metrics out there about, if you have a privacy breach, how much does that cost in the US?”

Legal Key Informant
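As the informant suggests, a rough order-of-magnitude exposure estimate can be built from the number of records at risk. The sketch below simply echoes the per-record figure in the quote; it is illustrative, not an authoritative HIPAA penalty amount, and actual breach costs vary widely.

```python
# Rough, illustrative exposure estimate for a hypothetical privacy breach.
# The per-record cost echoes the quote above and is NOT an authoritative
# HIPAA penalty figure; actual penalties and breach costs vary widely.
records_affected = 5_000      # hypothetical number of records exposed
cost_per_record_usd = 1_000   # illustrative per-record figure from the quote
estimated_exposure = records_affected * cost_per_record_usd
print(f"Rough exposure estimate: ${estimated_exposure:,}")  # $5,000,000
```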

Step 4: Determine responsibility for risk and mitigation

  • Between the healthcare organization implementing the AI product and the vendor, consider who should be responsible for the risks and for implementing mitigations. This may depend on factors such as:
    • the vendor’s level of involvement in the design, development, and implementation of the AI product, 
    • the degree of customization or adaptation of the AI product to the organization’s needs, 
    • the degree of control each party has over the AI product once deployed, 
    • the level of expertise of each party, and
    • the level of explainability and autonomy of the AI product.
  • Determine key liabilities and responsibilities up-front to avoid surprises and make contract negotiations more efficient.

Step 5: Feed into the cross-functional team

  • Make the legal assessment findings available to the wider project team so that an informed decision on whether or not to proceed can be made. Roles that should be represented on the team are similar to those in the guide to assess quality of external AI product options.

References

  1. These may include medical product laws and regulations, such as the US Federal Food, Drug, and Cosmetic Act and its implementing regulations, and privacy laws, such as the Health Insurance Portability and Accountability Act (HIPAA), the Health Information Technology for Economic and Clinical Health Act (HITECH), and the California Confidentiality of Medical Information Act (CMIA). In the future, AI-specific laws are also likely to apply.
