Alignment with standards

Why does it matter?

How to do this?

The National Institute of Standards and Technology (NIST) AI Risk Management Framework focuses on mitigating the risks that AI poses to organizations and society. The framework aims to incorporate trustworthiness into the design, development, use, and evaluation of AI products, services, and systems. It is directed at organizations, AI developers, and users across sectors, with broad considerations that include harms to the environment, global financial systems, supply chains, and natural resources. Health AI Partnership provides additional specificity to help healthcare delivery organizations comply with the guidelines laid out in the framework.

For a crosswalk from the NIST AI RMF to ISO/IEC FDIS 23894, the OECD Recommendation on AI, Executive Order 13960, and the Blueprint for an AI Bill of Rights, please see: Crosswalks to the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0).

Table 1: Mapping NIST’s framework requirements to the HAIP Guides

The Blueprint for an AI Bill of Rights concerns protecting the rights of the American public from AI-related harm. The Bill emphasizes protecting individual rights and liberties, underlining AI tools’ impact on bias, discrimination, privacy, and access to critical services. While the Bill itself remains sector-agnostic, an accompanying document focuses on how all levels of society, from the government to companies to individuals, can adhere to American values when creating and using AI. The guidance of the Health AI Partnership complements the government’s AI Bill of Rights to protect American liberties when developing and implementing AI for healthcare.

Table 2: Mapping the AI Bill of Rights’ requirements to the HAIP Guides

Executive Order (EO) 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, aims to improve the use of reliable AI in the operations and services of the Federal Government. The order outlines principles for the use of AI in the Federal Government, establishes a common policy for implementing those principles, directs agencies to catalog their AI use cases, and calls on the General Services Administration (GSA) and the Office of Personnel Management to enhance AI implementation expertise at the agencies. The EO is directed at federal agencies and at improving the use of AI in government operations and services, some of which deal with healthcare and apply to healthcare organizations. The EO emphasizes the Nation’s constitutional values, such as privacy, civil rights, and civil liberties. Health AI Partnership builds on the values laid out in this executive order, extending them to healthcare delivery organizations in addition to those in the federal government.

Table 3: Mapping Executive Order (EO) 13960’s requirements to the HAIP Guides

The World Health Organization’s Generating Evidence for AI-Based Medical Devices: A Framework for Training, Validation, and Evaluation is targeted at developers and researchers of AI-based software as a medical device, as well as global policymakers and implementers. The document offers guidance for health systems broadly, including public health initiatives, government services, health researchers, and organizations. It is intended to guide those seeking to understand the evidence-generation requirements for these devices, from development through post-market surveillance. The work of the Health AI Partnership aligns with the World Health Organization’s focus on incorporating AI into the health sector, with particular attention to developed health systems.

Table 4: Mapping the World Health Organization’s framework requirements to the HAIP Guides