Executive Order (EO) 13960

Mapping Executive Order (EO) 13960’s requirements to the HAIP Guides.

| EO 13960 Section 3 | Requirement | HAIP Guides |
| --- | --- | --- |
| Lawful (3a) | Respect for national values, consistent with the Constitution and all other applicable laws and policies | Understand existing regulatory schemes applicable to AI in healthcare; Assess legal risks; Monitor AI for continual regulatory compliance; Monitor changes in regulations to ensure continual compliance; Consider regulatory requirements in expanded settings |
| Performance-driven (3b) | Benefits outweigh the risks and can be assessed and managed | Create an action plan for managing and mitigating risk; Define performance targets; Identify and mitigate risks before deployment; Monitor potential risks after deployment |
| Accurate, reliable, and effective (3c) | AI is consistent with the use cases for which it was trained | Local validation; Sustain improved outcomes; Evaluate expansion to new settings |
| Safe, secure, and resilient (3d) | Protect against systematic vulnerabilities, adversarial manipulation, and malicious exploitation | Prevent inappropriate use of the AI product; Minimize disruptions from decommissioning an AI product |
| Understandable (3e) | Outcomes and operations are sufficiently understandable by subject matter experts, users, and others, as appropriate | Disseminate information to clinician end-users |
| Responsible and traceable (3f) | Clearly defined human roles and responsibilities | Define the role of AI; Define successful use; Manage changes to the work environment |
| Monitored (3g) | Regularly tested against required principles, with mechanisms to deactivate applications that are inconsistent with the intended use | Monitor AI performance; Monitor the work environment; Determine if updating or decommissioning is necessary; Minimize disruptions from decommissioning |
| Transparent (3h) | Disclose relevant information to appropriate stakeholders | Share performance metrics with all stakeholders to ensure transparent communication; Disseminate information to clinician end-users; Disseminate information about updates to clinician end-users |
| Accountable (3i) | Appropriate safeguards for the proper use and functioning of AI, with monitoring, auditing, and documentation of compliance | Audit AI solutions and the work environment; Communication between stakeholders as an accountability check |