Identify potential downstream impacts
Why does this matter?
Using AI products in healthcare involves coordination among numerous stakeholders and workflows (the series of tasks care providers perform to deliver care in any healthcare delivery setting; refer to the topic guide on identifying problems across the organization for more details), and each stage of the lifecycle affects different stakeholders. Considering contingencies and affected parties early in the innovation process can help healthcare delivery organizations identify and mitigate unintended consequences. Failing to consider affected parties and downstream consequences can harm patient care and health equity (the fair distribution of the resources and opportunities that contribute to advancing health outcomes, according to individual or group needs and circumstances rather than treating everyone the same; health equity recognizes that health disparities among populations often result from social and economic factors, and seeks to address them with attention to those in disadvantaged positions).
How to do this?
Step 1: Create process maps to help identify the affected parties
- Process maps detail what the tool will do, how it will be integrated into existing practices, and whom it will affect.
- Be vigilant about affected parties that may be easily neglected. Make sure to involve individuals familiar with challenges faced by historically marginalized populations in the process of mapping.
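The mapping exercise in Step 1 can also be kept in a machine-readable form so that gaps are easy to spot. A minimal sketch in Python (all stage and stakeholder names are hypothetical examples, not a prescribed taxonomy) that flags workflow stages where no affected parties have been identified yet:

```python
# Minimal process-map sketch: each workflow stage lists the parties it affects.
# All stage and stakeholder names below are hypothetical examples.
process_map = {
    "data collection": ["patients", "nurses", "data engineers"],
    "model inference": ["clinicians", "IT personnel"],
    "result communication": ["patients", "clinicians"],
    "billing": [],  # no affected parties identified yet -- a gap to review
}

def unmapped_stages(pmap):
    """Return workflow stages where no affected parties have been identified."""
    return [stage for stage, parties in pmap.items() if not parties]

for stage in unmapped_stages(process_map):
    print("Needs stakeholder review:", stage)
```

Even a simple structure like this makes it harder for easily neglected parties to fall through the cracks, because an empty entry is visible rather than implicit.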
Step 2: Consult with the affected parties to identify potential downstream impacts in each step of the project lifecycle
- Conduct interviews or focus groups with affected parties to discuss the potential concerns and consequences of using an AI product.
“There’s a missing middle. Patients or laypersons won’t have the background to anticipate all of the things that might happen… Clinicians won’t [have the background to] know the technology. Then the technologists often are not trained to think about societal consequences, or even about failure points…there needs to be some in between group that’s involved” (Ethics Expert)
Step 3: Research how other health systems or industries have addressed similar concerns
- Contact researchers, advocacy groups, or vendors to learn about similar projects to identify potential risks.
|Affected parties|Important things to consult|
|---|---|
|(e.g., leadership involved in funding, managing insurance reimbursements, and administration)| |
|Clinical stakeholders who will interact with the AI product in their workflow (e.g., end users)|Establish contingency plans in case of unexpected events or AI failure.|
|(e.g., IT personnel, data scientists, and data engineers)| |
|Patient communities who are being served, especially communities that are historically marginalized| |
“The one framework we have used… it’s called a sociotechnical framework…it’s a useful, very quick, how does your technology help your task and how does that link to people and process” (Clinical and Operational Leader)
Step 4: Systematize and standardize the process
- Create assessment rubrics and checklists to keep track of assumptions, considerations, and stakeholder input.
- Create staff positions or review committees to evaluate AI products, ideas, and potential impacts.
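The rubrics and checklists in Step 4 can start as something very simple and still be systematic. A minimal sketch (the criteria are hypothetical examples, not a complete or endorsed rubric) of a checklist that tracks which review items remain outstanding for an AI product:

```python
# Minimal assessment-checklist sketch for tracking review items per AI product.
# The criteria below are hypothetical examples, not a complete or endorsed rubric.
checklist = [
    {"item": "Process map covers all workflow stages", "done": True},
    {"item": "Marginalized-community representatives consulted", "done": False},
    {"item": "Contingency plan documented for AI failure", "done": False},
]

def outstanding(items):
    """Return checklist items that still need attention."""
    return [entry["item"] for entry in items if not entry["done"]]

for item in outstanding(checklist):
    print("TODO:", item)
```

A standing review committee can then require that the outstanding list be empty before an AI product advances to the next lifecycle stage.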