Monitor work environment
Why does this matter?
No AI product works in isolation; it becomes entwined with the healthcare professionals, practices, tools, and systems in which it is placed.
Conventional evaluations of AI product performance focus on the tool in isolation, using past data for which the outcomes are already known. Conventional approaches also tend to report averages, which can conceal systematic errors affecting particular demographic groups.
Once clinical integration has taken place, AI products change the work environment in which they are placed and are in turn affected by changes in that same environment. These effects accumulate over time, so the work environment must be monitored in addition to the AI product itself. To ensure the AI product is working as intended, it is important to have processes in place to monitor whether the product is being used, to observe how it is used in context, to measure its effect on desired outcomes, and to track how it is perceived by stakeholders, including end users.
How to do this?
Step 1: Be mindful of turnover and training needs in all staff groups
- Depending on the AI product and its position within workflows, there may be a large workforce of end users requiring continuous training.
- For AI products built in-house, there will also be a small number of individuals with a deep technical understanding of the product. Interpreting and addressing issues identified through monitoring and audits depends upon access to these individuals. Processes must be in place to preserve institutional knowledge of the AI product in the event they leave their role.
- For AI products built by external vendors, there must also be a consistent point of contact able to provide technical support. Given the dynamic market of AI products (e.g., acquisitions, company closures), there should also be contingencies in place with the external vendor in the event the product is no longer supported.
“The problem is, how do you keep that up? How do you continuously maintain [the system] given turnover, especially because it is, not just nursing staff, but clinician staff turnover. We’re losing four or five of our interventional cardiology attendings this year. We have new fellows every three months coming in. So how do you maintain that?”
Clinician
Step 2: Monitor levels of usage and off-label usage of the product
- Monitor the extent to which products are used.
- If the product is not effectively adopted, seek to understand why. Speak directly with frontline clinicians and business unit managers.
- Make proactive decisions about how to address barriers to the use of the AI product. To learn more about addressing potential barriers, see this article.
- Maintaining a tool requires resources. If end users are not using a tool, these resources may be better committed elsewhere.
- Monitor off-label use of the AI product. While technology users often discover new, valuable applications of a tool, this must be carefully assessed in a healthcare setting. Proactively monitor for adjacent use cases identified earlier. If off-label use is detected, meet with clinicians and technical experts to discuss the potential expansion of the scope of use. If the intended use is not expanded, decommission the use of the tool beyond the intended context.
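As a minimal sketch of the usage monitoring described above, adoption can be tracked from interaction logs. The record format, user IDs, and action names below are hypothetical, not drawn from any specific EHR or vendor logging schema:

```python
from collections import Counter
from datetime import date

# Hypothetical interaction logs: (user_id, date, action).
# All identifiers and action names are illustrative only.
logs = [
    ("clin_01", date(2024, 3, 1), "opened_ai_summary"),
    ("clin_01", date(2024, 3, 2), "opened_ai_summary"),
    ("clin_02", date(2024, 3, 1), "dismissed_ai_summary"),
]

# Assumed roster of clinicians expected to use the product.
eligible_users = {"clin_01", "clin_02", "clin_03", "clin_04"}

def adoption_rate(logs, eligible_users):
    """Share of eligible end users with at least one logged interaction."""
    active = {user for user, _, _ in logs}
    return len(active & eligible_users) / len(eligible_users)

def uses_per_user(logs):
    """Interaction counts per user, to spot heavy users and non-users."""
    return Counter(user for user, _, _ in logs)

print(f"Adoption rate: {adoption_rate(logs, eligible_users):.0%}")
```

Non-users surfaced by `uses_per_user` are natural candidates for the direct conversations with frontline clinicians suggested above; log analysis shows *that* a tool is unused, not *why*.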
Step 3: Take a mixed-methods approach to engage with end users
- Seek periodic feedback from end users about their use of the AI product and where they perceive its value. This can be helpful in understanding the non-use of an AI product as well as identifying off-label use.
- To gain insight into users’ perceptions, useful methods include semi-structured interviews, focus groups, and surveys.
- To understand their actual practices more directly, consider log data analysis, in situ observation, or think-aloud walkthroughs.
- Qualitative data can be just as illuminating as quantitative data, and together the two provide a richer picture of the tool in use.
- Periodic education and testing of end-user competency for using the product can also be helpful.
Step 4: Remain vigilant to sources of bias from data shifts and product use
- Measures of AI product performance that produce overall averages for an entire patient population may conceal systematic biases in who benefits most or least from the tool.
- Break measured outcomes down by key demographic groups in the target patient population so that systematic patterns of miscategorization or maldistribution of resources can be detected and minimized. Pay particular attention to historically marginalized populations that face the greatest barriers to receiving high-quality care in the context of use.
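The subgroup breakdown described above can be sketched as follows. The group labels, predictions, and outcomes are fabricated for illustration; in practice these would come from the product's monitored predictions linked to observed outcomes:

```python
from collections import defaultdict

# Hypothetical labelled outcomes: (group, predicted_positive, actual_positive).
# Group names and values are illustrative only.
records = [
    ("group_a", True, True), ("group_a", False, True),
    ("group_a", False, False), ("group_a", True, False),
    ("group_b", True, True), ("group_b", True, True),
    ("group_b", False, False), ("group_b", False, True),
]

def sensitivity_by_group(records):
    """True-positive rate per demographic group.

    An overall average can look acceptable while one group's
    sensitivity lags well behind another's.
    """
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for group, predicted, actual in records:
        if actual:
            pos[group] += 1
            if predicted:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}
```

In this toy data the pooled sensitivity is 3/5, which masks the gap between group_a (1/2) and group_b (2/3); the same breakdown applies to any monitored metric, not just sensitivity.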
Step 5: Capture data that meaningfully represent the work environment
- Capture robust baseline data prior to the introduction of the AI tool to allow pre- and post-intervention comparisons.
- Monitor a range of outcomes influenced by the AI product, not simply its technical performance or accuracy.
- Depending on the use case, this may include descriptive statistics such as referral rates, interventions made per clinician, readmission rates, etc.