Researchers at the Icahn School of Medicine and the University of Michigan discovered that using models to adjust how care is delivered can change the baseline assumptions that the models were trained on.


The team set out to determine what happens when a machine learning model is deployed in a hospital setting with the intent of influencing physician decisions for the collective benefit of patients.


The objective was to understand the wider implications once patients are protected from adverse outcomes such as kidney damage or death. AI models can learn correlations between incoming patient data and the outcomes that follow. However, using these models to guide care inherently has the potential to alter those very associations.


As corresponding author Akhil Vaid, MD, said, “Problems arise when these altered relationships are captured back into medical records.”


The study simulated critical care scenarios at two healthcare institutions, analysing 130,000 critical care admissions across the Mount Sinai Health System in New York and Beth Israel Deaconess Medical Center in Boston. Three key scenarios were explored:


1. Model retraining after initial use

Current practice is to retrain models to counter performance degradation over time. However, the Mount Sinai study suggests that retraining can paradoxically cause further degradation by disrupting the learned relationships between patient presentation and clinical outcomes; the simulation sketch after these scenarios illustrates the feedback loop.


2. Creating a new model after one has already been in use

Using a model's predictions can spare patients from an adverse outcome such as sepsis; because sepsis can lead to death, the model effectively works to prevent both. Any new model subsequently developed to predict death will therefore be trained on these disturbed relationships. And because the exact relationships between all possible outcomes are unclear, any data from patients whose care was influenced by machine learning may be unsuitable for training future models.


3. Concurrent use of two predictive models

When two models run concurrently, each set of predictions should be based on freshly collected data, because acting on one model's output changes the population the other model sees. Co-senior author Karandeep Singh, MD, said, “Model performance can fall dramatically if patient populations change in their makeup.”


“Agreed-upon corrective measures may fall apart completely if we do not pay attention to what the models are doing—or more properly, what they are learning from,” he added.
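
To make this feedback loop concrete, below is a minimal simulation sketch in the spirit of the study's first scenario. It is illustrative only, not the study's code: the synthetic risk factors, the 0.5 alert threshold, and the assumed 60% reduction in risk for flagged patients are all hypothetical. A model is trained on pre-deployment records, deployed so that flagged patients receive outcome-improving care, and then retrained on the records that care produced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
TRUE_WEIGHTS = np.array([1.2, -0.8, 0.5, 0.0, 0.3])  # hypothetical risk factors

def simulate_cohort(n, model=None, intervention_effect=0.6):
    """Generate a patient cohort. If a deployed model is supplied, patients
    it flags receive an intervention that lowers their true outcome risk,
    so recorded outcomes no longer reflect untreated risk."""
    X = rng.normal(size=(n, 5))
    p = 1.0 / (1.0 + np.exp(-(X @ TRUE_WEIGHTS)))
    if model is not None:
        flagged = model.predict_proba(X)[:, 1] > 0.5
        p = np.where(flagged, p * (1.0 - intervention_effect), p)
    return X, rng.binomial(1, p)

# 1. Train on pre-deployment records, where care was not model-influenced.
X0, y0 = simulate_cohort(20_000)
model_v1 = LogisticRegression(max_iter=1000).fit(X0, y0)

# 2. Deploy: clinicians act on predictions, changing recorded outcomes.
X1, y1 = simulate_cohort(20_000, model=model_v1)

# 3. Retrain on post-deployment records (scenario 1): prevented outcomes
#    are recorded as non-events, distorting the presentation-outcome link.
model_v2 = LogisticRegression(max_iter=1000).fit(X1, y1)

# 4. Compare both models on fresh, un-intervened patients. The retrained
#    model underestimates risk, so its alert rate collapses.
Xt, yt = simulate_cohort(20_000)
for name, m in [("original ", model_v1), ("retrained", model_v2)]:
    probs = m.predict_proba(Xt)[:, 1]
    print(f"{name}  AUROC: {roc_auc_score(yt, probs):.3f}"
          f"  alert rate: {(probs > 0.5).mean():.3f}")
```

The failure mode here is not a bug in the retraining pipeline; it is that the training data itself now encodes the effect of the model's own alerts.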


It's crucial to recognise that these tools demand regular maintenance, a deep understanding, and proper contextualisation. Neglecting to monitor their performance and impact can undermine their effectiveness.


Health systems need to establish a tracking system to monitor individuals affected by machine learning predictions.
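
As an illustration of what such tracking might look like, the sketch below logs every prediction and whether it influenced care, so that future training cohorts can exclude or down-weight model-influenced records. The registry design, field names, and filtering rule are hypothetical, not drawn from the study.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PredictionEvent:
    """One model prediction delivered for one patient encounter."""
    patient_id: str
    model_name: str
    model_version: str
    prediction: float
    acted_on: bool  # did the prediction influence the care delivered?
    timestamp: datetime = field(default_factory=datetime.now)

class PredictionRegistry:
    """Log every prediction so downstream training cohorts can exclude or
    down-weight patients whose outcomes reflect model-influenced care."""

    def __init__(self) -> None:
        self._events: list[PredictionEvent] = []

    def log(self, event: PredictionEvent) -> None:
        self._events.append(event)

    def influenced_patients(self) -> set[str]:
        return {e.patient_id for e in self._events if e.acted_on}

# Usage: keep only records untouched by model-influenced care.
registry = PredictionRegistry()
registry.log(PredictionEvent("pt-001", "aki-risk", "1.2.0", 0.87, acted_on=True))
registry.log(PredictionEvent("pt-002", "aki-risk", "1.2.0", 0.12, acted_on=False))

cohort = ["pt-001", "pt-002", "pt-003"]
clean = [p for p in cohort if p not in registry.influenced_patients()]
print(clean)  # ['pt-002', 'pt-003']
```

Whether such records should be excluded outright or merely reweighted is an open question; the point is that the distinction cannot be made at all unless the exposure is recorded.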


Source: Mount Sinai

Image Credit: iStock


References:

Vaid A et al. (2023) Implications of the Use of Artificial Intelligence Predictive Models in Health Care Settings: A Simulation Study. Annals of Internal Medicine.


