AI and Sepsis
Each year in the United States, sepsis kills more than a quarter million people—more than stroke, diabetes, or lung cancer.
Currently, there’s no single test for sepsis. Health-care providers must rely on their own clinical impressions. The SIRS criteria flag at-risk patients when at least two of four clinical signs—body temperature, heart rate, breathing rate, and white-blood-cell count—are abnormal.
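As a rough illustration of that two-of-four rule, the check can be written in a few lines. The cutoffs below are the commonly cited SIRS thresholds, and the function itself is a hypothetical sketch, not a clinical tool.

```python
def meets_sirs(temp_c: float, heart_rate: int, resp_rate: int, wbc_per_uL: float) -> bool:
    """Flag a patient as at risk if at least two of the four SIRS signs are abnormal.

    Thresholds are the commonly cited SIRS cutoffs; this is for illustration only.
    """
    abnormal = [
        temp_c > 38.0 or temp_c < 36.0,             # body temperature (°C)
        heart_rate > 90,                            # heart rate (beats/min)
        resp_rate > 20,                             # breathing rate (breaths/min)
        wbc_per_uL > 12_000 or wbc_per_uL < 4_000,  # white-blood-cell count (cells/µL)
    ]
    return sum(abnormal) >= 2

# Example: fever plus an elevated heart rate meets two criteria.
print(meets_sirs(temp_c=38.6, heart_rate=104, resp_rate=16, wbc_per_uL=9_000))  # True
```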
The Atlantic published an incredible article on how artificial intelligence may help health-care providers detect and treat infection before fatal sepsis develops. Johns Hopkins researchers published a trio of studies in Nature Medicine and npj Digital Medicine showcasing an early-warning system that uses artificial intelligence.
Early detection is key. If caregivers fail to catch sepsis in time, it’s essentially a death sentence. Consequently, much research has focused on catching sepsis early. In the Johns Hopkins studies, the AI system caught 82 percent of sepsis cases and significantly reduced mortality.
Suchi Saria, director of the Machine Learning and Healthcare Lab at Johns Hopkins University, was the senior author of the studies. Saria said in an interview that her research emphasizes how “AI is implemented at the bedside, used by thousands of providers, and where we’re seeing lives saved.”
Saria believes using electronic records in new ways could transform health-care delivery, providing physicians with an extra set of eyes and ears—and helping them make better decisions. With their universal deployment and real-time patient data, electronic records could warn providers about sepsis and other fatal conditions. The goal, Saria adds, is not to replace caregivers, but to partner with them and augment their capabilities.
Targeted Real-Time Early Warning System
The Targeted Real-Time Early Warning System (TREWS) scans electronic health records to identify clinical signs that predict sepsis. The system alerts caregivers and facilitates early treatment. TREWS provides real-time patient insights and a unique level of transparency in its reasoning.
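The article doesn’t describe TREWS’s actual model, so the sketch below only illustrates the general shape of such an early-warning loop: a hypothetical linear risk score over hand-picked indicators, with each feature’s contribution reported alongside the alert to mimic the transparency the system is said to provide. The feature names, weights, and threshold are all invented for illustration.

```python
from typing import Callable, Dict, Tuple

# Hypothetical feature weights for an illustrative linear risk model;
# TREWS's real model and features are not described in the article.
WEIGHTS: Dict[str, float] = {
    "temp_abnormal": 0.9,
    "heart_rate_high": 0.7,
    "resp_rate_high": 0.6,
    "wbc_abnormal": 0.8,
    "lactate_high": 1.2,
}

def score_with_reasons(features: Dict[str, float]) -> Tuple[float, Dict[str, float]]:
    """Return a toy risk score and each feature's contribution to it."""
    contributions = {name: WEIGHTS.get(name, 0.0) * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

def maybe_alert(patient_id: str, features: Dict[str, float],
                notify: Callable[[str], None], threshold: float = 1.5) -> None:
    """Alert caregivers when the score crosses a threshold, listing the
    signs that drove the score so the reasoning stays visible."""
    score, contributions = score_with_reasons(features)
    if score >= threshold:
        top = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)[:3]
        drivers = ", ".join(f"{name} ({weight:.1f})" for name, weight in top)
        notify(f"Patient {patient_id}: sepsis risk {score:.1f}; driven by {drivers}")

# Example: binary indicators marking which signs are currently abnormal.
maybe_alert("demo-001",
            {"temp_abnormal": 1, "heart_rate_high": 1, "lactate_high": 1},
            notify=print)
```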
Saria published three studies on TREWS. The first tried to determine how accurate the system was, whether providers would actually use it, and whether use led to earlier sepsis treatment. The second went a step further to see if using TREWS actually reduced patient mortality. And the third interviewed 20 providers who had tested the tool about their views on machine learning, including which factors facilitate or hinder trust.
Saria acknowledges that TREWS’s false-positive rate, although lower than that of existing electronic-health-record systems, could certainly improve, but says it will always be crucial for clinicians to continue to use their own judgment.