Daily Sepsis Research Analysis
Summary
AI-driven decision support and risk stratification emerged as key themes in this sepsis digest. A JAMA reinforcement learning study suggests earlier, more frequent vasopressin initiation in septic shock with an associated mortality benefit, while an interpretable machine-learning model predicts sepsis-induced coagulopathy with solid external/temporal validation. A nationwide Italian registry of complicated intra-abdominal infections provides large-scale, sepsis-relevant epidemiology to inform antimicrobial stewardship.
Research Themes
- AI-guided hemodynamic therapy in septic shock
- Interpretable machine learning for coagulopathy risk stratification
- Antimicrobial stewardship informed by intra-abdominal infection epidemiology
Selected Articles
1. Optimal Vasopressin Initiation in Septic Shock: The OVISS Reinforcement Learning Study.
Using reinforcement learning on large multicenter EHR data, the OVISS study recommended initiating vasopressin earlier, in more patients, and at lower norepinephrine doses than typical practice, with an associated reduction in in-hospital mortality. The rule was externally validated across 227 US hospitals and showed superior expected outcomes versus clinician behavior.
Impact: Introduces a data-driven, externally validated treatment policy for vasopressin initiation in septic shock, potentially shifting vasopressor strategies. The AI approach is timely and clinically actionable.
Clinical Implications: Consider earlier vasopressin initiation at lower norepinephrine doses for septic shock, guided by decision-support tools; prospective trials are needed before protocol changes.
Key Findings
- The RL-derived rule recommended vasopressin initiation in 87% of patients, versus 31% who received it under usual care.
- Initiation occurred earlier (median 4 vs 5 hours from shock onset) and at lower norepinephrine doses (0.20 vs 0.37 µg/kg/min).
- Adherence to the rule was associated with lower hospital mortality (adjusted OR 0.81, 95% CI 0.73-0.91) across external datasets.
Methodological Strengths
- Large multicenter derivation and external validation across 227 hospitals
- Causal inference techniques (inverse probability weighting) and off-policy evaluation (weighted importance sampling)
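The off-policy evaluation step named above can be sketched in a few lines; the following is a minimal illustration of weighted importance sampling with invented toy inputs and simplified per-trajectory probabilities, not the study's implementation.

```python
# Minimal sketch of weighted importance sampling (WIS) for off-policy
# evaluation. Inputs are per-trajectory probability products and rewards;
# all names and numbers here are illustrative, not from the OVISS study.
import numpy as np

def wis_estimate(behavior_probs, target_probs, rewards):
    """Estimate the expected reward of a target (e.g., RL-derived) policy
    from trajectories collected under a behavior (clinician) policy.

    behavior_probs : probability of each observed trajectory under the
                     behavior policy (product over its actions)
    target_probs   : same, under the candidate policy being evaluated
    rewards        : observed reward per trajectory (e.g., 1 = survival)
    """
    ratios = np.asarray(target_probs) / np.asarray(behavior_probs)
    # Self-normalizing the importance weights reduces variance
    # at the cost of a small bias (the "weighted" in WIS).
    return np.sum(ratios * np.asarray(rewards)) / np.sum(ratios)
```

Trajectories the target policy would also have taken get weight near 1; trajectories it would avoid are down-weighted, so the estimate reflects outcomes under the candidate rule without deploying it.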
Limitations
- Observational design with potential residual confounding and confounding by indication
- Reliance on EHR data quality; generalizability to non-US settings uncertain
Future Directions: Prospective pragmatic trials to test RL-guided vasopressor strategies; integration into EHR decision support with safety and fairness monitoring.
IMPORTANCE: Norepinephrine is the first-line vasopressor for patients with septic shock. When and whether a second agent, such as vasopressin, should be added is unknown. OBJECTIVE: To derive and validate a reinforcement learning model to determine the optimal initiation rule for vasopressin in adult, critically ill patients receiving norepinephrine for septic shock. DESIGN, SETTING, AND PARTICIPANTS: Reinforcement learning was used to generate the optimal rule for vasopressin initiation to improve short-term and hospital outcomes, using electronic health record data from 3608 patients who met the Sepsis-3 shock criteria at 5 California hospitals from 2012 to 2023. The rule was evaluated in 628 patients from the California dataset and 3 external datasets comprising 10 217 patients from 227 US hospitals, using weighted importance sampling and pooled logistic regression with inverse probability weighting. EXPOSURES: Clinical, laboratory, and treatment variables grouped hourly for 120 hours in the electronic health record. MAIN OUTCOME AND MEASURE: The primary outcome was in-hospital mortality. RESULTS: The derivation cohort (n = 3608) included 2075 men (57%) and had a median (IQR) age of 63 (56-70) years and Sequential Organ Failure Assessment (SOFA) score at shock onset of 5 (3-7 [range, 0-24, with higher scores associated with greater mortality]). The validation cohorts (n = 10 217) were 56% male (n = 5743) with a median (IQR) age of 67 (57-75) years and a SOFA score of 6 (4-9). In validation data, the model suggested vasopressin initiation in more patients (87% vs 31%), earlier relative to shock onset (median [IQR], 4 [1-8] vs 5 [1-14] hours), and at lower norepinephrine doses (median [IQR], 0.20 [0.08-0.45] vs 0.37 [0.17-0.69] µg/kg/min) compared with clinicians' actions. The rule was associated with a larger expected reward in validation data compared with clinician actions (weighted importance sampling difference, 31 [95% CI, 15-52]). 
The adjusted odds of hospital mortality were lower if vasopressin initiation was similar to the rule compared with different (odds ratio, 0.81 [95% CI, 0.73-0.91]), a finding consistent across external validation sets. CONCLUSIONS AND RELEVANCE: In adult patients with septic shock receiving norepinephrine, the use of vasopressin was variable. A reinforcement learning model developed and validated in several observational datasets recommended more frequent and earlier use of vasopressin than average care patterns and was associated with reduced mortality.
2. Interpretable machine learning model for early morbidity risk prediction in patients with sepsis-induced coagulopathy: a multi-center study.
A multicenter retrospective study developed an interpretable random forest model using eight routinely available variables to predict sepsis-induced coagulopathy, achieving AUCs around 0.75–0.78 with temporal validation. SHAP analyses highlighted clinically meaningful predictors such as APTT, lactate, oxygenation index, and total protein.
Impact: Provides an interpretable, validated tool for early SIC risk stratification, enabling targeted monitoring and timely anticoagulation or supportive strategies.
Clinical Implications: Facilitates early identification of patients at high risk for SIC using readily available labs, supporting individualized monitoring and intervention pathways.
Key Findings
- Among 847 ICU sepsis patients, 56.7% developed SIC.
- An 8-variable random forest achieved AUCs of 0.782 (train), 0.750 (test), and 0.784 (temporal validation).
- Top predictors included APTT, lactate, oxygenation index, and total protein; SHAP improved interpretability.
Methodological Strengths
- Multicenter dataset with internal split-sample and temporal external validation
- Model interpretability via SHAP with parsimonious variable set
Limitations
- Retrospective design from two centers; prospective and geographic external validation are lacking
- Potential missing data bias and unmeasured confounding; clinical impact not yet tested in interventional studies
Future Directions: Prospective impact studies integrating the model into clinical workflows; evaluation of generalizability across diverse ICUs and EHR systems.
BACKGROUND: Sepsis-induced coagulopathy (SIC) is a complex condition characterized by systemic inflammation and coagulopathy. This study aimed to develop and validate a machine learning (ML) model to predict SIC risk in patients with sepsis. METHODS: Patients with sepsis admitted to the intensive care unit (ICU) between March 1, 2021, and March 1, 2024, at Hebei General Hospital and Handan Central Hospital (East District) were retrospectively included. Patients were categorized into SIC and non-SIC groups. Data were split into training (70%) and testing (30%) sets. Additionally, for temporal validation, patients with sepsis admitted between March 1, 2024, and October 31, 2024, at Hebei General Hospital were included. Feature selection was performed using least absolute shrinkage and selection operator (LASSO) regression and multivariate logistic regression. Nine ML algorithms were tested, and model performance was assessed using receiver operating characteristic (ROC) curve analysis, including area under the curve (AUC), as well as calibration curves and decision curve analysis (DCA). The SHapley Additive exPlanations (SHAP) algorithm was used to interpret the best-performing model and visualize key predictors. RESULTS: Among 847 patients with sepsis, 480 (56.7%) developed SIC. The random forest (RF) model with eight variables performed best, achieving AUCs of 0.782 [95% confidence interval (CI): 0.745, 0.818] in the training set, 0.750 (95% CI: 0.690, 0.809) in the testing set, and 0.784 (95% CI: 0.711, 0.857) in the validation set. Key predictors included activated partial thromboplastin time, lactate, oxygenation index, and total protein. CONCLUSIONS: This ML model reliably predicts SIC risk. SHAP enhances interpretability, supporting early, individualized interventions to improve outcomes in patients with sepsis.
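The modelling pipeline the abstract describes (L1-penalized feature selection, a train/test split, and a tree-based classifier assessed by AUC) can be sketched roughly as below. The synthetic data, variable count, split sizes, and hyperparameters are illustrative assumptions, not the study's cohort or settings, and the SHAP interpretation step is omitted.

```python
# Illustrative sketch of a LASSO-selection -> random-forest -> AUC
# pipeline on synthetic data; not the study's actual code or cohort.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 20))        # 20 candidate predictors (toy data)
y = (X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2]
     + rng.normal(scale=1.0, size=500) > 0).astype(int)

# 70/30 train/test split, mirroring the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0)

# L1-penalized logistic regression shrinks uninformative coefficients
# to zero; the surviving features form the parsimonious variable set.
lasso = LogisticRegressionCV(cv=5, penalty="l1", solver="liblinear",
                             random_state=0).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_[0] != 0)

# Fit a random forest on the selected features and report test-set AUC.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te[:, selected])[:, 1])
```

In the study, SHAP values (e.g., via a tree explainer) would then be computed on the fitted forest to rank and visualize predictor contributions.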
3. Epidemiological analysis of intra-abdominal infections in Italy from the Italian register of complicated intra-abdominal infections-the IRIS study: a prospective observational nationwide study.
A nationwide prospective registry of 4,530 complicated intra-abdominal infections in Italy found 27.8% presented in septic shock, E. coli as the predominant pathogen, and extensive empiric antibiotic use. Findings highlight stewardship opportunities and ICU burden across etiologies.
Impact: Provides large-scale, current epidemiology on sepsis-relevant intra-abdominal infections, guiding empirical therapy and stewardship at national and institutional levels.
Clinical Implications: Supports targeted empiric regimens (e.g., coverage for E. coli) and stewardship oversight, including cautious empiric antifungal use even in septic shock.
Key Findings
- Among 4,530 cIAI patients, 27.8% presented in septic shock and 16.5% required ICU care.
- E. coli was the leading pathogen (45.6% of positive intra-abdominal cultures).
- Empiric antimicrobial therapy was used in 78.4% of patients; empiric antifungals were used in 4.1% of septic shock cases.
Methodological Strengths
- Prospective, nationwide, multicenter design with large sample size
- Systematic capture of microbiology and empirical therapy patterns
Limitations
- Observational design without standardized treatment protocols limits causal inference
- Microbiological samples were obtained in only 70.8% of patients, which may bias pathogen distribution estimates
Future Directions: Link registry data to outcomes by regimen and resistance patterns; evaluate stewardship interventions and predictive models for septic shock in cIAI.
BACKGROUND: Intra-abdominal infections (IAIs) are common and severe surgical emergencies associated with high morbidity and mortality. In recent years, there has been a worldwide increase in antimicrobial resistance associated with intra-abdominal infections, responsible for a significant increase in mortality rates. To improve the quality of treatment, it is crucial to understand the underlying local epidemiology, clinical implications, and proper management of antimicrobial resistance, for both community- and hospital-acquired infections. The IRIS study (Italian Register of Complicated Intra-abdominal InfectionS) aims to investigate the epidemiology and initial management of complicated IAIs (cIAIs) in Italy. MATERIAL AND METHOD: This is a prospective, observational, nationwide (Italy), multicentre study approved by the coordinating centre ethics committee (Local Research Ethics Committee of Pisa, Prot. n. 56478/2019). All consecutively hospitalized patients (older than 16 years of age) with a diagnosis of cIAI undergoing surgery, interventional drainage, or conservative treatment were included. RESULTS: 4530 patients were included from 23 Italian hospitals. Community-acquired (CA) infections represented 70.9% of all cases. Among appendicitis cases, 98.2% were community acquired and 1.8% were healthcare-associated (HA) infections. CA infections represented 94.2% and HA infections 5.8% of gastroduodenal perforation cases. The majority of HA infections involved colonic perforation and diverticulitis (28.3%), followed by small bowel occlusion (19%) and intestinal ischemia (18%). 27.8% of patients presented in septic shock. Microbiological samples were collected from 3208 patients (70.8%). Of 3041 intra-abdominal samples, 48.8% were positive. The most frequently isolated pathogen was E. coli (45.6%). During hospital stay, empiric antimicrobial therapy was administered in 78.4% of patients.
Amoxicillin/clavulanate was the most common antibiotic used (in 30.1% of appendicitis, 30% of bowel occlusion, 30.5% of cholecystitis, 51% of complicated abdominal wall hernia, and 55% of small bowel perforation cases), followed by piperacillin/tazobactam (13.3% of colonic perforation and diverticulitis, 22.6% of cholecystitis, 24.2% of intestinal ischemia, and 28.6% of pancreatitis cases). Empiric antifungal therapy was administered in 2.6% of patients with no sign of sepsis, 3.1% of patients with clinical signs of sepsis, and 4.1% of patients with septic shock. Azoles were administered to 49.2% of patients who received empiric antifungal therapy. The overall mortality rate was 5.13% (235/4350). ICU admission was required in 16.5% of patients (748/4350). Mirroring the mortality pattern, 35.7% of small bowel perforation, 27.6% of colonic perforation and diverticulitis, 25.6% of intestinal ischemia, and 24.6% of gastroduodenal complication cases required ICU admission. CONCLUSION: Antibiotic stewardship programs and correct antimicrobial and antimycotic prescription campaigns are necessary to further improve the adequacy of drug use and reduce the burden of antimicrobial resistance. This will help improve care for future generations of patients.