Feasibility of AI as first reader in the 4-IN-THE-LUNG-RUN lung cancer screening trial: impact on negative misclassifications and clinical referral rate.
Summary
In 3,678 baseline LDCTs, AI as a first reader produced far fewer negative misclassifications than radiologists (0.8% vs 11.1%) and fewer missed clinical referrals (2.9% vs 11.8%). These data support using AI as a first-read filter to safely rule out negative scans and reduce radiologist workload in lung cancer screening.
Key Findings
- Among 3,678 baseline LDCTs, AI had 31 negative misclassifications (0.8%) versus 407 (11.1%) by radiologists.
- Missed clinical referrals would have been lower with AI as first reader (3/102, 2.9%) than with radiologists (12/102, 11.8%).
- AI independently ruled out negative cases without substantially increasing risk of missed referrals, supporting feasibility as a first-reader filter.
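The percentages in the findings above follow directly from the reported counts. As a quick arithmetic check (a minimal sketch using only the counts stated in this summary, not study code):

```python
# Sanity-check the reported rates from the counts in the Key Findings.
def pct(numerator: int, denominator: int) -> float:
    """Return a percentage rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

total_ldcts = 3678   # baseline LDCTs
referrals = 102      # clinical referrals

print(pct(31, total_ldcts))   # AI negative misclassifications -> 0.8
print(pct(407, total_ldcts))  # radiologist negative misclassifications -> 11.1
print(pct(3, referrals))      # missed referrals, AI first read -> 2.9
print(pct(12, referrals))     # missed referrals, radiologists -> 11.8
```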
Clinical Implications
Programs can consider AI first-read filtering at baseline LDCT to rule out negatives, focusing radiologist time on indeterminate/positive studies and potentially standardizing referral decisions.
Why It Matters
Demonstrates a practical, safety-relevant AI deployment that could reshape screening workflows by reducing errors and resource burden without compromising referrals.
Limitations
- Single AI vendor and baseline-only assessment; external generalizability and longitudinal outcomes not addressed.
- Operational thresholds and definitions (e.g., criteria for negative misclassification) may vary across programs.
Future Directions
Prospective implementation studies comparing AI-first-read pathways with standard practice on detection rates, workload, cost-effectiveness, and patient outcomes across diverse settings.
Study Information
- Evidence Level: III, retrospective analysis of a large screening cohort with comparative AI vs radiologist reads.
- Study Design: OTHER