Preliminary Development and Validation of Automated Nociception Recognition Using Computer Vision in Perioperative Patients.
Summary
Using perioperative facial video, convolutional neural networks classified pain defined by the Critical-Care Pain Observation Tool (CPOT) with strong discrimination in internal (AUC 0.91) and external cohorts (AUCs 0.91 and 0.80), whereas classification against the numeric rating scale (NRS) performed poorly (AUC 0.58). Perturbation analyses highlighted the eyebrows, nose, lips, and forehead as the most predictive facial regions.
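The perturbation analyses described above can be illustrated with an occlusion-style sketch: mask one facial region at a time and measure the drop in the model's predicted pain probability. Everything here is hypothetical — the toy `predict` function, the 4x4 "image", and the region coordinates are stand-ins, not the study's actual model or data.

```python
# Hypothetical occlusion-style perturbation analysis: zero out one
# facial region at a time and record how much the model's pain
# probability falls. Larger drops suggest more predictive regions.

def occlusion_importance(predict, image, regions):
    """Return, per region, the score drop when that region is masked."""
    base = predict(image)
    drops = {}
    for name, (r0, r1, c0, c1) in regions.items():
        masked = [row[:] for row in image]        # copy the image
        for r in range(r0, r1):                   # zero the region
            for c in range(c0, c1):
                masked[r][c] = 0.0
        drops[name] = base - predict(masked)
    return drops

# Toy stand-in "model": mean intensity of the top half of a 4x4 image,
# so only perturbations to the top rows change its output.
predict = lambda img: sum(img[0]) / 8 + sum(img[1]) / 8
image = [[1.0] * 4 for _ in range(4)]
regions = {"eyebrows": (0, 1, 0, 4), "lips": (3, 4, 0, 4)}
print(occlusion_importance(predict, image, regions))
# → {'eyebrows': 0.5, 'lips': 0.0}
```

Here masking the "eyebrows" region (top row) drops the toy model's score while masking the "lips" region does not, mirroring how the study's analyses attribute predictions to specific facial regions.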
Key Findings
- CPOT-based models achieved AUCs of 0.91 (internal) and 0.91/0.80 (external), while NRS-based classification underperformed (AUC 0.58).
- Probability calibration improved Brier scores; perturbation-based explainability identified the facial regions driving predictions.
- Feasibility demonstrated across development (n=130), validation (n=77), and two external datasets (n=254).
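The two metrics reported above can be sketched in a few lines: AUC via the rank-sum (Mann-Whitney) formulation, and the Brier score as the mean squared error between predicted probability and outcome. The labels and probabilities below are hypothetical, not the study's data.

```python
# Minimal sketch of the study's two reported metrics, on made-up data.

def auc(labels, scores):
    """Area under the ROC curve: fraction of (positive, negative)
    pairs ranked correctly, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(labels, probs):
    """Brier score: mean squared error between predicted probability
    and the 0/1 outcome (lower is better-calibrated)."""
    return sum((p - y) ** 2 for y, p in zip(labels, probs)) / len(labels)

# Hypothetical predictions for four clips (1 = CPOT-defined pain)
y = [1, 1, 0, 0]
p = [0.9, 0.4, 0.6, 0.1]
print(auc(y, p))    # → 0.75
print(brier(y, p))  # → 0.185
```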
Clinical Implications
If it generalizes and is integrated into bedside monitors, automated facial nociception detection could provide continuous pain surveillance in the PACU, wards, and ICUs, prompting timely analgesia and reducing undertreatment, although safeguards for bias, privacy, and performance in diverse populations would be required.
Why It Matters
Introduces and externally validates an AI method for continuous nociception assessment using only standard video, addressing staffing and monitoring gaps in perioperative pain management.
Limitations
- Relatively small datasets and single-modality (RGB video) inputs limit generalizability.
- Numeric rating scale classification performed poorly; potential demographic and lighting biases not fully addressed.
Future Directions
Large multicenter studies with diverse populations, multimodal signals (physiology + video), continuous labeling, and fairness/robustness audits; prospective impact evaluations on analgesic delivery and outcomes.
Study Information
- Study Type
- Cohort
- Research Domain
- Diagnosis
- Evidence Level: III, prospective observational development with internal and external validation cohorts
- Study Design: OTHER