Serena Yeung, Francesca Rinaldo, Jeffrey Jopling, Bingbin Liu, Rishab Mehra, N. Lance Downing, Michelle Guo, Gabriel M. Bianconi, Alexandre Alahi, Julia Lee, Brandi Campbell, Kayla Deru, William Beninati, Li Fei-Fei & Arnold Milstein, A computer vision system for deep learning-based detection of patient mobilization activities in the ICU, npj Digital Medicine, volume 2, Article number: 11 (2019)
Early and frequent patient mobilization substantially mitigates risk for post-intensive care syndrome and long-term functional impairment. We developed and tested computer vision algorithms to detect patient mobilization activities occurring in an adult ICU. Mobility activities were defined as moving the patient into and out of bed, and moving the patient into and out of a chair. A data set of privacy-safe depth-video images was collected in the Intermountain LDS Hospital ICU, comprising 563 instances of mobility activities and 98,801 total frames of video data from seven wall-mounted depth sensors. In all, 67% of the mobility activity instances were used to train algorithms to detect mobility activity occurrence and duration, and the number of healthcare personnel involved in each activity. The remaining 33% of the mobility instances were used for algorithm evaluation. The algorithm for detecting mobility activities attained a mean specificity of 89.2% and sensitivity of 87.2% over the four activities (Fig. 1); the algorithm for quantifying the number of personnel involved attained a mean accuracy of 68.8% (Fig. 2).
Fig. 3
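The reported per-activity sensitivity and specificity, and their means over the four activities, follow the standard definitions from frame-level confusion counts. A minimal sketch of that computation (the function and variable names are illustrative, not from the paper):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity: fraction of true activity frames detected.
    Specificity: fraction of non-activity frames correctly rejected."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

def mean_over_activities(confusions):
    """Average both metrics over a list of per-activity
    (tp, fp, tn, fn) confusion counts."""
    pairs = [sensitivity_specificity(*c) for c in confusions]
    mean_sens = sum(p[0] for p in pairs) / len(pairs)
    mean_spec = sum(p[1] for p in pairs) / len(pairs)
    return mean_sens, mean_spec
```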
Owing to the temporal sparsity of patient mobility activities, which makes occurrences difficult to find and annotate in long stretches of recorded data, a web-based application was developed to let nursing staff flag the approximate times of the patient mobility activities they witnessed, providing research assistants with a time stamp in the data for focused retrospective review. These coarse time stamps enabled the research assistants to examine only the flagged periods of data to identify and label mobility activities, avoiding manual review of thousands of hours of recordings. Three trained research assistants reviewed the flagged periods to provide precise temporal annotations, with each occurrence of a mobility activity reviewed by one research assistant. To assess the consistency of manual review across the research assistants, a subset of the data was annotated by all three; frame-level inter-rater reliability of annotations on this subset was 0.894 using Fleiss's kappa. …
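Fleiss's kappa measures agreement among a fixed number of raters beyond what chance alone would produce. A minimal sketch of the computation for frame-level binary labels from three raters (the data layout is an assumption for illustration; the paper does not specify its implementation):

```python
from collections import Counter

def fleiss_kappa(labels_per_item, categories=(0, 1)):
    """Fleiss's kappa for N items, each labeled by the same number of raters.

    labels_per_item: list of tuples, one tuple of rater labels per item
    (e.g. one tuple of three 0/1 annotations per video frame).
    """
    n_items = len(labels_per_item)
    n_raters = len(labels_per_item[0])
    # n_ij: how many raters assigned item i to category j
    counts = [Counter(item) for item in labels_per_item]
    # per-item agreement P_i
    p_i = [(sum(c[j] ** 2 for j in categories) - n_raters)
           / (n_raters * (n_raters - 1)) for c in counts]
    p_bar = sum(p_i) / n_items
    # chance agreement P_e from marginal category proportions
    p_j = [sum(c[j] for c in counts) / (n_items * n_raters)
           for j in categories]
    p_e = sum(p ** 2 for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

A kappa of 1 means perfect agreement; 0 means agreement no better than chance, so the reported 0.894 indicates near-complete frame-level consistency among the three annotators.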
The algorithm for quantifying the number of personnel involved in each mobility activity was based on the YOLOv2 convolutional neural network architecture for object detection. The network was trained to predict the spatial locations of people in each image frame using bounding-box annotations of people in 1379 frames of patient data.
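Given a trained detector's raw output, counting personnel reduces to keeping confident detections, suppressing duplicate boxes on the same person, and counting the survivors. A minimal sketch of that post-processing step, assuming detections arrive as `(box, score)` pairs (the thresholds and function names are illustrative, not the paper's):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def count_people(detections, score_thresh=0.5, iou_thresh=0.45):
    """Count distinct people in one frame from (box, score) detections:
    threshold on confidence, greedy non-maximum suppression, then count."""
    ranked = sorted((d for d in detections if d[1] >= score_thresh),
                    key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in ranked:
        # keep a box only if it does not heavily overlap a kept box
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return len(kept)
```

The per-activity personnel count can then be taken as an aggregate (e.g. the mode) of per-frame counts over the activity's duration.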