HPB
Amr I. Al Abbas, MD (he/him/his)
General Surgery Resident
UT Southwestern Medical Center
Dallas, Texas, United States
Andres A. Abreu, MD (he/him/his)
Postdoctoral Researcher
Department of Surgery, University of Texas Southwestern Medical Center
Dallas, Texas, United States
Shekharmadhav Khairnar, MS
Data Scientist
UT Southwestern Medical Center, United States
Jae Pil Jung, MD
Postdoc
UPMC, United States
Patricio M. Polanco, MD
Associate Professor
Department of Surgery, University of Texas Southwestern Medical Center
Dallas, Texas, United States
Melissa Hogg, MD MS (she/her/hers)
Chief of GI and General Surgery
Department of Surgery, NorthShore University HealthSystem
Evanston, Illinois, United States
Amer H. Zureikat, MD
Professor and Chief
Division of Surgical Oncology, University of Pittsburgh Medical Center
Pittsburgh, Pennsylvania, United States
Matthew R. Porembka, MD (he/him/his)
Associate Professor
University of Texas Southwestern Medical Center
Dallas, Texas, United States
Herbert J. Zeh, III, MD
Professor and Chair
Department of Surgery, University of Texas Southwestern Medical Center
Dallas, Texas, United States
Ganesh Sankaranarayanan, PhD
Associate Professor
UT Southwestern Medical Center, United States
Introduction:
Negative margin resection is a significant determinant of outcome in pancreatic ductal adenocarcinoma (PDAC). We have previously shown that it is difficult to discern active disease at the portal vein/superior mesenteric vein (PV/SMV) margin by image review. In this work, we applied deep learning analysis of intraoperative surgical images to identify the PV/SMV and classify the status of that margin.
Methods:
We reviewed videos of robotic pancreaticoduodenectomy (RPD) performed at a tertiary center from 9/2012 to 6/2017. Ten to fifteen images showing the portal vein were extracted from each video and annotated using the LabelMe software. The images were then split into training, validation, and test sets to train a YOLOv8 model, pretrained on the COCO dataset, to segment the PV/SMV. The segmented images were then cropped to encompass the PV/SMV and used to train a VGG-16 deep learning image classification model to classify the PV/SMV margin as positive or negative; the ground truth was based on the pathology results at that margin. The segmentation model was assessed by mean average precision (mAP) over Intersection over Union (IoU) thresholds from 0.5 to 0.95 in intervals of 0.05. The classification model was assessed by accuracy and F2 score at both the frame and video level. We also computed Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the parts of each image most important for classification.
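For illustration, the segmentation step could look like the following minimal sketch, assuming the ultralytics YOLOv8 API; the dataset config pv_smv.yaml, the model size, and the hyperparameters are hypothetical placeholders (the LabelMe annotations would first need conversion to YOLO segmentation format), not the settings used in this study.

    from ultralytics import YOLO

    # YOLOv8 segmentation model pretrained on COCO.
    model = YOLO("yolov8n-seg.pt")

    # Fine-tune on annotated PV/SMV frames; "pv_smv.yaml" is a
    # hypothetical dataset config pointing at the train/val splits.
    model.train(data="pv_smv.yaml", epochs=100, imgsz=640)

    # metrics.seg.map is mask mAP averaged over IoU thresholds
    # 0.50-0.95 in steps of 0.05, the metric reported in this study.
    metrics = model.val()
    print(f"mask mAP(0.50-0.95): {metrics.seg.map:.2f}")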
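The margin classifier could be assembled along these lines: a torchvision VGG-16 with its final layer swapped for a two-class head (negative vs. positive margin). This is a sketch under assumed ImageNet pretraining and a standard fine-tuning loop; the abstract does not specify either.

    import torch
    import torch.nn as nn
    from torchvision import models

    # VGG-16 backbone; ImageNet-pretrained weights are an assumption.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

    # Replace the 1000-way ImageNet head with a binary margin head.
    model.classifier[6] = nn.Linear(4096, 2)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images, labels):
        # One gradient step on a batch of cropped PV/SMV images.
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()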
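Grad-CAM itself can be computed with forward and backward hooks on the last layer of the VGG-16 feature extractor; the sketch below is one common formulation, not necessarily the exact implementation used in this work.

    import torch
    import torch.nn.functional as F

    def grad_cam(model, image, target_class):
        """Grad-CAM heatmap for one image tensor of shape (1, 3, H, W)."""
        acts, grads = [], []
        layer = model.features[-1]  # last layer of the VGG-16 features
        h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
        h2 = layer.register_full_backward_hook(
            lambda m, gi, go: grads.append(go[0]))

        model.zero_grad()
        model(image)[0, target_class].backward()
        h1.remove(); h2.remove()

        # Weight each activation map by its mean gradient, apply ReLU,
        # and upsample the result to the input resolution.
        w = grads[0].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((w * acts[0]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                            align_corners=False)
        return (cam / cam.max().clamp(min=1e-8)).squeeze().detach()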
Results:
In total, 107 RPD videos were reviewed and 1758 frames (training and test) were annotated. The mAP (0.50-0.95) of the segmentation model was 0.68. For classification, frame-level accuracy was 79%, with an F2 score of 0.66, sensitivity of 64%, and specificity of 88%. At the video level, accuracy was 81%, with an F2 score of 0.61, sensitivity of 57%, and specificity of 93%. Figure 1 shows an example of Grad-CAM output, with red/orange indicating the parts of the image most important for classification.
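The abstract does not state how frame-level predictions were pooled into a video-level call; a simple majority vote is one plausible rule, sketched below with scikit-learn's fbeta_score (beta=2 weights recall twice as heavily as precision). The example inputs are hypothetical, not study data.

    import numpy as np
    from sklearn.metrics import accuracy_score, fbeta_score

    def video_prediction(frame_preds):
        # Majority vote over per-frame calls (0=negative, 1=positive).
        # The pooling rule is an assumption, not taken from the study.
        return int(np.mean(frame_preds) >= 0.5)

    # Hypothetical example: three videos with per-frame predictions.
    videos = [[1, 1, 0, 1], [0, 0, 0], [1, 0, 0, 0]]
    y_true = [1, 0, 0]
    y_pred = [video_prediction(v) for v in videos]

    print("video accuracy:", accuracy_score(y_true, y_pred))
    print("video F2:", fbeta_score(y_true, y_pred, beta=2))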
Conclusions:
Our newly developed model can predict a positive PV/SMV margin with reasonable accuracy, in real time, and without laborious expert review. Further data collection is underway to improve model accuracy. Deep learning analysis of intraoperative video is feasible and may aid surgeons in intraoperative decision making.