Matasyoh NM, Mathis-Ullrich F, Zeineldin R (2024)
SAMSurg: Surgical Instrument Segmentation in Robotic Surgeries Using Vision Foundation Model
Publication Language: English
Publication Type: Journal article
Publication year: 2024
Journal: IEEE Access
Volume: 12
Page Range: 193950-193959
DOI: 10.1109/ACCESS.2024.3520386
Abstract: Integrating Minimally Invasive Surgery (MIS) with robotic systems has revolutionized surgical procedures by enhancing precision, reducing patient discomfort, and shortening recovery times. However, the dynamic and visually complex environment of robotic surgery poses significant challenges for the accurate semantic segmentation of surgical instruments, a task critical for navigating surgical robots and ensuring surgical safety, and one that calls for advanced computational techniques. To address these challenges, we introduce SAMSurg, an adaptation of the Segment Anything Model (SAM), a vision foundation model that accurately segments diverse objects in images across a wide range of computer vision applications. SAMSurg retains the pre-trained representations of SAM's image and prompt encoders and fine-tunes only the mask decoder to meet the specific demands of surgical instrument segmentation in MIS. The model is evaluated on over 77K image-mask pairs drawn from multiple surgical datasets, demonstrating its adaptability across surgical interventions, disciplines, and instrument types. In this extensive evaluation, SAMSurg outperforms state-of-the-art models such as SAM and MedSAM, reaching Dice Similarity Coefficient (DSC) scores as high as 96.90%, and it generalizes strongly across surgical contexts, again achieving higher DSC scores than these baselines on unseen datasets. SAMSurg thus represents a significant improvement in surgical instrument segmentation for robotic surgery, promising to enhance surgical techniques and patient care outcomes.
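As a rough illustration of the fine-tuning strategy the abstract describes, the sketch below (not the authors' released code) freezes SAM's image and prompt encoders so that only the mask decoder receives gradients, and defines the Dice Similarity Coefficient used for evaluation. It assumes Meta's open-source segment_anything package and its public ViT-B checkpoint; the backbone choice, learning rate, and checkpoint path are illustrative assumptions not specified in the abstract.

import torch
from segment_anything import sam_model_registry  # Meta's SAM package

# Load a pre-trained SAM backbone (ViT-B and the checkpoint path are
# placeholders; the paper does not specify them in the abstract).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")

# Keep the pre-trained image and prompt encoders frozen, as described;
# only the mask decoder remains trainable.
for module in (sam.image_encoder, sam.prompt_encoder):
    for param in module.parameters():
        param.requires_grad = False

# Fine-tune the mask decoder alone (learning rate is an assumption).
optimizer = torch.optim.AdamW(sam.mask_decoder.parameters(), lr=1e-4)

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Dice Similarity Coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

Training the decoder could then minimize 1 - dice_coefficient(pred, target) per batch; updating only the decoder keeps fine-tuning lightweight compared with retraining the large image encoder.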
APA:
Matasyoh, N.M., Mathis-Ullrich, F., & Zeineldin, R. (2024). SAMSurg: Surgical Instrument Segmentation in Robotic Surgeries Using Vision Foundation Model. IEEE Access, 12, 193950-193959. https://doi.org/10.1109/ACCESS.2024.3520386
MLA:
Matasyoh, Nevin Musula, Franziska Mathis-Ullrich, and Ramy Zeineldin. "SAMSurg: Surgical Instrument Segmentation in Robotic Surgeries Using Vision Foundation Model." IEEE Access 12 (2024): 193950-193959.
BibTeX:
@article{Matasyoh2024SAMSurg,
  author  = {Matasyoh, Nevin Musula and Mathis-Ullrich, Franziska and Zeineldin, Ramy},
  title   = {{SAMSurg}: Surgical Instrument Segmentation in Robotic Surgeries Using Vision Foundation Model},
  journal = {IEEE Access},
  volume  = {12},
  pages   = {193950--193959},
  year    = {2024},
  doi     = {10.1109/ACCESS.2024.3520386}
}