Multimodal Transformers for Real-Time Surgical Activity Prediction

Published in the 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), PACIFICO Yokohama, May 13–17, 2024

Recommended citation: Keshara Weerasinghe, Sayed Hamid Reza Roodabeh, Homa Alemzadeh. 2024. "Multimodal Transformers for Real-Time Surgical Activity Prediction." In 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), PACIFICO Yokohama, May 13–17, 2024. https://doi.org/10.48550/arXiv.2403.06705 https://arxiv.org/abs/2403.06705

Real-time recognition and prediction of surgical activities are fundamental to advancing safety and autonomy in robot-assisted surgery. This paper presents a multimodal transformer architecture for real-time recognition and prediction of surgical gestures and trajectories based on short segments of kinematic and video data. We conduct an ablation study to evaluate the impact of fusing different input modalities and their representations on gesture recognition and prediction performance. We perform an end-to-end assessment of the proposed architecture using the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) dataset. Our model outperforms the state-of-the-art (SOTA) with 89.5% accuracy for gesture prediction through effective fusion of kinematic features with spatial and contextual video features. It achieves real-time performance, processing a 1-second input window in 1.1–1.3 ms with a computationally efficient model.
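
The sketch below illustrates the general idea of fusing per-frame kinematic signals with video features in a transformer over a short input window. It is a minimal, hypothetical example using standard PyTorch modules; the additive fusion strategy, feature dimensions, number of gesture classes, and layer sizes are assumptions for illustration and do not reproduce the paper's exact architecture.

```python
# Illustrative sketch of multimodal (kinematics + video) gesture prediction.
# Dimensions, fusion scheme, and hyperparameters are assumptions, not the
# paper's implementation.
import torch
import torch.nn as nn

class MultimodalGesturePredictor(nn.Module):
    def __init__(self, kin_dim=26, vid_dim=512, d_model=128, n_heads=4,
                 n_layers=2, n_gestures=15):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.kin_proj = nn.Linear(kin_dim, d_model)
        self.vid_proj = nn.Linear(vid_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Classification head over the pooled window representation.
        self.head = nn.Linear(d_model, n_gestures)

    def forward(self, kin, vid):
        # kin: (batch, T, kin_dim) kinematic signals for a short window
        # vid: (batch, T, vid_dim) per-frame video features
        x = self.kin_proj(kin) + self.vid_proj(vid)  # simple additive fusion
        x = self.encoder(x)                          # temporal self-attention
        return self.head(x.mean(dim=1))              # gesture logits

# Example: a 1-second window sampled at an assumed 30 Hz.
model = MultimodalGesturePredictor()
logits = model(torch.randn(1, 30, 26), torch.randn(1, 30, 512))
```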

Download paper here