  
{{ :activities:theme1:projects:raven-ii_platform.png?300|}}
To improve robotic training efficiency, our project focuses on two objectives. The first is to recognize surgical gestures. For this purpose, we proposed a novel approach for the [[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7302557|unsupervised segmentation and recognition of surgical gestures in robotic training]] that does not rely on any statistical or probabilistic model. In this work, multiple experts were asked to perform a pick-and-place training task using the Raven-II robot available at the LIRMM lab (this robot closely mimics the da Vinci). From the surgical robotic tool trajectories, we segment the signals into surgical primitives, called dexemes, and use these primitives to learn and retrieve the complete surgical gestures, called surgemes. Our approach is thus composed of two steps: unsupervised segmentation and recognition. Based on this approach, we are able to detect surgemes at a rate of 77.5% and reach a temporal matching of 81.9% between the manual annotations and the detections. Using these detections, our second objective is to provide an in-depth evaluation of the surgical robotic task in order to efficiently (i.e. locally) evaluate trainee performance through dedicated metrics.
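To give an intuition of what the unsupervised segmentation step operates on, the sketch below splits a Cartesian tool-tip trajectory into candidate motion primitives at low-speed dwell points. This is only a toy illustration, not the published method: the function name, the dwell threshold, the smoothing window, and the synthetic trajectory are all assumptions made for the example.

<code python>
# Illustrative sketch only: segment a tool-tip trajectory into candidate
# primitives ("dexemes") by cutting at low-speed dwell points. This is a
# simplified stand-in for the project's unsupervised segmentation step;
# all parameter values below are assumptions, not published settings.
import numpy as np

def segment_trajectory(tooltip_xyz, dt=0.01, speed_thresh=0.005, min_len=10):
    """Return (start, end) index ranges of candidate primitives.

    tooltip_xyz : (T, 3) array of Cartesian tool-tip positions [m]
    dt          : sampling period [s]
    speed_thresh: speed below which the tool is considered dwelling [m/s]
    min_len     : minimum primitive length in samples
    """
    # Tool-tip speed from finite differences, lightly smoothed.
    vel = np.diff(tooltip_xyz, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    speed = np.convolve(speed, np.ones(5) / 5.0, mode="same")

    # A sample is "moving" when its speed exceeds the dwell threshold;
    # each contiguous run of moving samples becomes a candidate primitive.
    moving = speed > speed_thresh
    edges = np.flatnonzero(np.diff(moving.astype(int)))
    bounds = np.concatenate(([0], edges + 1, [len(moving)]))
    return [(a, b) for a, b in zip(bounds[:-1], bounds[1:])
            if moving[a] and (b - a) >= min_len]

# Toy usage: a synthetic move-then-pause trajectory yields one primitive
# covering the moving phase.
t = np.linspace(0, 2, 200)
xyz = np.stack([np.where(t < 1, t, 1.0),
                np.zeros_like(t),
                np.zeros_like(t)], axis=1)
print(segment_trajectory(xyz))
</code>

In the actual pipeline, the recognition step would then label such segments as surgemes; here the dwell-point heuristic merely stands in for the segmentation boundary detection.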
  
{{:activities:theme1:projects:segmentation_and_recognition_process.png?660|}}