==== Registration ====
  
**No registration fees, but registration is mandatory.** \\

**If you would like to participate in this event, please send an email to Noemie Buisard (noemie.buisard@univ-rennes1.fr) with your name, affiliation, and whether you will attend the lunch, no later than December 2nd, 2016.**
  
----
**Russell H. Taylor**, //Johns Hopkins University, USA// [[http://www.cs.jhu.edu/]] \\
**Tim Salcudean**, //University of British Columbia, Canada// [[https://www.ece.ubc.ca/]] \\
**Alexandre Thouroude**, //STAN Institute, France// [[http://stan-institute.com/]] \\
  
----
Coffee and viennoiseries
  
** 10:00 - 10:45 ** | ** Hassan Alhajj, IT2IM team, LaTIM-Inserm U1101, University of Bretagne and Telecom Bretagne ** \\
__Title__: Video analysis for ophthalmic surgery \\
__Abstract__: Data recorded and stored during video-monitored surgeries are a relevant source of information for surgeons, especially during their training period. Today, however, these data remain largely unexploited. Accordingly, different methods have emerged to assist surgeons in various ways: report generation, surgical skill evaluation, construction of educational videos, and real-time video monitoring. We focus on the latter application, with the aim of automatically communicating useful information to the surgeon during surgery. In particular, our goal is to set up a warning/recommendation generation system for videos recorded during cataract surgeries. To distinguish a normal course of surgery from an abnormal one, a crucial step is to recognize surgical tasks, phases, or gestures in real time. We therefore first worked on recognizing them in the microscope videos. The results obtained are very encouraging but highlighted one main challenge: to improve the interpretation of the videos, one should be able to detect all surgical instruments. However, these instruments have a wide variety of shapes and are only partially visible in the surgical scene. To overcome this issue, we added a second video stream filming the operating table. In this context, knowing which instruments enter or leave the operating table indicates which tools are likely being used by the surgeon and which surely are not. This is work in progress, and we are currently tackling the task using deep learning techniques. \\
  
** 10:45 - 11:30 ** | ** Pierre Chatelain, Lagadic Team, IRISA, University of Rennes 1 and Technical University of Munich ** \\
__Title__: Quality-driven control of a robotized ultrasound probe \\
__Abstract__: In the context of robot-assisted ultrasonography, we present a servoing approach to control the quality of ultrasound images. The ultrasound signal quality within the image is represented by a confidence map, which is used to design a servo control law for optimizing the placement of the ultrasound probe. A control fusion is also proposed to optimize the acoustic window for a specific anatomical target tracked in the ultrasound images. The method is illustrated in a teleoperation scenario, where control is shared between the automatic controller and a human operator. \\
  
** 11:30 - 12:15 ** | ** Ninon Candalh-Touta, Agathe Team, ISIR, University of Pierre and Marie Curie ** \\
__Title__: How can we improve laparoscopic surgery training? \\
__Abstract__: Laparoscopic surgery has become the standard for many surgical procedures, owing to its great advantages over open surgery in terms of cosmetic results and patient recovery time. Unfortunately, this type of surgery raises many difficulties for the surgeon, and consequently for medical students. New skills are needed, and students train outside the operating room on simulators. Nevertheless, this training is difficult and painful, and new learning methods need to be developed. The use of multi-sensory feedback or the development of individualized training could lead to more accurate, faster, and less painful training. \\
  
** 12:15 - 14:00 ** | ** Lunch break ** \\
  
** 14:00 - 14:45 ** | ** Alexandre Thouroude, STAN Institute and Surgical Training School of Nancy, University of Lorraine ** \\
__Title__: Learning the optimal gesture: from Mirage 2000 pilot to surgeon \\
__Abstract__: Learning times must become ever shorter, while learning costs keep rising. Robotic surgery is embedded in these immutable parameters. How can the training of fighter pilots be translated to that of surgeons? The key lies not necessarily in the equipment, but in how it is used. \\
  
** 14:45 - 15:30 ** | ** Russell H. Taylor, Department of Computer Science, The Johns Hopkins University ** \\
  
** 15:30 - 16:15 ** | ** Tim Salcudean, Electrical and Computer Engineering, The University of British Columbia ** \\
__Title__: Imaging and image guidance for prostate cancer interventions \\
__Abstract__: We will describe our work in prostate imaging, which includes ultrasound and magnetic resonance elastography, as well as preliminary work in photoacoustic imaging. We have acquired ultrasound prostate images prior to radical prostatectomy and correlated them with histopathology results. We currently use ultrasound imaging during robot-assisted radical prostatectomy, and we have assembled a system that provides fused ultrasound and MRI guidance during surgery. We will describe our image acquisition and registration techniques, and our results of cancer imaging using elastography. \\
  
** 16:15 ** | ** Workshop closure ** \\