Detecting tools in surgical videos is an important ingredient for context-aware computer-assisted intervention systems. We propose a new two-stage pipeline for tool detection and pose estimation in 2D images, named ShapeDetector. Our approach is data-driven and overcomes strong assumptions made regarding the geometry, number, and position of tools in the image. Our method has been validated for the following three pose parameters: overall position, tip location, and orientation, using a new surgical tool dataset: the NeuroSurgicalTools dataset, made of 2476 monocular images from neurosurgical microscopes during in-vivo surgeries.
[[https://ecm.univ-rennes1.fr/nuxeo/nxdoc/default/55cab40f-6564-4026-99b5-37ffb10cdfb3/view_documents|{{ :activities:theme1:representativeimage.png?400 }}]]

[[https://ecm.univ-rennes1.fr/nuxeo/nxdoc/default/55cab40f-6564-4026-99b5-37ffb10cdfb3/view_documents|**IMAGES AND ANNOTATIONS**]]
We provide separate train and test splits, as well as the corresponding annotations in the LabelMe format (one annotation file per image).
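For readers who want to load the annotations programmatically, here is a minimal sketch of parsing one LabelMe XML annotation file with Python's standard library. The file name and object label in the usage note are hypothetical; the actual labels and directory layout are those shipped with the dataset.

```python
# Minimal sketch: reading one LabelMe XML annotation file.
# LabelMe stores each annotated object as an <object> element containing
# a <name> label and a <polygon> made of <pt><x>..</x><y>..</y></pt> points.
import xml.etree.ElementTree as ET


def parse_labelme(xml_path):
    """Return a list of (label, [(x, y), ...]) tuples, one per annotated object."""
    root = ET.parse(xml_path).getroot()
    objects = []
    for obj in root.iter("object"):
        label = obj.findtext("name", default="").strip()
        polygon = [
            (int(pt.findtext("x")), int(pt.findtext("y")))
            for pt in obj.iter("pt")
        ]
        objects.append((label, polygon))
    return objects
```

For example, `parse_labelme("train/annotations/image_0001.xml")` (a hypothetical path) would return the tool outlines annotated in that image as label/polygon pairs.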
[[http://dbouget.bitbucket.org/2015_tmi_surgical_tool_detection/|More info]]
====== Main Collaborators ======
  * [[https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/|Rodrigo Benenson, Bernt Schiele, Max-Planck-Institut für Informatik, Saarbrücken, Germany]]
  * Funding by Carl Zeiss, Germany