
Autonomous Robot Surgery: Performing Surgical Subtasks without Human Intervention




Published on Oct 14, 2014

Center for Automation and Learning for Medical Robotics (Cal-MR): http://cal-mr.berkeley.edu.

Abstract: Automating repetitive surgical subtasks such as suturing, cutting, and debridement can reduce surgeon fatigue and procedure times and can facilitate supervised tele-surgery. Programming these subtasks is difficult because human tissue is deformable and highly specular.
Using the da Vinci Research Kit (DVRK) robotic surgical assistant, we explore a “Learning By Observation” (LBO) approach in which we identify, segment, and parameterize sub-trajectories (“surgemes”) and sensor conditions to build a finite state machine (FSM) for each subtask. The robot then executes the FSM repeatedly to tune parameters and, if necessary, update the FSM structure.
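The subtask-as-FSM idea can be sketched in a few lines of code. This is a minimal illustration of the structure described above, not the authors' implementation: all class and function names here (`Surgeme`, `SubtaskFSM`, the toy debridement-like states) are hypothetical, and the real system executes parameterized robot trajectories rather than Python callables.

```python
# Hypothetical sketch of a subtask FSM whose states are parameterized
# surgemes (sub-trajectories); sensor conditions select the next state.
from dataclasses import dataclass
from typing import Callable, Dict, Optional, List, Tuple

@dataclass
class Surgeme:
    """One segmented sub-trajectory with tunable parameters."""
    name: str
    params: Dict[str, float]
    execute: Callable[[Dict[str, float]], dict]  # runs the motion, returns sensor readings

@dataclass
class SubtaskFSM:
    states: Dict[str, Surgeme]
    # transition table: current state -> function(sensors) -> next state (None = done)
    transitions: Dict[str, Callable[[dict], Optional[str]]]
    start: str

    def run(self) -> List[Tuple[str, dict]]:
        trace, state = [], self.start
        while state is not None:
            surgeme = self.states[state]
            sensors = surgeme.execute(surgeme.params)
            trace.append((state, sensors))
            state = self.transitions[state](sensors)
        return trace

# Toy debridement-like subtask: approach a fragment, grasp it, retract.
def approach(p):
    return {"at_target": True}

def grasp(p):
    # a tunable grasp-force parameter gates success in this toy model
    return {"grasped": p["force"] > 0.5}

def retract(p):
    return {"done": True}

fsm = SubtaskFSM(
    states={
        "approach": Surgeme("approach", {}, approach),
        "grasp": Surgeme("grasp", {"force": 0.8}, grasp),
        "retract": Surgeme("retract", {}, retract),
    },
    transitions={
        "approach": lambda s: "grasp" if s["at_target"] else "approach",
        "grasp": lambda s: "retract" if s["grasped"] else "grasp",
        "retract": lambda s: None,
    },
    start="approach",
)
trace = fsm.run()
```

Repeated execution to tune parameters would, in this sketch, amount to adjusting values such as `params["force"]` between runs and, when a transition proves unreliable, editing the transition table itself.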

We evaluate the approach on two surgical subtasks: debridement of 3D Viscoelastic Tissue Phantoms (3d-DVTP), in which small target fragments are removed from a 3D viscoelastic tissue phantom, and Pattern Cutting of 2D Orthotropic Tissue Phantoms (2d-PCOTP), a step in the standard Fundamentals of Laparoscopic Surgery training suite, in which a specified circular area must be cut from a sheet of orthotropic tissue phantom. We describe the approach and physical experiments, which yielded a success rate of 96% for 50 trials of the 3d-DVTP subtask and 70% for 20 trials of the 2d-PCOTP subtask.

Adithyavairavan Murali, Siddarth Sen, Ben Kehoe, Animesh Garg, Seth McFarland, Sachin Patil, W. Douglas Boyd, Susan Lim, Pieter Abbeel, Ken Goldberg

Date: October 2014

We thank our collaborators, in particular Allison Okamura, Greg Hager, Blake Hannaford, and Jacob Rosen. We thank Intuitive Surgical, and in particular Simon DiMaio, for making the DVRK possible. We also thank the DVRK community, including Howie Choset, Anton Deguet, James Drake, Greg Fisher, Peter Kazanzides, Tim Salcudean, Nabil Simaan, and Russ Taylor. We also thank Aliakbar Toghyan, Barbara Gao, Raghid Mardini, and Sylvia Herbert for their assistance on this project. This work is supported in part by a seed grant from the UC Berkeley Center for Information Technology in the Interest of Science (CITRIS), by the U.S. National Science Foundation under Award IIS-1227536: Multilateral Manipulation by Human-Robot Collaborative Systems, by AFOSR-YIP Award #FA9550-12-1-0345, and by DARPA Young Faculty Award #D13AP00046.

