Automating surgical subtasks


Our group has been focusing on surgical subtask automation for a while. Our most recent results were presented at the ICRA Workshop on Supervised Autonomy in Surgical Robotics, in Nagy et al., "An Open-Source Framework for Surgical Subtask Automation":

"The next step in the advancement of MIS appears to be partial automation. The surgical workflow of the RAMIS procedures often contains time-consuming and monotonous elements, like suturing or knot-tying, the automation of these—so-called subtasks—can reduce the fatigue and the cognitive load on the surgeon, who can hence pay more attention on the critical parts of the intervention. During the last few years, the automation of surgical subtasks became a prevailing topic in the research of surgical robotics. A number of autonomous surgical subtasks, like dedebrisment, suturing, palpation, blunt dissection or shape cutting are already implemented, or being currently developed by various research groups. Our aim was to develop an open-source framework to support such projects; provide software packages that already implement basic functionalities, becoming universal building blocks in surgical subtask automation. The basic architecture of this software package—the iRob Surgical Automation Framework, or irob-saf—is presented herein.
Most of the research centers working on partial automation in surgery use the first-generation da Vinci alongside the dVRK, or the Raven platform, both programmed through ROS. Since our lab employs the dVRK, irob-saf is built on it, with a ROS interface for future portability.
 A. Sensory inputs 
The endoscopic camera image is undoubtedly the most important information source in RAMIS. It does not require the placement of any additional instrument into the already crowded operating room. In irob-saf, the video stream (preferably stereo) can be provided by any kind of camera, e.g., USB webcams or the stereo endoscope of the da Vinci, as long as it is interfaced into a ROS topic. The algorithms used for perception are out of the scope of the current work; however, the framework offers a prebuilt infrastructure to run them, with the required input and output channels. It is important to note that further sensor modalities, like force sensors, can also be easily added to the existing infrastructure.
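To give an idea of what interfacing a camera into a ROS topic means in practice, here is a minimal rospy sketch that publishes the frames of a USB webcam; the topic name and the camera index are arbitrary choices for illustration, not the ones used by irob-saf.

#!/usr/bin/env python
# Minimal sketch: publish a USB webcam stream to a ROS image topic.
# '/camera/image_raw' and camera index 0 are illustrative placeholders.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

def main():
    rospy.init_node('webcam_publisher')
    pub = rospy.Publisher('/camera/image_raw', Image, queue_size=1)
    bridge = CvBridge()
    cap = cv2.VideoCapture(0)      # first USB camera found on the machine
    rate = rospy.Rate(30)          # aim for roughly 30 fps
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if ok:
            pub.publish(bridge.cv2_to_imgmsg(frame, encoding='bgr8'))
        rate.sleep()
    cap.release()

if __name__ == '__main__':
    main()

Any perception node can then subscribe to the same topic without knowing which kind of camera produced the frames.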
B. High-level robot control 
The arms of the da Vinci surgical robot are interfaced by high-level robot control nodes, one node per arm. These are responsible for executing the trajectories generated by higher-level nodes, while checking for errors from the robot. The trajectories are sent through ROS actions instead of topics, which are better suited to environmental interaction scenarios. ROS actions make it possible for the higher-level node to work on something else during action execution, e.g., monitoring the environment or sending actions to another node. Moreover, actions can return feedback and a result, or be preempted by another action, e.g., if an environmental change makes it necessary. These high-level robot control nodes are robot-specific, but their interface to the other nodes of the framework is universal. This means that using another type of robot arm requires only the implementation of the high-level robot control node itself.
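The advantages of actions over topics can be illustrated with a generic actionlib client sketch. The standard control_msgs/FollowJointTrajectory action and the server name below are stand-ins; the actual action definitions used by irob-saf are not detailed here.

#!/usr/bin/env python
# Sketch of the client side of a trajectory action, showing why actions
# fit this setting: send_goal() returns immediately, feedback arrives
# during execution, and the goal can be preempted.
import rospy
import actionlib
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

def feedback_cb(feedback):
    # Called repeatedly while the arm is moving; meanwhile the caller
    # is free to monitor the environment or command another arm.
    rospy.loginfo('Trajectory execution feedback received')

def main():
    rospy.init_node('trajectory_client_sketch')
    client = actionlib.SimpleActionClient(
        'arm_controller/follow_joint_trajectory', FollowJointTrajectoryAction)
    client.wait_for_server()

    goal = FollowJointTrajectoryGoal()
    goal.trajectory = JointTrajectory()
    goal.trajectory.joint_names = ['joint_1', 'joint_2']   # placeholder joints
    goal.trajectory.points.append(
        JointTrajectoryPoint(positions=[0.1, -0.2],
                             time_from_start=rospy.Duration(2.0)))

    client.send_goal(goal, feedback_cb=feedback_cb)   # non-blocking call
    # Other work can be done here; as an example, give the motion
    # 5 seconds and preempt it if it has not finished by then.
    if not client.wait_for_result(rospy.Duration(5.0)):
        client.cancel_goal()

if __name__ == '__main__':
    main()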
C. Motion decomposition 
Laparoscopic surgical motion can be decomposed into components hierarchically. The granularity level that can be used most easily to build a motion library is probably the level of gestures, the so-called surgemes. A library was implemented, offering a number of surgemes, such as grasp, cut or place object. These surgeme actions are parameterizable, and are also connected through a ROS action interface. The implemented surgemes perform the necessary safety checks, e.g., that the proper instrument is used for the current surgeme. Further surgemes can be implemented based on the existing ones and added to the library.
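A plain-Python sketch of the surgeme idea follows: each gesture is a parameterizable unit that checks its own preconditions before executing. In irob-saf the surgemes are reached through ROS actions; the class names, parameters and instrument names below are made up for illustration.

# Plain-Python illustration of the surgeme idea: each gesture is a
# parameterizable unit with a built-in safety check.
class Surgeme(object):
    required_instrument = None                 # e.g. 'forceps', 'scissors'

    def __init__(self, **params):
        self.params = params                   # e.g. target pose, approach distance

    def check(self, mounted_instrument):
        # Safety check: refuse to run with the wrong instrument mounted.
        if self.required_instrument and mounted_instrument != self.required_instrument:
            raise RuntimeError('%s requires %s, but %s is mounted' %
                               (type(self).__name__, self.required_instrument,
                                mounted_instrument))

    def execute(self, mounted_instrument):
        self.check(mounted_instrument)
        print('Executing %s with %s' % (type(self).__name__, self.params))

class Grasp(Surgeme):
    required_instrument = 'forceps'

class Cut(Surgeme):
    required_instrument = 'scissors'

# Usage: a matching instrument passes the check, a mismatch is caught.
Grasp(target=[0.0, 0.01, -0.08], approach=0.02).execute(mounted_instrument='forceps')
try:
    Cut(target=[0.0, 0.02, -0.08]).execute(mounted_instrument='forceps')
except RuntimeError as err:
    print('Safety check stopped the surgeme: %s' % err)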
D. Subtask-level logic 
The whole system is controlled by a subtask-level logic node. This node receives the information from the perception nodes, handles errors and user (surgeon) interactions, and contains and performs the specific workflow of the current subtask.
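The sketch below shows what such a subtask-level node could look like in rospy: it listens to a perception topic, runs a fixed workflow whenever a target is reported, and logs the error and keeps waiting if a step fails. The topic name, message type and workflow steps are placeholders, not the actual irob-saf interfaces.

#!/usr/bin/env python
# Sketch of a subtask-level logic node built around a perception input.
import rospy
from geometry_msgs.msg import PointStamped

class SubtaskLogic(object):
    def __init__(self):
        self.target = None
        rospy.Subscriber('/perception/target_point', PointStamped, self.target_cb)

    def target_cb(self, msg):
        self.target = msg                      # latest target from perception

    def run(self):
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            if self.target is not None:
                try:
                    # Placeholder workflow of the subtask.
                    rospy.loginfo('Approaching target at %s', self.target.point)
                    rospy.loginfo('Grasping')
                    rospy.loginfo('Retracting')
                except Exception as err:
                    rospy.logerr('Subtask step failed: %s', err)
                self.target = None             # wait for the next perception result
            rate.sleep()

if __name__ == '__main__':
    rospy.init_node('subtask_logic_sketch')
    SubtaskLogic().run()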
DISCUSSION
An open-source, ROS-based software package to ease surgical subtask automation is presented. This framework interfaces sensory inputs, perception algorithms and robots, and contains a surgeme-level motion library. The whole system can be controlled by a subtask-level logic ROS node, tailored to the needs of the current subtask to be automated. The iRob Surgical Automation Framework, irob-saf, is available on GitHub, and is being continuously developed and updated."

Read the full paper here.

