The neurosurgery robot project II.
The great challenge of my project is to increase the overall accuracy of the robotic setup: for future validation of the system, sub-millimeter accuracy has to be achieved. Let me introduce the major components of the neurosurgical system (click on the side figure for details). Patients undergo a detailed CT scan before the surgery; usually we have 0.5 mm slices, sometimes 2 mm (which may be enough for research purposes). The DICOM-format CT scans are currently converted into the open-source 3D Slicer 3.0 format so that we can display them on our computers. The Virtual Fixtures are also defined in Slicer by putting together the appropriate planes. The CT scan is then loaded onto the StealthStation platform, which has to be registered to the actual skull by touching at least four distinguishable points on it (in vivo these are screws in the bone). This procedure is the same with every optical tracking device, and the best achievable accuracy is 0.5–0.9 mm. Afterwards we can read the position of the robot's marker relative to the marker mounted on the patient (or on the operating table).
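The point-touching registration above is, mathematically, a paired-point rigid fit. As a minimal sketch (not the StealthStation's actual algorithm — the function name and NumPy usage are my own illustration), the standard SVD-based (Kabsch/Horn) solution looks like this:

```python
import numpy as np

def register_rigid(src, dst):
    """Least-squares rigid registration via SVD (Kabsch/Horn).

    src, dst: (N, 3) arrays of corresponding fiducial points
    (e.g. skull screws in CT coordinates vs. tracker coordinates).
    Returns R (3x3) and t (3,) such that dst ~= src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)          # center both point clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With at least four non-coplanar fiducials, as in the procedure described, the fit is well determined; the residual distances after applying (R, t) are the usual fiducial registration error one would report.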
Another registration procedure determines the relation between the virtual frame at the robot's base (derived from the optical marker mounted on the robot's end) and the StealthStation's inner coordinate system. We calculate this by moving the robot into six different positions: at each one we read the end's position through the StealthLink interface, and compute the base position by inverting the forward kinematics evaluated at the joint values reported by the robot's encoders. The accuracy of the procedure depends heavily on the range over which the joints are moved, since we do not have a perfectly precise kinematic model of the manipulator. The encoders' accuracy allows a precision of at most 0.15 mm.
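One way to sketch this base registration: at each of the six poses we have the end frame both in tracker coordinates (via StealthLink) and in base coordinates (via forward kinematics), and each pair yields an estimate of the fixed tracker-to-base transform, which we can then average. This is my own simplified illustration under the assumption of 4x4 homogeneous transforms, not necessarily the method we use in practice:

```python
import numpy as np

def base_from_pose_pairs(T_tracker_end_list, T_base_end_list):
    """Estimate the fixed tracker->base transform from paired poses.

    For each robot pose i:  T_tracker_base = T_tracker_end_i @ inv(T_base_end_i).
    Noise is smoothed by averaging: the translations are averaged directly,
    and the mean rotation matrix is projected back onto SO(3) via SVD.
    """
    estimates = [T_te @ np.linalg.inv(T_be)
                 for T_te, T_be in zip(T_tracker_end_list, T_base_end_list)]
    M = np.mean(estimates, axis=0)
    U, _, Vt = np.linalg.svd(M[:3, :3])     # nearest proper rotation
    d = np.sign(np.linalg.det(U @ Vt))
    T = np.eye(4)
    T[:3, :3] = U @ np.diag([1.0, 1.0, d]) @ Vt
    T[:3, 3] = M[:3, 3]
    return T
```

The remark about joint range in the text shows up here too: if the six poses differ only slightly, the per-pose estimates are nearly redundant and kinematic model errors barely average out.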
Finally, we have to determine the spatial position of the cutter tool's tip relative to both the marker and the tool center point of the NeuroMate. We achieve this by performing a pivot calibration, again using the forward kinematics: we approach the same point (the bottom of a cone) in six different orientations, from which we can compute the position of the tooltip.
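The pivot calibration step can be written as a small least-squares problem: while the tip rests at the fixed pivot point, every tool-frame pose (R_i, t_i) satisfies R_i p_tip + t_i = p_pivot. A minimal sketch of the standard solution (function name and layout are my own illustration):

```python
import numpy as np

def pivot_calibration(R_list, t_list):
    """Solve for the tool-tip offset from poses pivoting about one point.

    Each (R_i, t_i) is the tool-frame pose while the tip stays on the
    fixed pivot:  R_i @ p_tip + t_i = p_pivot  for all i.
    Stacking the constraints gives  [R_i | -I] @ [p_tip; p_pivot] = -t_i,
    solved in the least-squares sense.
    Returns (p_tip in the tool frame, p_pivot in the base frame).
    """
    n = len(R_list)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(R_list, t_list)):
        A[3 * i:3 * i + 3, :3] = R
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)
        b[3 * i:3 * i + 3] = -t
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]
```

Six well-spread orientations, as in the cone procedure above, comfortably over-determine the six unknowns, so the residual of the fit gives a direct sanity check on the calibration.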
With all of these, we have the transformations to go from any frame to any other. For now we use the robot's world frame to determine the motion commands, but later we will relocate it to the reference frame. In the meantime we are continuously looking for better methods to increase the overall accuracy; the system is quite complex, so there is a lot we can do.