Motion Generation of Robotic Surgical Tasks: Learning From Expert Demonstrations
Robotic surgical assistants offer the possibility of automating portions of a task that are time-consuming and tedious, in order to reduce the cognitive workload of a surgeon. This work uses programming by demonstration to build generative models and generate smooth trajectories that capture the underlying structure of motion data recorded from expert demonstrations. Specifically, motion data from Intuitive Surgical’s da Vinci Surgical System are recorded while a panel of expert surgeons performs three surgical tasks. The trials are decomposed into subtasks, or surgemes, which are then temporally aligned through dynamic time warping. Next, a Gaussian Mixture Model (GMM) encodes the experts’ underlying motion structure. Gaussian Mixture Regression (GMR) is then used to extract a smooth reference trajectory for reproducing the task. The approach is evaluated through an automated skill assessment measurement. Results indicate that the approach presents a means to (i) extract important features of the task, (ii) create a metric to evaluate the robot’s imitative performance, and (iii) generate smooth trajectories for the reproduction of three common surgical tasks.
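The pipeline summarized above (temporal alignment via dynamic time warping, then regression over a Gaussian mixture to recover a smooth reference trajectory) can be sketched in a minimal, NumPy-only form. This is an illustrative sketch, not the authors' implementation: the DTW uses a simple absolute-difference cost on 1-D sequences, and the GMR step assumes a joint GMM over (time, position) whose parameters (`weights`, `means`, `covs`) are already fitted and supplied by the caller.

```python
import numpy as np

def dtw_distance(a, b):
    # Cumulative-cost dynamic time warping between two 1-D sequences,
    # with absolute difference as the local cost.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def gaussian_pdf(x, mu, var):
    # 1-D Gaussian density.
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def gmr(t, weights, means, covs):
    # Gaussian Mixture Regression: condition a joint GMM over (t, x) on a
    # query time t to obtain the expected position x(t).
    # means[k] = (mu_t, mu_x); covs[k] = 2x2 covariance of (t, x).
    h = np.array([w * gaussian_pdf(t, m[0], c[0, 0])
                  for w, m, c in zip(weights, means, covs)])
    h /= h.sum()  # responsibilities of each component at time t
    # Conditional mean of each component: mu_x + s_xt / s_tt * (t - mu_t)
    cond = [m[1] + c[1, 0] / c[0, 0] * (t - m[0])
            for m, c in zip(means, covs)]
    return float(np.dot(h, cond))

# Toy usage: a two-component mixture whose components sit at
# (t, x) = (0, 0) and (1, 1); GMR interpolates smoothly between them.
weights = [0.5, 0.5]
means = np.array([[0.0, 0.0], [1.0, 1.0]])
covs = [np.array([[0.01, 0.0], [0.0, 0.01]])] * 2
trajectory = [gmr(t, weights, means, covs) for t in np.linspace(0.0, 1.0, 11)]
```

In practice the mixture parameters would be fitted by expectation-maximization over the DTW-aligned demonstrations (e.g. with an off-the-shelf GMM implementation), and the state would be multidimensional rather than scalar; the conditioning formula is the same with block matrices in place of scalars.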