Researchers from Johns Hopkins and Stanford Universities have demonstrated that fundamental surgical manipulation tasks can be learned on the da Vinci Research Kit (dVRK) via video-based imitation learning. Once trained, the robot, which is normally teleoperated by a surgeon, was able to autonomously manipulate tissue, handle a needle, and tie knots with skill comparable to a human doctor!
The da Vinci Robotic Surgical System comes in single-port and multi-port variants and is designed to help surgeons perform minimally invasive procedures with greater range of motion and accuracy.
Here’s a video demonstration of the team’s model in action:
Their model is based on ACT (Action Chunking with Transformers), a transformer-based architecture similar to what underlies LLMs, but designed for fine-grained robotic manipulation tasks.
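For intuition, here's a minimal PyTorch sketch of the action-chunking idea: the policy encodes visual features and decodes a whole chunk of future actions in a single forward pass, rather than predicting one step at a time. The dimensions, layer counts, and names below are my own illustrative guesses, not the paper's actual configuration (ACT also adds a CVAE-style training objective that I've left out here).

```python
import torch
import torch.nn as nn

class ChunkedActionPolicy(nn.Module):
    """Illustrative ACT-style policy: encode camera features, decode a
    fixed-length chunk of future actions in one forward pass.
    All sizes are hypothetical, not taken from the paper."""

    def __init__(self, obs_dim=512, action_dim=10, chunk_size=20, d_model=256):
        super().__init__()
        self.obs_proj = nn.Linear(obs_dim, d_model)
        # One learned query per future timestep in the action chunk.
        self.action_queries = nn.Parameter(torch.randn(chunk_size, d_model))
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=4, num_decoder_layers=4,
            batch_first=True,
        )
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, obs_features):
        # obs_features: (batch, num_tokens, obs_dim), e.g. image patch features
        memory = self.obs_proj(obs_features)
        queries = self.action_queries.unsqueeze(0).expand(memory.size(0), -1, -1)
        decoded = self.transformer(memory, queries)   # (batch, chunk, d_model)
        return self.action_head(decoded)              # (batch, chunk, action_dim)

# Toy usage: predict a 20-step action chunk from dummy visual features.
policy = ChunkedActionPolicy()
dummy_obs = torch.randn(1, 49, 512)   # e.g. a 7x7 grid of image patch features
action_chunk = policy(dummy_obs)
print(action_chunk.shape)             # torch.Size([1, 20, 10])
```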
Performance was much better when actions were modeled as relative motions rather than as absolute poses from forward kinematics, because the dVRK's joint measurements are imprecise.
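To make that concrete, here's a small numpy sketch (the helper names and pose conventions are my own illustration, not the paper's code): a relative action only tells the arm how to move from wherever it currently is, so a constant bias in the measured kinematics composes back out instead of being baked into every commanded pose.

```python
import numpy as np

def to_relative_action(T_current, T_target):
    """Express the target end-effector pose as a motion relative to the
    current pose: T_delta = inv(T_current) @ T_target.
    Both inputs are 4x4 homogeneous transforms (hypothetical convention)."""
    return np.linalg.inv(T_current) @ T_target

def apply_relative_action(T_current, T_delta):
    """Recover the commanded pose by composing the relative motion with
    wherever the arm actually is right now."""
    return T_current @ T_delta

# Toy example: a 5 mm translation along x, expressed relative to the gripper.
T_current = np.eye(4)
T_current[:3, 3] = [0.10, 0.02, 0.05]   # current gripper position (metres)
T_target = T_current.copy()
T_target[0, 3] += 0.005                  # move 5 mm along x

T_delta = to_relative_action(T_current, T_target)
print(np.allclose(apply_relative_action(T_current, T_delta), T_target))  # True
```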
What amazes me is that the robot was able to work through challenges it was not trained to deal with, like dropping a needle or having a screwdriver interfere with a knot! That’s the type of thing I experience with LLMs on a regular basis and it blows my mind every time. 🤓
I’m interested in seeing where this research takes us. Will we experience a future where robots perform surgery without human help? The da Vinci surgical system already captures data across a wide range of procedures, so it seems entirely possible!
You can read the research paper on GitHub, which includes lots of video examples of successes and failures.
Here’s the article posted by Johns Hopkins University.