Game on: Xbox camera offers touchless method of browsing images
A specialized camera system that can recognize hand gestures could soon allow surgeons to browse and display medical images in the operating room without touching a keyboard or mouse.
Created by researchers at Purdue University, in West Lafayette, Ind., the hand-gesture recognition system combines the Kinect camera, originally developed by Microsoft for its Xbox gaming system, with an algorithm that converts the gestures to commands that manipulate MRI images on a large display.
Development of the algorithms has been led by doctoral student Mithun G. Jacob, and a paper on the research was published in December 2012 in the Journal of the American Medical Informatics Association.
“Gestures are a natural and efficient way to manipulate images and have been used in the OR to improve user performance for data entry,” wrote Jacob and colleagues. “A touchless interface would allow the surgeon to directly interact with images without compromising sterility.”
The authors validated the system by working with veterinary surgeons, who specified functions they typically perform with MRI images during surgery and suggested gestures to use as commands. Ten gestures were chosen to rotate, browse, adjust brightness and control zoom on the images.
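To illustrate the general idea, here is a minimal sketch of how recognized gestures might be dispatched to image-manipulation commands. The gesture labels and the view model below are hypothetical placeholders; the study's actual ten-gesture vocabulary is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class ImageView:
    """Hypothetical state of the displayed MRI image."""
    rotation: int = 0       # degrees
    brightness: float = 1.0
    zoom: float = 1.0
    index: int = 0          # current slice in the MRI series

def apply_gesture(view: ImageView, gesture: str) -> ImageView:
    """Dispatch a recognized gesture label to an image command.

    Gesture names are illustrative stand-ins, not the study's vocabulary.
    """
    if gesture == "swipe_left":
        view.index -= 1                          # browse to previous image
    elif gesture == "swipe_right":
        view.index += 1                          # browse to next image
    elif gesture == "rotate_cw":
        view.rotation = (view.rotation + 90) % 360
    elif gesture == "palm_up":
        view.brightness *= 1.1                   # raise brightness
    elif gesture == "palm_down":
        view.brightness *= 0.9                   # lower brightness
    elif gesture == "spread":
        view.zoom *= 1.25                        # zoom in
    elif gesture == "pinch":
        view.zoom *= 0.8                         # zoom out
    return view
```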
Jacob and colleagues found that the inclusion of contextual information greatly improved the accuracy of the system. In addition to tracking hands, the camera observes the surgeon’s torso and head, and the algorithm takes into account the current phase of the surgery.
“We can determine context by looking at the position of the torso and the orientation of the surgeon’s gaze,” co-author Juan P. Wachs, PhD, said in a press release. “Based on the direction of the gaze and the torso position we can assess whether the surgeon wants to access medical images.”
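In broad strokes, this contextual filtering can be thought of as a gate on the gesture classifier: a recognized gesture is translated into a command only when the surgeon's pose suggests intent to interact with the display. The sketch below illustrates that idea; the confidence threshold, context features, and surgery-phase names are assumptions for illustration, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical contextual cues observed alongside the hands."""
    torso_facing_display: bool   # torso oriented toward the image display
    gaze_on_display: bool        # head/gaze direction intersects the display
    surgery_phase: str           # e.g. "planning", "incision", "closing"

# Phases in which image navigation is plausible (illustrative assumption).
IMAGE_PHASES = {"planning", "incision"}

def accept_gesture(gesture: str, confidence: float, ctx: Context,
                   threshold: float = 0.8) -> bool:
    """Gate the gesture classifier's output on contextual cues.

    A gesture becomes a command only when the classifier is confident
    AND torso, gaze, and surgical phase all indicate intent to interact
    with the display; otherwise it is discarded as a likely false positive.
    """
    if confidence < threshold:
        return False             # weak classification: ignore
    if not (ctx.torso_facing_display and ctx.gaze_on_display):
        return False             # surgeon not attending to the display
    if ctx.surgery_phase not in IMAGE_PHASES:
        return False             # phase where image navigation is unlikely
    return True
```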
Results showed that integrating context reduced the rate of false positives in gesture recognition from 20.8 percent to 2.3 percent. The authors also reported that the system demonstrated a mean accuracy of 92.6 percent in translating gestures into commands.
Next steps for the researchers include collecting additional training data from a larger pool of users and incorporating further environmental cues. The team is also exploring context with a mock brain biopsy needle that can be tracked inside the brain, allowing the system to anticipate which images the surgeon will need to see next and reducing the number of gestures required, according to Wachs.