Microsoft's Hololens has been getting a tremendous amount of attention over the past few years. Hype about the technological, financial, and social potential of augmented reality has been building steadily, especially given the recent frenzy surrounding Pokémon Go. To clarify: while “mixed reality” is probably a more accurate term for a technology that blends simulated objects with your surroundings almost indistinguishably, we will use the term “augmented reality” in this post, as it is still more common. If you don’t have time to read this lengthy hands-on, here is the low-down: the Hololens is a very impressive technology with very compelling medical use cases, although it currently has some limitations that seem likely to be addressed in later generations of the device. Keep in mind there are additional headsets to keep an eye on, including, but not limited to, the Daqri, Epson’s Moverio, Magic Leap, ODG’s R-7 Smartglasses, and the Meta 2.
The health technology sector is very excited about the potential of augmented reality for a variety of applications. Health-related AR startups are already getting a lot of buzz and funding. One of the most talked-about applications is the holographic anatomy educational program, the result of a partnership between Microsoft and Case Western Reserve University. Reading about the various applications and the technology is exciting; however, it is hard to get a sense of the Hololens’ potential, and its limitations, without trying it yourself.
Thanks to Neil Gupta, organizer of the Boston Augmented/Mixed Reality Meetup, and Microsoft, I was able to get hands on (or head in?) a Hololens and try it out for myself. We tried the Hololens in a sizable enclosed space that proved ideal for its tracking and projection capabilities. The headset is pretty ergonomic. After a little fiddling with the size and the angle of the unconventional oblique strap that keeps it in place, it was easy to forget I was wearing it. I didn’t feel that it was incredibly secure, however, and I imagine that during a long surgery you would need someone to adjust it occasionally.
Once the headset was secure I was ready to go. It only took a short time to get used to the gesture system, which is very limited at this point, but functional. The gesture for a click involves holding your thumb and index finger in an “L” shape and then touching them together. This interacts with whatever you are looking at: you target with your gaze, then commit with the click gesture. There is an additional gesture, almost like a “right-click,” that consists of taking a clenched fist and slowly opening it with your palm facing the ceiling. There is a sweet spot where your hand needs to be in order for these gestures to be recognized by the Hololens. You can even type on a virtual keyboard using this gesture system. The whole setup works pretty reliably once you get the hang of it, but it is disappointing that the hand-tracking is not more sophisticated at this time. There is also the option to use voice recognition, but I mainly stuck with the gestures. I imagine the hand-tracking will improve in time.
One thing I did immediately was set up my virtual holographic space. This was incredibly cool. I placed various virtual animals and objects around the room, and could also place highly functional browser windows wherever I liked. The browser windows in particular really impressed me. The text and images were high resolution and very readable. Looking at them, I truly never got the sense that these were holograms; it felt like the windows shared the room with me. Some of the holographic objects worked better than others. A combination of an object’s color and design and the lighting and background I placed it against affected its “realness.” But overall, these very basic functions of the Hololens impressed me the most. I should also mention that the tracking is pretty unbelievable. A major part of the technology behind the Hololens is sensing where you are in 3D space and adjusting the projections of the holograms accordingly. While some other headsets are struggling with this particular challenge at the moment, it is not a problem for the Hololens. Moving around the room, even briskly, does not cause any jitter or apparent shift in the position or rotation of the surrounding holograms.
I then spent some time playing “Project X-ray,” a pretty fun Space Invaders-style game in which spider robots burst through the wall and attack you. The game is a blast, and you have to physically dodge laser beams as you try to take out the robots. Not something you really want to do with anyone watching you, I might add. The illusion, once again, is very impressive here: the virtual holes in the wall that the robots burst through were, at points, really quite convincing.
Now it’s time to talk about the elephant in the room: the Hololens’ limited field-of-view (FOV). You only see holograms in a small box in the center of your vision. So when you see promotional videos with a room filled with holograms, it’s a little misleading. While the object data for those holograms is there, you really have to sweep your head around to see it all. From a gaming and entertainment perspective this may be a deal-breaker at the moment, but for medical applications it might not be a big deal at all.
I won’t lie, I was pretty excited after my brief experience with the Hololens. I can already see it being useful for medical applications in its current state. I’m a surgeon, so I’ve been thinking about ways I could use this in an OR. During surgery, a major challenge is controlling the location and content of visual displays in the room. There are usually several monitors that can display a camera feed, vital signs, or imaging data. Getting the circulating nurse, who is sometimes unfamiliar with the system, to change out the feeds, reposition the monitors, and/or open available imaging can get frustrating very quickly, especially during a stressful case. From what I have experienced, the Hololens already has the ability to completely eliminate this cumbersome process and let you create a custom holographic layout of patient-related information. Furthermore, thanks to the gesture system, you can cycle through the data and imaging at your leisure, which would benefit surgeon and patient alike. What’s unclear to me at the moment is how the lighting conditions in an operating room, which can be very harsh, would affect the quality of the holographic projections. I’m also curious whether they would affect the gesture-recognition capabilities. Finally, I’m a little worried that the tinted visor of the Hololens may reduce a surgeon’s ability to view the operating field clearly, which could limit its usefulness for surgical applications. It also doesn’t seem like it would sufficiently protect your eyes from bodily fluids that might splash up during a case.
Some other AR applications for the operating room I’m pretty excited about include using surgical navigation technology to “see through” patients. For example, with an AR headset I could see the bony structure of a patient’s spine through the body using reconstructed CT scans. Another exciting application is the ability for surgeons to guide other surgeons remotely, known as telementoring.
The future of augmented reality for medicine and surgery is bright and exciting. The technology is already at the point where it has some very compelling uses. There are limitations in its current state, but these problems appear relatively minor in the grand scheme of things and will likely be overcome in the next few years. The augmented/mixed reality headset may one day join the mask and surgical cap as a necessity in the OR.