February 7, 2020 marked an important occasion for me. A HoloLens 2 arrived at my door. For the first time since March 2019, when I got to try a HoloLens 2 during the MVP Summit for a few minutes, I actually had my hands on a device. And what’s more – I got a month to play, test and develop with it, courtesy of fellow MVP and Regional Director Philipp Bauknecht of MediaLesson GmbH, a real community hero, who graciously provided me with this learning opportunity. I hope I will someday be able to repay him this enormous favor.
Just having this device around and being able to test and develop with it quite changed my views of what’s actually important – and what makes it a game changer.
First of all, I am going to bring your hopes down a little. To an extent, HoloLens 2 suffers a bit from what I would like to coin “the Apollo 12 effect”. The whole world followed Neil Armstrong, Buzz Aldrin and Michael Collins to the Moon, glued to a very bad black & white screen while the first two men took their steps on the Moon. But a lot fewer people watched Apollo 12. Successive flights got even less attention (bar Apollo 13, but that was because the crew almost died, not because they landed). People had literally seen this before and were – I kid you not – complaining about footage of men on the Moon eating up precious TV time from the football games. The flights after Apollo 17 were cancelled. People are extremely good at accepting ‘magic’ and then getting bored with it.
As far as display goes, HoloLens 2 shows you virtual objects in 3D space that can interact with reality. This, my friends, is exactly what HoloLens 1 did.
It does this a lot faster, the view is brighter, the holograms are a lot more stable, and the thing almost everyone harped on – the field of view – has been considerably increased. I can almost imagine Alex Kipman shouting “we gave you bloody magic and all you kept telling me was the view was not big enough – are you happy now???“
There are other things: the device is a lot more ergonomic, and it feels lighter but actually isn’t by much – it’s just better balanced. Donning it is easy as cake, taking it off as well, and charging via USB-C is a godsend – no more fiddling with MicroUSB on a wobbly end. I have seen more than one HoloLens 1 with a damaged charging port.
That’s all very fine and welcome. But that’s not what I mean by game changing. To stay with the space terminology: HoloLens 1 was like we suddenly had a fusion rocket. It was awesome, but parts of it were messy.
HoloLens 2 has a warp drive.
Interaction, interaction, interaction
Everyone who has ever used HoloLens 1 – or better still, tried instructing a newbie user to use one – knows the challenge. You can select something by pointing your head at a Hologram, then performing an air tap. Just tap your finger and thumb together. Easy as cake. And yet I have witnessed people who for some reason could not perform this simple task successfully. Either they did not point the cursor correctly, or they made gestures that were almost but not quite an air tap, made it too slow or too fast, contorted their hands in a way that apparently confused the device, or started to make up gestures – which of course did not work at all. Most people got it, but between 10 and 20% just never could get it to work reliably, if at all. The HoloLens 1 came with a little clicker for those people, apparently a last-minute addition – that almost no-one ever used. It either lost its charge at an inconvenient time, or (in most cases) got lost altogether, it being a small device that was easily forgotten or dropped somewhere.
HoloLens 2 does not come with a clicker, and that’s for a reason. If you make a gesture that even remotely resembles an air tap, it registers it as such – with such ferocity and accuracy that if you have a large contact surface you might even get some inadvertent air taps in (I will have to look into that for my app Walk the World, for instance).
In addition, what everyone first saw demoed by the amazing Julia Schwarz – the ability to just touch, grab and move Holograms – works amazingly well. To such an extent that you can push buttons like they are real, and grab, move and rotate things like they are real… everything with amazing accuracy. You can even have your hand visualized, and then it looks like you have a computer-generated glove on your hand – it follows every little movement. The resulting interaction model is very natural. So natural that at first you actually expect haptic feedback when you push a ‘button’. Maybe something for HoloLens 3 ;).
There are a few things you might want to explain when you instruct someone to use the device for the first time, to speed things up – like that the start button is on your wrist. No more bloom gestures – the Italians will appreciate that ;). You might want to explain how an air tap works, but it’s likely people will find that out by themselves, as it is so easy now. Also, the device goes out of its way to explain itself on first startup. The fact that it can not only track your hand but also recognizes all kinds of hand postures and gestures allows for much more detailed control, and my personal favorite is having menus pop up when you hold one hand in a certain position. These hand palm menus can be made very easily, using no code at all – just using stuff that’s included in the MRTK2 out of the box.
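Normally you would set a palm menu up entirely in the Unity editor, by adding and configuring the MRTK2 solver components on your menu object. Just as an illustration, a rough sketch of the equivalent wiring in code might look like the following – the class and property names are MRTK2’s, but the `PalmMenuSetup` behaviour and the log messages are hypothetical:

```csharp
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

// Hypothetical sketch: attach a hand palm menu to a menu GameObject from code.
// In practice you would simply add these two MRTK2 components in the editor,
// exactly as described above - no code required.
public class PalmMenuSetup : MonoBehaviour
{
    private void Start()
    {
        // SolverHandler determines what the solver follows - here, hand joints
        var handler = gameObject.AddComponent<SolverHandler>();
        handler.TrackedTargetType = TrackedObjectType.HandJoint;
        handler.TrackedHandness = Handedness.Both;

        // HandConstraintPalmUp activates when an upturned palm faces the camera
        var palmUp = gameObject.AddComponent<HandConstraintPalmUp>();
        palmUp.OnHandActivate.AddListener(() => Debug.Log("Palm menu shown"));
        palmUp.OnHandDeactivate.AddListener(() => Debug.Log("Palm menu hidden"));
    }
}
```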
But wait, there’s more
Voice commands, remember that? The thing that everyone used like crazy and then quickly came back from, as it did not always work in noisy environments, especially with a lot of talk around. And making an odd gesture in empty space and looking weird is one thing, but shouting repeatedly at a device makes you feel very awkward indeed. Whatever they did to it, it’s now way more accurate and confident at recognizing speech. Even in a very loud room with people talking. Speech control is everywhere in HoloLens 2, and very easy to use reliably.
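For those curious what using speech commands looks like from code: in MRTK2 you implement the `IMixedRealitySpeechHandler` interface and register the keyword in the speech commands profile. A minimal sketch, assuming a hypothetical “toggle map” keyword and a `map` GameObject of my own invention:

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Hypothetical example: toggle an object with the voice command "toggle map".
// The keyword itself must also be registered in the MRTK2 speech commands
// profile; the interface and registration calls below are MRTK2's own.
public class ToggleMapSpeechHandler : MonoBehaviour, IMixedRealitySpeechHandler
{
    [SerializeField] private GameObject map;

    private void OnEnable()
    {
        // Register globally, so the command works without focus on this object
        CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);
    }

    private void OnDisable()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealitySpeechHandler>(this);
    }

    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        if (eventData.Command.Keyword.ToLower() == "toggle map")
        {
            map.SetActive(!map.activeSelf);
        }
    }
}
```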
And then there’s eye tracking. Remember how you had to move your whole head to point the gaze cursor? It now tracks your eyes. It knows what you are looking at. I use this in AMS HoloATC to make an image of the actual airplane pop up when you look at the model. There are four (or five, depending on what you include) events that you can easily track. I also learned that on a real-life device I make that happen way too fast and too nervously. Now that I have a real device, I will be able to fix this in the near future.
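The events mentioned above come from MRTK2’s `EyeTrackingTarget` component. A small sketch of how the AMS HoloATC behavior could be wired up – the `AirplaneGazeInfo` class and the `infoImage` object are my own hypothetical names, not the actual app code:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Hypothetical example: show an info image while the user looks at an
// airplane model. EyeTrackingTarget is the MRTK2 component that exposes
// per-object gaze events such as OnLookAtStart and OnLookAway.
public class AirplaneGazeInfo : MonoBehaviour
{
    [SerializeField] private GameObject infoImage;

    private void Start()
    {
        // Added at runtime here for brevity; typically you would add and
        // wire up the component in the Unity editor instead.
        var target = gameObject.AddComponent<EyeTrackingTarget>();
        target.OnLookAtStart.AddListener(() => infoImage.SetActive(true));
        target.OnLookAway.AddListener(() => infoImage.SetActive(false));
    }
}
```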
Eye tracking also has some extra benefits – first of all, it allows the device to offer Windows Hello login using iris recognition. Second, calibrating is a lot easier and faster. No longer do you have to first close one eye, then very, very precisely move your finger into the right slot a couple of times, and repeat that for the other eye – you now simply have to track a few holograms with your eyes as they move through your view. And you really should do that – Microsoft pushed the envelope a lot further when it comes to display technology, so if you don’t calibrate properly, there’s a much bigger chance of getting a fuzzy view. Fortunately the device has a setting that automatically starts the calibration routine when it detects the user has changed (which it presumably does using the iris scan).
HoloLens 2 is an amazing device, with amazing display technology – but it’s the interaction model that makes it really special. This is what takes it over the top, makes it natural, simple to learn and easy to use. The hand and eye tracking removes the barrier of artificial gestures and makes wandering around and interacting with Holograms a lot easier. This will ease adoption of the device in business settings – especially industrial and manufacturing environments.
I love living in the future :)