A thorough market study taught us that the HoloLens is the most advanced pair of smartglasses out there. That’s why we decided to focus on it. Its capabilities show the wide potential of this kind of computing device. Have a look!
The idea of the HoloLens is to display holograms in our natural, physical environment. To do so, it uses high-end laser technology to project artificial light into our eyes, much the same way we perceive natural light from the world around us. This lets us see all sorts of digital information and holograms while remaining fully aware of the real environment.
The HoloLens has several built-in cameras that continuously watch our eyes. These are used for iris recognition and eye tracking.
Iris recognition is used for biometric security. When you put on the device, it automatically knows who you are and uses this to log you in – you no longer need a username and password. It basically works like Apple’s FaceID on the iPhone, providing a convenient yet very secure authentication mechanism. This comes in handy especially in an enterprise environment where the glasses are shared among multiple colleagues (the device remembers up to 64 different users).
Eye tracking is used for two purposes: calibrating the display to your eyes so that holograms appear sharp and correctly positioned, and serving as an input method – the device knows exactly what you are looking at.
It’s worth noting that we (as developers) can leverage both techniques within an app context as well. Let’s start building interfaces with a level of usability that simply wasn’t possible before, shall we?
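To give an idea of what an eye-tracking-aware interface could do, here is a minimal, platform-agnostic sketch of the common “dwell selection” pattern – look at something long enough and it gets selected. It is written in Python purely for illustration (real HoloLens apps are typically built in Unity/C#), and the `DwellSelector` class, its threshold, and its inputs are hypothetical, not part of any HoloLens API:

```python
import time

class DwellSelector:
    """Fires a selection when the user's gaze rests on the same target
    for longer than `dwell_time` seconds. Hypothetical sketch: a real
    app would read the gazed-at target from the platform's eye-tracking
    API every frame."""

    def __init__(self, dwell_time=0.8):
        self.dwell_time = dwell_time
        self._target = None     # target currently being looked at
        self._since = None      # when the current fixation started

    def update(self, gazed_target, now=None):
        """Feed the currently gazed-at target each frame.
        Returns the target exactly once when the dwell threshold is hit."""
        now = time.monotonic() if now is None else now
        if gazed_target != self._target:
            # Gaze moved to a new target: restart the dwell timer.
            self._target = gazed_target
            self._since = now
            return None
        if gazed_target is not None and now - self._since >= self.dwell_time:
            self._since = float("inf")   # fire only once per fixation
            return gazed_target
        return None
```

Feeding it `"button_ok"` every frame would return `None` until 0.8 seconds have passed, then yield `"button_ok"` a single time – the kind of hands-free selection that simply wasn’t possible before eye tracking.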
The device also has a depth camera which is used, in combination with a ‘time of flight’ light technique and AI, to create a 3D map (called ‘the spatial map’) of your environment in real time. That way, objects and holograms can be placed into your surroundings the same way real, physical items would – resting on a table, hanging on a wall, or even hidden behind real objects.
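Under the hood, placing a hologram on a real surface boils down to casting a ray (for example along the user’s gaze) against the triangles of the spatial map and anchoring the hologram at the hit point. Here is a small Python sketch of that idea using the standard Möller–Trumbore ray/triangle test – on the device itself this is handled for you by the platform’s scene and physics APIs, so treat this as illustration only:

```python
def _sub(a, b): return [a[i] - b[i] for i in range(3)]
def _cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
def _dot(a, b): return sum(a[i] * b[i] for i in range(3))

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle test. Returns the distance t along
    the ray to the hit point, or None if the triangle is missed."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    h = _cross(direction, e2)
    a = _dot(e1, h)
    if abs(a) < eps:                 # ray is parallel to the triangle
        return None
    f = 1.0 / a
    s = _sub(origin, v0)
    u = f * _dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(s, e1)
    v = f * _dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * _dot(e2, q)
    return t if t > eps else None    # hits behind the ray don't count

def place_on_mesh(origin, direction, triangles):
    """Return the closest point where the gaze ray hits the spatial
    map - i.e. where a hologram could be anchored - or None."""
    best = None
    for v0, v1, v2 in triangles:
        t = ray_triangle_intersect(origin, direction, v0, v1, v2)
        if t is not None and (best is None or t < best):
            best = t
    if best is None:
        return None
    return tuple(origin[i] + best * direction[i] for i in range(3))
```

Looking straight ahead at a wall two metres away, `place_on_mesh` returns the point on that wall where the hologram should stick.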
Next to that, we can use the spatial map for interpreting the environment surrounding our apps’ users. This allows us for example to measure distances & sizes of objects, or to detect anomalies on a construction site by comparing the spatial map to the 3D design an architect made. And this is only the beginning… Did we mention that the spatial map itself can be extracted from the HoloLens as a 3D object as well; turning the device into a portable (and wearable) “3D scanner”? The possibilities seem endless!
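The measuring scenario above is, at its core, simple geometry on spatial-map coordinates. A small sketch, assuming two picked points (or a set of mesh vertices belonging to one object) are already available – the picking itself would be done via gaze or gestures on the device:

```python
import math

def distance(p, q):
    """Straight-line distance between two points picked on the
    spatial map (coordinates are in metres)."""
    return math.dist(p, q)

def bounding_size(points):
    """Width, height and depth of the axis-aligned box around a set of
    spatial-map vertices, e.g. all vertices belonging to one object."""
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    return tuple(maxs[i] - mins[i] for i in range(3))
```

Comparing the scanned mesh to an architect’s 3D design is the same idea taken further: measure the deviation between corresponding points in the two models and flag the ones that exceed a tolerance.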
To interact with the HoloLens, you use hand gestures, gaze, and voice commands. The HoloLens tracks the movements of your hands and recognizes gestures by means of AI and computer vision. Because it processes this data in real time, interacting with the device feels very natural. Just like swiping on your smartphone, the HoloLens comes with a set of standard gestures that work out of the box, but custom gestures can be implemented as well.
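To give a feel for what gesture recognition looks like at its simplest, here is a toy Python sketch of a pinch detector. The joint positions would come from the platform’s hand-tracking API, and the thresholds are illustrative guesses; a little hysteresis between the press and release distances keeps the gesture from flickering at the boundary:

```python
import math

class PinchDetector:
    """Detects a pinch gesture from tracked thumb-tip and index-tip
    positions (hypothetical sketch; real apps get joint positions
    from the platform's hand-tracking API)."""

    def __init__(self, press_dist=0.02, release_dist=0.04):
        self.press_dist = press_dist      # metres: this close = pinch starts
        self.release_dist = release_dist  # metres: this far = pinch ends
        self.pinching = False

    def update(self, thumb_tip, index_tip):
        """Feed joint positions each frame; returns True while pinching."""
        d = math.dist(thumb_tip, index_tip)
        if not self.pinching and d < self.press_dist:
            self.pinching = True
        elif self.pinching and d > self.release_dist:
            self.pinching = False
        return self.pinching
```

A custom gesture is, conceptually, just more of the same: a condition (or a trained model) over the stream of tracked joint positions.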
The same goes for voice commands. Thanks to three microphones at the bottom of the visor, the HoloLens recognizes voice commands spoken at a normal volume. It also filters out background noise, so it can still be used perfectly well in harsh, loud environments (e.g. industrial sites).
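On the app side, voice commands usually boil down to mapping recognized phrases to actions – the speech recognition itself is done by the device. A minimal, hypothetical dispatcher sketch in Python:

```python
class VoiceCommands:
    """Minimal sketch of a phrase-to-action registry, the pattern used
    to wire voice commands to app behaviour. The speech-to-text step
    happens on the device; here we only dispatch its result."""

    def __init__(self):
        self._commands = {}

    def register(self, phrase, action):
        """Register a callable to run when `phrase` is recognized."""
        self._commands[phrase.lower()] = action

    def dispatch(self, recognized_text):
        """Run the action for a recognized phrase.
        Returns True on a match, False otherwise."""
        action = self._commands.get(recognized_text.strip().lower())
        if action is None:
            return False
        action()
        return True
```

Registering `"show menu"` against a callback and feeding the dispatcher whatever the recognizer heard is all an app needs to become voice-driven.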
3D models can be rendered on the HoloLens itself, as it is a fully capable, stand-alone (yet wearable) computer. For large-scale 3D models, however, it is better practice to connect it to more powerful servers in the cloud, which render everything and stream the high-quality result to your glasses in real time.
Lastly, the HoloLens is not just a single-user device. While it works perfectly for one person, it can also create a shared experience across multiple HoloLens users. In this shared experience, everyone can see and interact with the same holographic objects – whether people are in the same room (co-located) or not (remote). What a time to be alive!
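Conceptually, such a shared experience works by expressing every hologram’s position relative to a shared anchor (a physical reference point both devices recognize) rather than to any one device’s own coordinate frame. The sketch below illustrates that idea in Python, deliberately simplified to a position plus a yaw angle – real systems use full 3D rotations and the platform’s anchor APIs:

```python
import math

def world_to_anchor(p, anchor_pos, anchor_yaw):
    """Express a world-space point relative to a shared anchor
    (position + yaw about the vertical axis). Anchor-relative
    coordinates mean the same thing on every device."""
    dx, dy, dz = (p[i] - anchor_pos[i] for i in range(3))
    c, s = math.cos(-anchor_yaw), math.sin(-anchor_yaw)
    return (c * dx - s * dz, dy, s * dx + c * dz)

def anchor_to_world(p, anchor_pos, anchor_yaw):
    """Inverse: place an anchor-relative point into this device's
    own world frame, using where *it* sees the shared anchor."""
    c, s = math.cos(anchor_yaw), math.sin(anchor_yaw)
    x, z = c * p[0] - s * p[2], s * p[0] + c * p[2]
    return (x + anchor_pos[0], p[1] + anchor_pos[1], z + anchor_pos[2])
```

Device A converts a hologram’s pose to anchor-relative coordinates and shares those; device B converts them back using its own view of the anchor – and both users see the hologram in the same physical spot.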
Want to learn how these capabilities can be leveraged in your organization? Then book the workshop that fits your needs.