Migrating to MRTK2: Changing viewpoint rotation angle when pushing the thumbstick


A short one, but one that took me quite some time to find.

If you are using MRTK2 to build a Mixed Reality app to run on Immersive Headsets – or at least an environment where you control the environment using controllers – you can typically ‘teleport’ forward and backward by pushing the thumbstick forward and backward, and rotate the view by pushing it left and right. Your viewpoint rotates either 90° left or 90° right (and of course you can change the view by rotating your head, but that’s not the point). But what if you want to rotate only 45°? Or 22.5°?

I spent quite some time searching the MRTK2 config profiles for the rotation angle. The reason I could not find it is that it’s not there. At least, not directly. It’s in the pointer – the 3D cursor, if you like. Yeah, you read that right.

First, we clone

Starting from a default MRTK2 project, we take the following steps:

  • Clone the Toolkit Configuration Profile itself. I used DefaultMixedRealityToolkitConfigurationProfile to start this time.
  • Under Input, clone the Mixed Reality Input System Profile
  • Under Pointers, clone the Mixed Reality Input Pointer Profile

Finding the right pointer

If you expand the Pointers section and then the Pointer Options section, you will see a whole lot of pointers. The second one is the one we are looking for: ParabolicPointer. It’s a prefab.

If you click it, it will open the asset folder Assets/MRTK/SDK/Features/UX/Prefabs/Pointers and highlight the prefab we just selected:

Finding the angle

Press CTRL+D to duplicate the pointer, then drag it into the Prefabs folder inside your app:


If that folder is not there yet, create it first. Once the prefab is inside the folder, rename it to something recognizable – ParabolicPointer45, for instance. When you are done renaming, the pointer itself is automatically selected. Go over to the inspector, scroll all the way down to the “Parabolic Teleport Pointer” script and there, in the “Teleport Pointer Settings” section, you will finally see where that bloody 90° comes from:

Adapting and applying

So now it’s simply a matter of changing the angle to what you want – for instance 45°. Then you go back to your adapted Mixed Reality Input Pointer Profile and drag your customized pointer into the right place:

and now your view will rotate only 45° instead of 90° when you push the thumbstick sideways.


Why in the name of Turing, Hopper, Kernighan and Ritchie a setting like this ends up in a pointer prefab eludes me a little, especially since I have the feeling this setting has nothing to do with the pointer’s behavior – but with that of the camera. But then again, I am just a simple developer ;). It took me quite some searching and debug breakpoints to find out where things were happening, and I took a few blind alleys before I found the right one. Typically something I blog about, so a) I can easily find back how I did it and b) you now also know where to look.

The demo project, although it does not do very much, is here. When you run it (either from Unity or as an app) in your Immersive Headset, you will see a green ‘floor’ rotating only 45° when you push the thumbstick left or right.

Migrating to MRTK2: using a Spatial Mesh inside the Unity Editor


If you are developing an app using the Mixed Reality Toolkit 2 that requires interaction with a Spatial Mesh, the development process can become cumbersome: add code or assets in Unity and Visual Studio, create the IL2CPP solution, wait, compile and deploy, wait, check behavior – rinse and repeat. You quickly learn to do as much as possible inside the Unity editor and/or use Holographic Remoting if you want to stay productive and make your deadline. But a Spatial Mesh does not exist inside the Unity editor.

… or does it? 😉

Begun the Clone Wars have again

You guessed it – before we can see anything at all, a lot of cloning and configuring of profiles needs to happen first.

  • Clone the Toolkit Configuration Profile itself. I used DefaultHoloLens2CameraProfile this time.
  • Turn off the diagnostics (as ever).
  • Enable Spatial Awareness System
  • Clone the MixedRealityAwareness profile
  • Clone the MixedRealityAwarenessMeshObserver profile (the names of these things become more tongue-twisting the deeper you go)
  • Change the Display option (all the way down) to “Occlusion”

And now the interesting part

On top of the Spatial Awareness System Settings there’s a giant button spanning the whole width of the UI, labeled “+ Add Spatial Observer”.

If you click that one, it will add a “New data provider 1” entry at the bottom, below the Display settings we changed in the previous step.

Select “SpatialObjectMeshObserver” as its type.

And if you hit the play button, lo and behold:

Basically you are now already where you want to be – but although the wireframe material works very well inside a HoloLens, it does not work very well in the editor. At least, that is my opinion.

Making the mesh more usable inside the editor

You might have noticed the SpatialObjectMeshObserver comes with a profile “DefaultObjectMeshObserverProfile” – I’d almost say of course it does. Anyway, clone that one as well. Then we create a simple material:

Of course it uses the Mixed Reality Toolkit Standard shader. I only changed the color to RGB 115,115,115, which is a kind of battleship grey – you may take any color you fancy, as far as I am concerned. Set that material as the “Visible Material” of the Spatial Object Mesh Observer you just added (not in the material of the “Windows Mixed Reality Spatial Mesh Observer”!)

The result, if you run play mode again, is definitely better IMHO:

Using a mesh of a custom environment

So it’s nice to be able to use a sample mesh, but what if you need the mesh of a real space? No worries: just like on HoloLens 1, the device portal allows you to download a scan of the current (real) space the HoloLens sees:

You can download this space by hitting the save button, which gives you a SpatialMapping.obj file. Bring it into your Unity project, then drag it on top of the Spatial Object Mesh Observer’s “Spatial Mesh Object” property:

And then, when you hit play mode, you will see the study I have been hiding in during these worrying times. It has been my domain for working and blogging for the past 2.5 months, as well as for following BUILD and the Mixed Reality Dev Days. If you download the demo project, it also includes a cube that moves forward, to show objects actually bounce off the fake spatial mesh, just like a real one.

Note: if you compile and deploy this project to a HoloLens (either 1 or 2) you won’t see this ‘fake mesh’ at all. It only appears in the editor. Which is exactly what we want. It’s for development purposes only.


Using this little technique you can develop for interaction with the Spatial Mesh while staying inside the Unity editor. You will need less access to a physical HoloLens 2 device and, more importantly, speed up development this way. The demo project is, as always, on GitHub.

Interesting facts about Azure Digital Twin Service Preview

Currently the Azure Digital Twin Service is still in preview. There are some interesting facts and small issues you need to know about before trying it out. To understand what Azure Digital Twin offers, it is best to first read my extended introductory post about Microsoft Digital Twin. So let’s start!

Availability of Digital Twin Preview

Or rather, the lack of it. You are lucky if you have already created an Azure resource based on the Digital Twin service. At the moment Microsoft states:

Thank you for your interest in the Azure Digital Twins preview program. Due to overwhelming demand, the preview program is temporarily closed as we prepare for the upcoming release of new capabilities. As a result, you may not be able to create new Azure Digital Twins resources right now. Please continue checking back for new information.

So be careful not to throw your resource away: chances are you will not be able to create one again. I tried several locations worldwide without luck. Luckily I already had one.

Simulated device(s)

Microsoft has an interesting example which allows you to play around with the Azure Digital Twin service without having an actual device. It uses a separate project which communicates through the IoT Hub to provide the Digital Twin with simulated data.

IoT Hub… where is it?

Talking about the IoT Hub – wonder where that one is? It is there, but you cannot access it. It was automatically generated when the Azure Digital Twin was created, but it does not appear in the list of resources in your Azure environment. Even worse… you can’t access it via Azure PowerShell either.

Digital Twin viewer

There is a great Digital Twin viewer. But you will run into an issue if you have done the example before using the viewer: the redirect URL of the example is the same as that of the viewer, which causes issues when logging into Azure via the browser from the example. The best way around this is to change the port number of the redirect URL of the project, and add an additional URL to the App registration under Azure Active Directory to support this.


There are several limitations to using the Azure Digital Twin Preview. You are allowed to run only one Azure Digital Twin service at a time, and there are rate and message limitations for the Azure Digital Twin Management API, user-defined functions and device telemetry. Read about them here.


Playing around with the Azure Digital Twin Preview is just great! As long as you understand the limitations you can try several things yourself without having an actual IoT device. It gives you a good understanding of, and ideas about, what you can use it for. We need to wait for general availability before using it in businesses. There are some other interesting findings posted in the feedback hub, which you can find here.

The post Interesting facts about Azure Digital Twin Service Preview appeared first on 365 XR Blog.

Microsoft Guides, the next step in training & guidance with Mixed Reality

It is more than four years ago that Microsoft brought the first HoloLens device into the world of Mixed Reality. It took some time before Microsoft released several out-of-the-box applications for businesses, but they have ruled the Mixed Reality landscape ever since. Their focus remains on providing solutions that address the most common challenges for businesses.

Business challenges

One of these business challenges is training personnel. Training personnel requires a lot of effort from experts on the workforce, and with large staff turnover – something that happens a lot on the factory floor – training is a costly part of the business. Keeping your workforce up to date with the latest training requirements, prepping students at school for what is coming, or getting more insights by combining training with, for example, a Digital Twin will improve operations, support your workforce and save costs, allowing you to drive better business.

It is important for organizations to understand the business value of Microsoft Guides. Eventually organizations want to empower their workforce, optimize digital operations and deliver new services internally or externally to their customers. That will result in a different way of thinking about manufacturing, training, support and guidance of employees.

Microsoft Guides

Microsoft Guides is a Mixed Reality business application which allows you to create tailor-made guides for employees on the factory floor. It allows you to create holographic work instructions which support the whole, or part, of a workflow. Absolutely no coding skills are required for creating these Mixed Reality guides.

Microsoft Guides is part of the Microsoft Dynamics 365 family and is therefore actually called Microsoft Dynamics 365 Guides. It uses the base storage system of Dynamics 365, its underlying Common Data Service, and Power Apps to store its flow and content.

The solution works on Microsoft HoloLens 1 and 2, but, understandably, it has been extended based on the new functionality provided by HoloLens 2, such as natural gestures. The natural gestures are currently only used by the author when building a guide.

The Microsoft team responsible for Microsoft Guides has been extending the functionality of the application and its platform on a monthly basis. They are keen on getting feedback from customers, MVPs and consultants – and they take that feedback seriously.

How does it work?

In short, a guide is created by an author. The author starts by outlining a guide into different tasks using a desktop application. The outline is based on the requirements of the customer and the guidelines for building a guide. Then the author switches to the Microsoft HoloLens 2 device. There the guide is attached to, for example, an object in the real physical world by editing each task of the outline. This is accomplished by moving, placing, sizing and rotating the holograms in the real physical world, and by adding additional holographic instructions like arrows, hands and more.

An operator logs in to Microsoft Guides and selects a guide. The operator needs to synchronize the training using a tag, or by positioning a holographic object on the object in the real world. After this the guide is executed through the outlines and tasks. The operator sees the cards with the steps explained floating around; a tether (dotted line) indicates where the step needs to be executed on the real asset. Finally, the author is able to monitor and analyse progress and execution. This allows the author to improve the guides and support operators in improving their skills.

Different approaches

Microsoft Guides allows you to use different approaches.

  • Real-life assets – This is the best way of having hands-on training for employees. The guide is synchronized on a real-life object using a tag or holographic object.
  • Virtual assets – guides with virtual assets allow you to have education or training without the actual real-life asset. The asset is digitally placed as a holographic object in each task. This is specifically interesting in situations where you are not able to have the actual asset. It is also a more passive way of training since you are not able to execute the tasks on the real-life asset.

Before you start

One of the biggest mistakes is thinking that creating a guide is simply creating a few outlines and tasks. It is much more comprehensive than that. Before you start, you will need to gather as much content as possible to create your guide. This is accomplished by having, for example, workshops and inspiration sessions with your customer or target group. There are a few things which are important for creating your guide: you need to understand the space the guide is built for, and you need to understand the procedure and workflow of the actual work. These things will influence how you build your guide. If possible, try to involve an expert from the organization – actually, it is a must for creating a good guide. Keep that in mind!

Also try to understand the objectives and sub-objectives of your guide. These will influence the number of outlines and tasks you want to use. Why is that important? Because you do not want to lose your operator during the training by putting too many tasks within the same outline – something we will explain further in detail. The final consideration is the role you are creating the training for: the role determines what you need to highlight on the asset and the depth of information you want to provide.


Guides makes use of an anchor. The anchor is used to position the guide at the right location in the real world. This is accomplished by using a QR code or a circular code; the latter is a tag predefined by the Microsoft Guides team. It is also possible to use a holographic object as a marker, but that requires you to position the holographic object at the exact position, which is in most cases very difficult. That option is mostly interesting when it is not possible to place a tag on the actual asset.

Structure of a guide

The structure of a guide consists of one or more cards placed in an outline. Your guide can have more than one outline; the outlines and cards are executed in a logical sequence. There are rules for creating fabulous and well-working guides – information about making guides great can be found here. Without going into too much detail: it explains everything from beginning to end and has a large number of great tips and tricks. Examples are that each outline should be a specific task within the workflow, with a clear beginning and end. It also explains how to get around with holographic objects and how to be consistent in using models, styles, texts and more.

The outlines and tasks are created via the desktop application, which allows you to add content. Content can be anything: images, sounds, video and holographic models. Out of the box it offers several helpful holographic parts like arrows, generic tools, hands, numbers, symbols and animated zones.


A really cool feature, added just two months ago, is having actions in a card. At the moment there are only two actions, but I would expect more in the future – and who knows, maybe you will even be able to add your own. For now we have two actions: a website link and a Power App.

The website link allows you to add one specific link per card, which can be opened during the presentation of the card. Imagine using a link to more information about what the employee is doing – or maybe real-time information from a sensory device, shown at the real object or in the virtual model.

Power Apps is a majorly cool feature. It allows you to show a Power App during a card in the outline. Just like the website link, you can have a different Power App per card. Imagine a Power App with questions to be answered by the employee. You could, for example, use a single Power App with several questions, where each question is reached by passing a parameter in the URL to the Power App, and the results are stored in a database based on the credentials of the logged-on user. There are so many more things you could do with this. And just like Microsoft Guides, Power Apps has a creator interface for power users – no additional coding or technical skills are required.
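
The per-card URL idea can be sketched as follows. The app id and the `question` parameter name are hypothetical placeholders – use your own Power App’s web link and whatever parameter name the app reads with its `Param()` function:

```python
from urllib.parse import urlencode

def power_app_card_url(app_id: str, question_id: int) -> str:
    """Build a per-card Power App URL, passing the question as a query parameter.

    The base URL pattern and the 'question' parameter name are assumptions
    for illustration; substitute the web link of your own Power App.
    """
    base = f"https://apps.powerapps.com/play/{app_id}"
    return base + "?" + urlencode({"question": question_id})

# One link per card, all pointing at the same app:
links = [power_app_card_url("00000000-0000-0000-0000-000000000000", q) for q in (1, 2, 3)]
```

This way a single app serves every question card, and the stored answers can still be traced back to the card that asked them.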

Roles and rights

At the moment Microsoft Guides has two roles: the author and the operator.

The author is allowed to create, rename and edit guides, and can also activate or deactivate them. Just like the operator, the author can also operate guides.

The operator is allowed to view and operate assigned guides via the Microsoft HoloLens.

Improve efficiency by analytics

The operator’s performance is measured through data collection during the operation of a guide. Each gaze and commit interaction on buttons is measured, and time-related information about the run, guide, task and step is stored. This allows the author to view usage statistics and detailed time-tracking information via Power BI reports. These metrics can be used by the author to optimize the created guide, but also to see how the operator’s performance improves while doing the work. Sharing the results using Power BI reports requires an appropriate license.

There are two default reports available: the process time-tracking report and the Guides usage report. These reports can answer several questions, like:

  • Is daily guide usage changing?
  • What is the most frequently used guide?
  • What is the average run time per guide?
  • How long is a guide run in minutes?
  • How long did each task or step take?
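
To illustrate the kind of questions these reports answer, here is a small Python sketch over hypothetical run records – the guide names and times are made up, and the real data of course lives in the Common Data Service and is surfaced through the Power BI reports, not through a list like this:

```python
from collections import defaultdict

# Hypothetical run records: (guide name, run time in minutes)
runs = [
    ("Pump maintenance", 12.5),
    ("Pump maintenance", 10.0),
    ("Valve replacement", 25.0),
    ("Pump maintenance", 9.5),
]

def average_run_time(records):
    """Average run time in minutes per guide."""
    totals = defaultdict(list)
    for guide, minutes in records:
        totals[guide].append(minutes)
    return {guide: sum(ms) / len(ms) for guide, ms in totals.items()}

def most_used(records):
    """The most frequently run guide."""
    counts = defaultdict(int)
    for guide, _ in records:
        counts[guide] += 1
    return max(counts, key=counts.get)
```

The same aggregations – per-guide averages and usage counts – are what the two default reports compute for you.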

Power Apps & Power Automate

Microsoft Guides can be integrated into existing processes and workflows using Power Apps and Flow. You can start a workflow or even use the “Create work record” event. This integration lives mainly on the back end of Guides, and therefore differs from the Power App card action discussed earlier.

Dynamics 365 Field Services

There is also an integration with Dynamics 365 Field Service. It allows you to attach guides to Field Service tasks, giving you the ability to complete work orders via Microsoft Guides.


Microsoft Guides is the tool for building great guides in Mixed Reality. While it was only released last year, it offers an extensive amount of functionality which is extended every month. Dynamics 365 and Power Apps are the base for Microsoft Guides. And make no mistake: Power Apps is going to be big and very important throughout the Microsoft landscape. The same goes for Microsoft Guides as a tool to build guides and more.


An introduction to the Microsoft Azure Digital Twin service

Digital Twin? That’s a buzzword for something, isn’t it? Could be. But it is actually an interesting type of service which allows you, as an IT company, to explain a business solution to your customers without going into depth about Machine Learning, Mixed Reality, Internet of Things, sensory data and many other technologies. It is a business solution which can easily solve several use cases for your customer – and they aren’t particularly interested in which technologies you need to use, as long as it does the job.

What is a Digital Twin?

So what is a Digital Twin? A Digital Twin is a digital representation – a copy, if you like – of physical entities in the real world. A physical entity can be anything: people, places, machines, factories, devices, systems and even processes. One of the characteristics of a Digital Twin is that the replica represents a copy, if possible a true copy, of one or more actual physical entities. Such a copy is visualized in 2D, for example in a web browser, or in 3D using for example a Microsoft HoloLens or a mobile device supporting augmented reality. These physical entities generate real-time data which is fed back into the model of the Digital Twin. In some cases a Digital Twin is compared to concepts like cross-reality environments and mirror models.

The idea is to create a “living” simulation model: the model continuously updates and changes based on the actual values of the real-life entities. Such a simulation model can be used in several different use cases, for example:

  • Simulation – Simulate different scenarios which you normally would never run on the actual asset, without damaging it or causing collateral damage.
  • Analyse & Optimize – optimize systems, machines or processes by tweaking values in the simulation model. This allows you to see the effect on those entities without changing the actual configuration. When a better and more optimized configuration is found it can be applied to the actual real-life entities.
  • Training – simulation models allow you to train new personnel without using the actual machine or control room. You could also use some scenarios in education at schools.
  • Testing – use the simulation model to find the outer edges of the system before it fails.
  • Visualization – get more and clearer insights into the current status of systems and machines spread over multiple locations.
  • Complex models – Bring complex scenarios with lots of sensory devices divided over structured locations into a single model. Combine and use the data to create easy and simple to understand views.
  • Professional services – Use sensory data from devices at different locations to provide services to your company. The Microsoft example available for Digital Twin is a great one: different types of sensory data – temperature, movement and other – are used to determine whether a room is available and suited for you.

You can imagine that there are far more examples in which a Digital Twin can offer benefits.

Microsoft Azure Digital Twin Service

The Azure Digital Twin service is an IoT service which helps you create models of physical environments. It uses something called a spatial intelligence graph, which allows you to model a structure of relationships and interactions between people, places and devices. This allows you to query real-time data from devices which are bound into a structured environment, instead of from a single device without any relationships whatsoever. The service is part of Azure and supports high scalability and re-usability of so-called “spatially aware experiences” – in other words, duplicating real-time experiences around assets (e.g. machines, processes or other) into a model which knows exactly where the data from sensory devices is coming from. More information can be found here.
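
As a sketch of what querying that graph looks like: during the preview, the spaces in the graph are exposed through a Management API on the instance’s endpoint. The instance name below is hypothetical, and the exact route should be verified against the preview API reference before relying on it:

```python
from urllib.parse import urlencode

def list_spaces_url(instance: str) -> str:
    """Build the URL for listing spaces in the spatial intelligence graph.

    Assumes the preview Management API route /management/api/v1.0/spaces;
    check this against the current API reference.
    """
    base = f"https://{instance}.azuredigitaltwins.net/management/api/v1.0/spaces"
    # 'includes=types' asks the API to also return the space types
    return base + "?" + urlencode({"includes": "types"})

# An authorized GET on this URL (Authorization: Bearer <token>) returns the
# spaces with their parent ids, so you can walk tenant > building > floor > room.
url = list_spaces_url("myinstance")  # hypothetical instance name
```

The point is that a single query returns devices and sensors in their structural context, rather than as an unrelated flat list.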

Keep in mind that the Azure Digital Twin service does not deliver a complete Digital Twin solution. You still need to connect sensory devices, for example through an IoT Hub – which means you will need to set one up. You also need to think about how you want to visualize and interact with your model: where are you going to output the results, and how do you want to visualize the structured model of people, places and devices? So what does the Azure Digital Twin service offer? It offers:

  • Modelling the relationships and interactions between people, places and devices using the spatial intelligence graph
  • Use of predefined and domain specific object models
  • Secure scalability and reuse for multiple tenants
  • Custom functions which can be used for changing incoming data or executing checks against incoming data from sensory devices
  • Automation of device tasks using advanced Azure compute capabilities.

Using the Azure Digital Twin service

Azure Digital Twin is, at the moment of writing, in preview. It is even extremely difficult to get a service installed via the Azure portal, since Microsoft has limited the number of instances allowed per region – meaning you may need to wait until someone removes an Azure Digital Twin service from their tenant before you can add one yourself. Hopefully this will be resolved as quickly as possible with more availability, or when Azure Digital Twin goes into general availability.

Microsoft has an extensive amount of documentation about how to implement Azure Digital Twins. There are concepts, references and resources available in the following documentation.

There is an interesting tutorial about monitoring a building with an Azure Digital Twin – the Microsoft example I spoke about earlier. The tutorial helps you configure and deploy an already-made solution into your Azure Digital Twin service. The first two steps of the tutorial are the most important ones. They are roughly laid out below.

  • Deploy the Azure Digital Twin service by creating a new resource in your Azure portal.
  • Create an app registration to access the Azure Digital Twin REST API
  • Grant the right permissions to the app registration
  • Download the sample code. It consists of two projects: one is used to configure and provision a spatial intelligence graph, the other to simulate devices and sensory data
  • Configure and provision a spatial intelligence graph
  • Define conditions to monitor
  • Create a user-defined function
  • Simulate device and sensory data using the second project
  • Run the simulation data
  • Run the building service to see if there is a room available based on the current sensory data

The tutorial is thorough and self-explanatory, so I’m not going to explain each step – you can simply follow it.

There were some things, though, which require a little more attention. I will mention those here.

You are building an application which will access the Azure Digital Twin service through the Azure Digital Twin API. This requires an app registration in Azure Active Directory which is given read/write permissions to the Azure Digital Twin service – and that requires administrative rights in your Azure portal. To get it to work I had to specify an additional platform in the app registration under [Your app registration] > Authentication > Add platform: add the mobile and desktop applications platform, and make sure you add a redirect URI called http://localhost:8080/. At a later stage you will be changing the appSettings.json file; that configuration file contains an AARedirectURi defined with the same URI.
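
For reference, the relevant part of such an appSettings.json could look like the fragment below. Only the AARedirectURi value is taken from the description above; the other field names are illustrative placeholders and should be checked against the sample’s actual file:

```json
{
  "ClientId": "<app registration (client) id>",
  "Tenant": "<Azure AD tenant id>",
  "AARedirectURi": "http://localhost:8080/"
}
```

The essential point is that the redirect URI here and the one in the app registration must match exactly, trailing slash included.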

Depending on your organization’s settings, you may be required to have administrative consent. You will need to add delegated permissions for read/write access to Azure Digital Twin. Make sure that the Azure Digital Twin permission appears correctly in the list; if not, use “Grant admin consent for organization” to give the admin consent.

Keep in mind that the demo uses a simulation sample which simulates sensor data and sends it to the IoT Hub provisioned by the Azure Digital Twin service. No actual devices or sensors are used in the example.

Azure Digital Twin pricing

The Azure Digital Twin service has no upfront costs or termination fees; you only pay per node and per message.

A node is a single component in the spatial intelligence graph. Below is the Microsoft example, showing a Digital Twin for sensory devices in rooms in a building.

Each Tenant, building, floor, room, device and sensor in this Microsoft demo is a node.

Each API call to the Azure Digital Twin API counts as a message. Each communication sent to a device or sensor is counted as a message, and messages sent from the Azure Digital Twin to other systems count as well. You can get discounts when using the service extensively. More information about pricing can be found here.
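
The metering described above comes down to simple arithmetic. The workload numbers and the node breakdown below are hypothetical, and the actual per-node and per-message rates must come from the pricing page:

```python
def estimate_monthly_messages(api_calls_per_day: int,
                              device_messages_per_day: int,
                              egress_messages_per_day: int,
                              days: int = 30) -> int:
    """Sum the metered messages: API calls, device traffic and messages
    sent on to other systems all count toward the total."""
    per_day = api_calls_per_day + device_messages_per_day + egress_messages_per_day
    return per_day * days

# Hypothetical workload for a building demo like the one above
messages = estimate_monthly_messages(1_000, 5_000, 200)

# Node count for a graph like the demo's: tenant, building, 2 floors,
# 10 rooms, 40 devices, 120 sensors - every one of them is a billable node
nodes = 1 + 1 + 2 + 10 + 40 + 120
```

Multiplying these two quantities by the current rates gives a rough monthly estimate before any volume discounts.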


Azure Digital Twin is a useful tool to build Digital Twin solutions for customers. It is largely self-explanatory when walking through the different use cases, without going too much into technical detail. The Azure Digital Twin service delivers an important part: storing the spatial intelligence graph, which is a replication of your real-world environment. The tutorial is a great example of how to use the Azure Digital Twin service in combination with simulated IoT data.


Speaking at Microsoft 365 Virtual Marathon about Remote Assist and Guides

I’m honored to have two talks at one of the largest Microsoft 365 online events in the world, the Microsoft 365 Virtual Marathon – a 36-hour event happening from May 27th till May 28th 2020. The event is a joint effort between the SharePoint Conference and members of the Office 365 community. And that’s not even all: there are keynotes from key Microsoft employees and from thought leaders and members of the community like Jeff Teper, Bill Baer and Naomi Moneypenny. A truly great online event for everybody, with an incredible amount of content – over 300 speakers will deliver more than 400 sessions during this event. And it is all for free!

My sessions focus on solving business problems around training, guidance and remote assistance. Microsoft has two major Mixed Reality applications which are part of the Microsoft Dynamics 365 stack, and in a time like this these applications are becoming more and more important for organizations.

Getting your employees ready for business using Microsoft Guides

Microsoft offers a broad range of out-of-the-box solutions for the Mixed Reality market. One of them enables creating customized training modules for new workers on the factory floor. Using Dynamics 365 and Microsoft HoloLens we are able to create a specific training which allows new workers to learn their daily job more quickly. The session contains an explanation of the different functionalities of Microsoft Guides and a live demonstration of the application.

Empower your workers using Microsoft Remote Assist

Microsoft offers a broad range of solutions modernizing field service with Mixed Reality for technicians. It empowers them with modern tools like Mixed Reality devices, video calls, annotations and file-sharing capabilities. These tools allow field service workers to solve complex problems even faster and collaborate with experts, and give them easy access to work orders. During this session we will show you a global overview of Dynamics 365 Remote Assist using Dynamics, Teams and HoloLens.

Join my online sessions

Hopefully you can join one or both of my sessions and learn more about the world of Mixed Reality, Microsoft HoloLens 2 and Microsoft’s great applications supporting many industries and sectors. Sessions will be recorded and available at a later time.


Migrating to MRTK2: right/left swappable hand menus for HoloLens 2


As I announced in this tweet from February, the HoloLens 2 version of my app Walk the World sports two hand palm menus – a primary, often-used command menu that is attached to your left hand and that you operate with your right hand, and a secondary, less-used settings menu that is attached to your right hand and that you operate with your left. Lorenzo Barbieri of Microsoft Italy, a.k.a. ‘Evil Scientist’ 😉, made the brilliant suggestion that I should accommodate left-handed users as well. And so I did – I added a button to the settings menu that actually swaps the ‘handedness’ of the menus. This means: if you select ‘left handed operation’, the main menu is operated by your left hand, and the secondary settings menu by your right.

A little video makes this perhaps more clear:

This blog explains how I made this work. I basically extracted the minimal code from my app and made it into a mini app that does no more than make the menus swappable – both by pressing a toggle button and by speech command. I will discuss the main points, but not everything in detail – as always, you can download a full sample project and see how it works in the context of a complete running app.

This sample uses a few classes of my MRTKExtensions library of useful scripts.

Configuring the Toolkit

I won’t cover this in much detail, but the following items need to be cloned and partially adapted:

  • The Toolkit Configuration Profile itself (I usually start with DefaultMixedRealityToolkitConfigurationProfile). Turn off the diagnostics (as ever)
  • The Input System Profile
  • The SpeechCommandsProfile
  • The RegisteredServiceProviderProfile

Regarding the SpeechCommandsProfile: add two speech commands:

  • Set left hand control
  • Set right hand control

In the RegisteredServiceProviderProfile, register the Messenger Service that is in MRTKExtensions.Messaging. If you have been following this blog, you will be familiar with this beast: I introduced it as a Singleton behaviour back in 2017 and converted it to a service when the MRTK2 arrived.

Menu structure

I already explained how to make a hand menu last November,  and in my previous blog post I explained how you should arrange objects that should be laid out in a grid (like buttons). The important things to remember are:

  • All objects that are part of a hand menu should be in a child object of the main menu object. In the sample project, this child object is called “Visuals” inside each menu.
  • All objects that should be easily arrangeable in a grid should be in a separate child object within the UI itself. I always call this child object “UI”, and this is where you put the GridObjectCollection behaviour on.

Consistent naming makes a structure all the more recognizable I feel.

Menu configuration

The main palm menu has, of course, a Solver Handler and a Hand Constraint Palm Up behaviour. The tracked hand is set to the left.

The tricky thing is always to remember that the main menu is going to be operated by the dominant hand. For most people the dominant hand is the right one – so the hand to be tracked for the dominant menu is the left hand, because that leaves the right hand free to actually operate controls on that menu. For left hand control, the main menu has to be set to track the right hand. This keeps confusing me every time.

It won’t surprise you to see the Settings menu looks like this:

With the Solver’s TrackedHandness set to Right. But here you also see the star of this little show: the DominantHandController, with its Dominant Hand Controlled checkbox set to off – since I always have the settings menu operated by the non-dominant hand, whatever that might be.


This is actually a very simple script that responds to messages sent from either a button or from speech commands:

namespace MRTKExtensions.HandControl
{
    public class DominantHandHelper : MonoBehaviour
    {
        [SerializeField]
        private bool _dominantHandControlled;

        private IMessengerService _messenger;
        private SolverHandler _solver;

        private void Start()
        {
            _messenger = MixedRealityToolkit.Instance.GetService<IMessengerService>();
            _solver = GetComponent<SolverHandler>();
        }

        private void ProcessHandControlMessage(HandControlMessage msg)
        {
            var isChanged = SetSolverHandedness(msg.IsLeftHanded);
            if (msg.IsFromSpeechCommand && isChanged && _dominantHandControlled)
            {
                _messenger.Broadcast(new ConfirmSoundMessage());
            }
        }

        private bool SetSolverHandedness(bool isLeftHanded)
        {
            var desiredHandedness = isLeftHanded ^ _dominantHandControlled ?
                Handedness.Left : Handedness.Right;
            var isChanged = desiredHandedness != _solver.TrackedHandness;
            _solver.TrackedHandness = desiredHandedness;
            return isChanged;
        }
    }
}

SetSolverHandedness determines what the handedness of the solver should be set to – depending on whether this menu is set to be controlled by the dominant hand or not, and on whether left handed control is wanted. That’s an XOR, yes – you don’t see that very often. But write out a truth table for those two parameters and that’s where you will end up. This little bit of code is what actually does the swapping of the menus from right to left and vice versa.
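For reference, that truth table can be written out in plain C# (a standalone sketch, no Unity required – the helper name DesiredHand is mine, not part of the actual script):

```csharp
// Hypothetical standalone distillation of the core logic of SetSolverHandedness:
static string DesiredHand(bool isLeftHanded, bool dominantHandControlled) =>
    isLeftHanded ^ dominantHandControlled ? "Left" : "Right";

// DesiredHand(false, true)  -> "Left"   (main menu, right-handed user)
// DesiredHand(false, false) -> "Right"  (settings menu, right-handed user)
// DesiredHand(true,  true)  -> "Right"  (main menu, left-handed user)
// DesiredHand(true,  false) -> "Left"   (settings menu, left-handed user)
```

The XOR flips the tracked hand exactly when the requested handedness differs from the menu’s role – the whole swap in a single expression.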

It also returns a value indicating whether the value has actually changed. This is because if the command was started from a speech command, we want, like any good Mixed Reality developer, to give some kind of audible cue that the command has been understood and processed. After all, we can say a speech command any time we want, and if the user does not have a palm up, he or she won’t see the hand menu flipping from one hand to the other. So only if the command comes from a speech command and an actual change has occurred do we need to give some kind of audible confirmation. I also have this confirmation given only by the dominant hand controller – otherwise we would get a double confirmation sound. After all, there are two of these behaviours active – one for each menu.

Supporting act: SettingsMenuController

Of course, something still needs to respond to the toggle button being pressed. This is done by this little behaviour:

    public class SettingsMenuController : MonoBehaviour
    {
        private IMessengerService _messenger;

        [SerializeField]
        private Interactable _leftHandedButton;

        public void Start()
        {
            _messenger = MixedRealityToolkit.Instance.GetService<IMessengerService>();
        }

        private void ProcessHandControlMessage(HandControlMessage msg)
        {
            if (msg.IsFromSpeechCommand)
            {
                _leftHandedButton.IsToggled = msg.IsLeftHanded;
            }
        }

        public void SetMainDominantHandControl()
        {
            // fire and forget - see the delay explanation below
            _ = SetMainDominantHandDelayed();
        }

        private async Task SetMainDominantHandDelayed()
        {
            await Task.Delay(100);
            _messenger.Broadcast(new HandControlMessage(_leftHandedButton.IsToggled));
        }
    }

The SetMainDominantHandControl is called from the OnClick event in the Interactable behaviour on the toggle button:

and then simply fires off the message based upon the toggle status of the button. Note that there’s a slight delay; this has two reasons:

  1. Make sure the sound the button plays actually has time to play
  2. Make sure the button’s IsToggled is actually set to the right value before we fire off the message.

Yeah, I know, it’s dicey, but that’s how it apparently needs to work. Also note that this little script not only fires off HandControlMessage but also listens to it. After all, if someone changes the handedness by speech command, we want the button’s toggle status to reflect the actual status change.

Some bits and pieces

The final piece of code – which I only mention for the sake of completeness – is SpeechCommandProcessor:

namespace HandmenuHandedness
{
    public class SpeechCommandProcessor : MonoBehaviour
    {
        private IMessengerService _messenger;

        private void Start()
        {
            _messenger = MixedRealityToolkit.Instance.GetService<IMessengerService>();
        }

        public void SetLeftHanded(bool isLeftHanded)
        {
            _messenger.Broadcast(new HandControlMessage(isLeftHanded) { IsFromSpeechCommand = true });
        }
    }
}

It sits together with a SpeechInputHandler in Managers:

Just don’t forget to turn off the “Is Focus Required” checkbox, as these are global speech commands. I always forget this, and that makes for an amusing few minutes of shouting at your HoloLens without it having any effect, before the penny drops.


You might have noticed I don’t let the menus appear on your hand anymore, but next to your hand. This comes from the design guidelines on hand menus in the official MRTK2 documentation, and although I can have pretty strong opinions about things, I do tend to take some advice occasionally 😉 – especially when it’s about usability and ergonomics. Which is exactly why I made this left-to-right swappability in the first place. I hope this little blog post gives people some tools to add a little bit of inclusivity to HoloLens 2 applications.

Full project, as mostly always, here on GitHub.

Migrating to MRTK2 – easily spacing out menu buttons using GridObjectCollection


This is a simple, short one, but I have to blog it because I discovered this, forgot about it, then discovered it again. So if anything, this blog is as much for informing you as for making sure I keep remembering this myself.

If you have done any UI design for Mixed Reality or HoloLens, you have been in this situation. The initial customer requirement asks for a simple 4 button menu. So you make a neat menu in a 2×2 grid, and are very satisfied with yourself. The next day you suddenly find out you need two more buttons. So – do you make a 2×3 or a 3×2 menu? You decide on the latter, and painstakingly arrange the buttons in a nice grid again.

The day after that, there’s 2 more buttons. The day after that, 3 more. And the next day… you discover GridObjectCollection. Or in my case, rediscover it.

Simple automatic spacing

So here is our simple 2×2 menu in the hierarchy. This is a hand menu. It has a more complex structure than you might imagine, but that is because I am lazy and want an easily adaptable menu that can be organized by GridObjectCollection.

The point is, everything that needs to be easily organizable by GridObjectCollection needs to be a child of the object that has the actual GridObjectCollection behaviour attached. In my case that’s the empty game object “Controls”. Now suppose I want this menu to be not 2×2 but 1×4. I simply need to change “Num Rows” to 1, press the “Update Collection” button, and presto:

Of course, you will need to update the background plate and move the header text, but that’s a lot less work than changing the layout of these buttons. Another example: change the default setting for “Layout” from “Row Then Column” to “Column Then Row”, set “Num Rows” to 1 again (for it will flip to 4 when you change the Layout dropdown) and press “Update Collection” again:

You can also change the button spacing by changing Cell Height and Cell Width. For instance, if I have a 4×4 grid and a cell width of 0.032, the buttons are perfectly aligned together without any space in between (not recommended for real-life scenarios where you are supposed to press these buttons – a mistake is easily made this way).

You can also do fun things like having them sorted by name or child order, both optionally reversed. Or have them spaced out not only on a flat surface, but on a Cylinder, Sphere or a Radial area.

Note: UpdateCollection can also be called from code, so you can actually use this script at runtime as well. I mainly use it for static layouts.
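Calling it from code could look something like this – a sketch, assuming the GridObjectCollection behaviour sits on a “Controls” object; controlsParent and newButton are hypothetical names for this example:

```csharp
// Re-layout the grid at runtime after adding a new button.
var grid = controlsParent.GetComponent<GridObjectCollection>();
newButton.transform.SetParent(controlsParent.transform, false);
grid.UpdateCollection(); // same effect as pressing "Update Collection" in the editor
```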


Don’t waste time on manual spacing – use this very handy tool in the editor to make a nice and evenly spaced button menu, or whatever else you need to have laid out.


  • Make it easy on yourself by putting any parts of a UI that should be in a grid in a separate empty game object, putting the GridObjectCollection behaviour on that, and placing the other parts outside it, so they won’t interfere with the layout process.
  • You can use this behaviour with any type of game object, not only buttons, of course.
  • More details about GridObjectCollection can be found on the documentation page on GitHub. This also covers related behaviours like ScatterObjectCollection and TileObjectCollection.

No code, so no project – although in the next blog post this technique will be applied ‘in real life’, so to speak.

Migrating to MRTK2 – configuring, understanding and using Windows Mixed Reality controllers


Although the focus of the Mixed Reality Toolkit 2 now understandably is on Microsoft’s big Mixed Reality business player – HoloLens 2 – it’s still perfectly doable – and viable, IMHO – to develop Windows Mixed Reality apps for WMR immersive headsets. Case in point: most of the downloads I get for my three Mixed Reality apps in the store come from people using immersive headsets. This is actually not that strange, as immersive headsets are readily available for individuals whereas a HoloLens (either 1 or 2) is not – and they cost 10-15% of an actual HoloLens to boot.

And the fun thing is, if you do this correctly, you can even make apps that run on both – with only minor device-specific code. Using MRTK2, though, there are some minor problems to overcome:

  1. The standard MRTK2 configuration allows for only limited use of all the controller’s options
  2. There are no samples – or at least none I could find – that easily show how to actually extend the configuration to leverage the controller’s full potential
  3. Ditto for samples on how to intercept the events and use those from code.

I intend to fix all of the above in this article. Once and for all 😉


If you have worked a bit with the MRTK2 before, you know what’s going to follow: cloning profiles, cloning profiles and more cloning profiles. We are going some four levels deep. Don’t shoot the messenger 😉

Assuming you start with a blank Unity app with the MRTK2 imported, the first step is of course to clone the Default profile – or whatever profile you wish to start with – by clicking Copy & customize.

While you are at it, turn off the diagnostics

Next step is to clone the Input System Profile. You might need to drag the inspector a bit wider or you won’t see the Clone button

Step 3 is to clone the Controller Mapping Profile:

Expand the “Controller Definitions” section. If you then select Windows Mixed Reality Left Hand Controller, you will notice a lot of events are filled in for the various controls – but also that a couple are not:

You can select something, but it’s either not applicable or already assigned to something else. The missing events are:

  • Touchpad Press
  • Touchpad Position
  • Touchpad Touch
  • Trigger Touch
  • Thumbstick Press

So we have to add these events. To achieve this, we have to do one final clone: the Default Input Actions Profile.

And then you can simply add the five missing events (or input actions, as they are called in MRTK2 lingo).

Be sure to select “Digital” for all new actions except Touchpad position – make that a “Dual Axis”. That last one will be explained later.

Now you can once again go back to the Input/Controller/Input Mappings settings and assign the proper (new) events to the controller buttons. Don’t forget to do this for both the right and the left hand controller.

And now, finally, there are events attached to all the buttons of the controllers. Now it’s time to show how to trap them.

Understanding and using the events

The important thing to understand is that there are different kinds of events, which all need to be trapped in a specific way. When I showed you how to add the event types, all but one of them were Digital types – only one was “Dual Axis”. There actually are a lot of different types of events:

I am not sure if I got all the details right, but this is what I found out:

  • A Digital event is basically a click. You need to have a behaviour that implements IMixedRealityInputHandler to intercept it. Example: a click on the menu button.
  • A Single Axis event gives you a single value. The only application for WMR controllers I have found is determining how far the trigger is pushed inwards (on a scale of 0-1). You will need to implement IMixedRealityInputHandler<float>.
  • A Dual Axis event gives you two values. The only application I found was the touchpad – it gives you the X,Y coordinates where the touchpad was touched. The range for both is -1 to 1; 0,0 is the touchpad’s center. You will need to implement IMixedRealityInputHandler<Vector2>.
  • A Six DOF (degrees of freedom) event gives you a MixedRealityPose. This enables you to determine the current grip and pointer pose of the controller. You will need to implement IMixedRealityInputHandler<MixedRealityPose>.
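To make this concrete, a minimal behaviour that only listens to the Single Axis “Trigger” action could look like this. This is a hypothetical sketch – the class name and logging are mine – but the interfaces and registration calls are standard MRTK2:

```csharp
public class TriggerReader : BaseInputHandler, IMixedRealityInputHandler<float>
{
    protected override void RegisterHandlers()
    {
        CoreServices.InputSystem?.RegisterHandler<IMixedRealityInputHandler<float>>(this);
    }

    protected override void UnregisterHandlers()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealityInputHandler<float>>(this);
    }

    public void OnInputChanged(InputEventData<float> eventData)
    {
        // Filter by action name (see "Discriminating between events of same type")
        if (eventData.MixedRealityInputAction.Description.ToLower() == "trigger")
        {
            // 0 = released, 1 = fully pressed
            Debug.Log($"Trigger depressed: {eventData.InputData:F2}");
        }
    }
}
```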

Demo application

I created an application that demonstrates the events you will get and what type they are. If available, it will also display the values associated with the event. It looks like this:

Not very spectacular, I’ll admit, but it does the trick. The top row displays the types of event intercepted; the bottom two rows show actual events with – in four cases – information associated with the events. When an event is activated, its red circle turns green.

Observations using the demo app

  • You will notice you’ll get a constant stream of Grip Pose and Pointer Pose events – hence these two events and the MixedRealityPose type events indicator are always green
  • You will also get a constant stream of “Teleport Direction” events (of type Vector2) from the thumbstick even if you don’t touch it. I have no idea why this is so. I had to filter those out, or else the fact that Touchpad position is a Vector2 element would get hidden in the noise.
  • Grip press is supposed to be a SingleAxis event, but only fires Digital events
  • If you touch the touchpad, it actually fires two events simultaneously – the Digital Touchpad Touch and the Vector2 Touchpad position.
  • Consequently, if you press the touchpad, you get three events – Touchpad touch, Touchpad Position and Touchpad Press.
  • The trigger button is also an interesting story, as it fires three events as well. As soon as you start to press it ever so slightly, it fires the Single Axis event “Trigger” that tells you how far the trigger is depressed. But at the lowest scale where “Trigger” registers, it also fires the Digital “Trigger Touch” event. However, you will usually get a lot more “Trigger” events, as it’s very hard to keep the trigger perfectly still while it’s halfway depressed.
  • And finally, when you fully press it, the Digital “Select” event will be fired.
  • Menu and Thumbstick press are simple Digital events as you would expect.

Key things to learn from the demo app

Registering global handlers

At the top you will see the ControllerInputHandler being derived from BaseInputHandler and implementing the four interfaces mentioned:

public class ControllerInputHandler : BaseInputHandler, 
    IMixedRealityInputHandler, IMixedRealityInputHandler<Vector2>, 
    IMixedRealityInputHandler<float>, IMixedRealityInputHandler<MixedRealityPose>

The important thing to realize is that this behaviour needs to handle global events. This implies two things. First, you will have to register global handlers:

protected override void RegisterHandlers()
{
    CoreServices.InputSystem?.RegisterHandler<IMixedRealityInputHandler>(this);
    CoreServices.InputSystem?.RegisterHandler<IMixedRealityInputHandler<float>>(this);
    CoreServices.InputSystem?.RegisterHandler<IMixedRealityInputHandler<Vector2>>(this);
    CoreServices.InputSystem?.RegisterHandler<IMixedRealityInputHandler<MixedRealityPose>>(this);
}

(and of course unregister them in UnregisterHandlers)

but second, if you use this in Unity, uncheck the “Is Focus Required” checkbox

This ensures the global handlers are registered properly and events are actually intercepted by this behaviour.

Discriminating between events of same type

It might not be immediately clear, but the only way I have been able to determine which exact event I get is to check its MixedRealityInputAction.Description property. In the code you will see things like:

var eventName = eventData.MixedRealityInputAction.Description.ToLower();
if (eventName == "touchpad position")

In fact, you will see that the names of the event displayers in the Scene hierarchy are basically the names of the events without spaces. I simply find them by name.

I simply load them into a dictionary in Start by looking for children in the “Events” object:

foreach (var controller in _eventDisplayParent.GetComponentsInChildren<SingleShotController>())
{
    _eventDisplayers.Add(controller.gameObject.name.ToLower(), controller);
}

Then I simply find them back by looking in that dictionary and activating the SingleShotController. This class is part of a prefab that I used and explained in an earlier post.

private void ShowEvent(string eventName)
{
    var controller = GetControllerForEvent(eventName);
    if (controller != null)
    {
        // activate the SingleShotController found for this event
    }
}

private SingleShotController GetControllerForEvent(string controllerEvent)
{
    return _eventDisplayers[controllerEvent.ToLower().Replace(" ", "")];
}

I must say I feel a bit awkward about having to use strings to determine events by name. I guess it’s inevitable if you want to be able to support multiple platforms and be able to add and modify events without actually having to change code and introduce types. This flexibility is what the MRTK2 intends to support, but it still feels weird.

Combining events

In the immersive headset version of Walk the World you can zoom in or out by pressing at the top or the bottom of the touchpad. But as we have seen, it’s not even possible to detect where the user has pressed, only that he has pressed. We can, however, detect where he last touched – which most likely is at or very near where he then pressed. How you can combine the touch and press events to achieve an effect like I just described is shown in the relevant pieces of the demo project code copied below:

Vector2 _lastpressPosition;

public void OnInputChanged(InputEventData<Vector2> eventData)
{
    var eventName = eventData.MixedRealityInputAction.Description.ToLower();
    if (eventName == "touchpad position")
    {
        _lastpressPosition = eventData.InputData;
    }
}

public void OnInputDown(InputEventData eventData)
{
    var eventName = eventData.MixedRealityInputAction.Description.ToLower();
    if (eventName == "touchpad press")
    {
        // Limit event capture to only when more or less the top or bottom 
        // of the touch pad is pressed
        if (_lastpressPosition.y < -0.7 || _lastpressPosition.y > 0.7)
        {
            // handle the zoom in/out action here
        }
    }
}

First, the touchpad position event stores the last position in a member variable; then, when the touchpad is pressed, we check where it was last touched. The event is only acted upon when the front 30% or the back 30% was last touched before it was pressed. If you press the sides (or actually, touch a side before you press), nothing happens.


Interacting with the controller has changed quite a bit since ye olde days of the HoloToolkit, but it’s still pretty much doable and usable if you follow the rules and patterns above. I still find it odd that I have to determine which event is fired by checking its description, but I may just be missing something. However, this method works for me, at least in my application.

Also, I am a bit puzzled by the almost-the-same-but-not-quite-so events around trigger and touchpad. No doubt some serious considerations went into implementing it like this, but not having been around while that happened, it leaves me confused about the why.

Finally, bear in mind you usually don’t have to trap Select manually, and neither is the thumbstick (‘Teleport Direction’) usually very interesting, as those events are handled by the environment by default – the only reason I showed them here was to demonstrate that you can actually intercept them.

Demo project, as always, here on GitHub.

Migrating to MRTK2 – multi-device behaviour switching and scaling


The MRTK2 allows for development for HoloLens 1, HoloLens 2 and Windows Mixed Reality immersive headsets with nearly identical code – and a growing number of other platforms, although the focus is now understandably on HoloLens 2. Yet, if you want to make apps with a broad reach, you might as well use the capabilities the toolkit offers to run one app on all platforms.

Rule-of-thumb device observations

  • On HoloLens 2, you typically want interactive stuff to be close by and relatively small, so you can leverage the touch functionality
  • On HoloLens 1 interactive stuff needs to be further away since the only control option you basically have is the air tap. But because it’s further away, it needs to be bigger
  • On Windows Mixed Reality immersive headsets you also want it further away, but even bigger still: I have observed that things seem to appear smaller on an immersive headset compared to a HoloLens, and lower resolution headsets make things like small print harder to see.

Basically this boils down to scaling and distance. Scaling is usually pretty simple to fix, but distance behaviour is a bit more difficult, especially since the MRTK2 contains so many awesome behaviours for keeping, for instance, a menu in view – but it does not support different behaviour for different devices.

I have come up with a rather unusual solution for this, and it works pretty well.

Meet the twins

I made two behaviours that work in tandem. The first one is simple enough and is called EnvironmentScaler.

This simply scales the current game object to the value entered for the specific device type. Notice there is also a dropdown that enables you to preview how platform-specific sizes will appear inside the Unity editor.

The second one is a bit more odd. You see, for determining the right distance I would like to use the standard Solver and RadialView combo. Of course, I could have written a behaviour that changes the RadialView values based upon the detected platform, but then it would only have worked for RadialView. So I took a more radical and generic approach.

As you can see, there is one Solver but no less than three RadialViews on the menu. They all have slightly different values for things like distance and Max View Degrees. And if you start Play mode:

It simply destroys and removes the behaviours for the other platforms. Crude, but very effective. And no coding required. The only thing is – there is no way to distinguish those three RadialViews, so it’s best to add them to your game object in the same order as they are listed in the EnvironmentSwitcher: for HoloLens 1, HoloLens 2 and WMR headsets.

The nuts and bolts

Both the switcher and the scaler have the same generic base class:

public abstract class EnvironmentHelperBase<T> : MonoBehaviour
{
    [SerializeField]
    private EditorEnvironmentType _editorEnvironmentType = EditorEnvironmentType.Hololens2;

    protected T GetPlatformValue(T hl1Value, T hl2Value, T wmrHeadsetValue)
    {
#if !UNITY_EDITOR
        if (CoreServices.CameraSystem.IsOpaque)
        {
            return wmrHeadsetValue;
        }

        var capabilityChecker = CoreServices.InputSystem as IMixedRealityCapabilityCheck;

        return capabilityChecker.CheckCapability(MixedRealityCapability.ArticulatedHand) ?
            hl2Value : hl1Value;
#else
        return GetTestPlatformValue(hl1Value, hl2Value, wmrHeadsetValue);
#endif
    }

    private T GetTestPlatformValue(T hl1Value, T hl2Value, T wmrHeadsetValue)
    {
        switch (_editorEnvironmentType)
        {
            case EditorEnvironmentType.Hololens2:
                return hl2Value;
            case EditorEnvironmentType.Hololens1:
                return hl1Value;
            default:
                return wmrHeadsetValue;
        }
    }
}

The GetPlatformValue method accepts three values – one for every platform supported – and returns the right one for the current platform based upon these simple rules:

  • If the headset is opaque, it’s a WMR headset
  • If it’s not opaque and it supports articulated hands, it’s a HoloLens 2
  • Otherwise it’s a HoloLens 1

And there’s also GetTestPlatformValue, which returns a platform-specific value based upon what’s selected in the _editorEnvironmentType field – this can be used for testing in the editor. I have noticed that the editor returns false for opaque and true for articulated hand support, so by default the code acts like it’s running on a HoloLens 2. Hence my ‘manual switch’ in _editorEnvironmentType, so you can see what happens for the various devices inside your editor. For runtime code, whatever you selected in _editorEnvironmentType in either behaviour is of no consequence.

EnvironmentScaler implementation

This is the very simple, as all the heavy lifting has already been done in the base class:

public class EnvironmentScaler : EnvironmentHelperBase<float>
{
    [SerializeField]
    private float _hl1Scale = 1.0f;

    [SerializeField]
    private float _hl2Scale = 0.7f;

    [SerializeField]
    private float _immersiveWmrScale = 1.8f;

    void Start()
    {
        gameObject.transform.localScale *= GetPlatformValue(_hl1Scale, _hl2Scale,
            _immersiveWmrScale);
    }
}

Simply scale the object by the value selected by GetPlatformValue. Easy as cake.

EnvironmentSwitcher implementation

public class EnvironmentSwitcher : EnvironmentHelperBase<MonoBehaviour>
{
    [SerializeField]
    private MonoBehaviour _hl1Behaviour;

    [SerializeField]
    private MonoBehaviour _hl2Behaviour;

    [SerializeField]
    private MonoBehaviour _immersiveWmrBehaviour;

    void Start()
    {
        var selectedBehaviour = GetPlatformValue(_hl1Behaviour, _hl2Behaviour,
            _immersiveWmrBehaviour);
        foreach (var behaviour in new[] {_hl1Behaviour, _hl2Behaviour,
            _immersiveWmrBehaviour})
        {
            if (behaviour != selectedBehaviour)
            {
                Destroy(behaviour);
            }
        }
    }
}

Very much like the previous one, but now the values are not floats (for scale) but actual behaviours. It finds the behaviour for the current device, then destroys all the others.

The fun thing is – in this case I used it specifically for three identical behaviours (that is, they are all RadialView behaviours), one for every device. But it’s just as easy to use three completely different behaviours, one for each device, and have the ‘wrong’ ones rendered inoperative by this behaviour as well. This makes the approach very generically applicable.


A multi-device strategy does not have to be complex. With these two behaviours you can make your app appear more or less the same on different devices, and still adhere to each device’s unique capabilities.

Complete project, as always, here.