Check out the most recent update to Microsoft Dynamics 365 Guides! With the July release of Guides, authors can now anchor guides faster and more accurately through markerless technology, enabled by integration with Azure Object Anchors. Based on customer feedback, we have introduced an easier and more seamless way for you to anchor holograms to objects in the physical world using object anchoring.
Using Object Anchors, operators can easily move from one workstream to the next as the HoloLens' spatial awareness detects anchors and seamlessly launches overlaid digital content. Learn more about how this new feature improves on-the-job guidance with Dynamics 365 Guides.
Why use Object Anchors instead of QR codes?
Object Anchors enable users to automatically align digital content with physical objects, eliminating the need for QR-code markers while improving alignment accuracy.
- QR codes require physical artifacts of specific sizes to be printed out.
- Object Anchors are markerless.
- Sticking QR codes on objects might cause damage.
- Object Anchors are fully digital and have no risk of getting scratched or damaging physical products.
- Finding and scanning QR codes adds another step to the operator workflow.
- Once the author has converted the object to the right format, operators can simply walk up to and look at the object for it to be detected. This reduces the time it takes to locate a physical marker and launch into a workflow.
- Using a single QR code can affect accuracy, as alignment is influenced by marker size, camera scanning angle, and distance from the marker. Manual processes that require QR code alignment can also reduce alignment accuracy.
- Object Anchors minimize manual processes and increase the accuracy of object detection.
How does Azure Object Anchors work?
Object Anchors is an Azure service that enables the detection of specific objects in the real-world environment. It uses sensing and processing on a HoloLens to detect and align a digital model to a physical object. To detect the object, the service requires a 3D model of the real-world object that has been converted through the Azure Object Anchors (AOA) conversion service.
Once the 3D model has been converted, Azure Object Anchors primarily uses the depth sensor on the device to match the geometry of the real-world object against the converted model. When it finds the object, it overlays the object with a mesh to indicate detection and displays the pre-authored guide content.
Considerations while using Object Anchors
When considering potential use cases, keep in mind the following:
- Object Anchors currently works best for stationary objects.
- Objects should measure 1-10 meters in each dimension for optimal alignment accuracy.
- An accurate 3D model of the object is required to convert to an Object Anchor. The currently supported file types are .obj, .fbx, .glb, .gltf, and .ply.
- Highly reflective or dark objects are difficult for the HoloLens to detect, which may impact alignment and detection.
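The considerations above can be checked programmatically before a model is submitted for conversion. Here is a minimal illustrative sketch: the file-type list and size range come from the considerations above, but the helper itself is hypothetical and not part of Guides or the AOA SDK.

```python
import os

# Supported source-model formats for AOA conversion (from the list above).
SUPPORTED_EXTENSIONS = {".obj", ".fbx", ".glb", ".gltf", ".ply"}

# Recommended size range for each dimension, in meters.
MIN_DIM_M, MAX_DIM_M = 1.0, 10.0

def check_model(path, dimensions_m):
    """Return a list of warnings for a candidate 3D model.

    path         -- file name of the model
    dimensions_m -- (width, height, depth) of the physical object in meters
    """
    warnings = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        warnings.append(f"unsupported file type: {ext}")
    for axis, size in zip("xyz", dimensions_m):
        if not (MIN_DIM_M <= size <= MAX_DIM_M):
            warnings.append(f"{axis} dimension {size} m is outside the 1-10 m range")
    return warnings
```

For example, `check_model("engine.glb", (2.5, 1.8, 1.2))` returns no warnings, while an unsupported format such as `.stl` or a 0.3 m part would be flagged.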
Step 1 - Convert your 3D model to a detectable format:
- Use a 3D model of the object with which you want to align.
- Run this 3D model through the AOA conversion service's cloud-based training and conversion pipeline (in Guides, authors will be able to perform this conversion and assign the detectable object to a guide as part of their workflow).
- Receive an object model output to use on your HoloLens 2 device.
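As a rough sketch of Step 1, the conversion request can be thought of as the source model plus two pieces of metadata about how the model relates to the physical world: its unit scale and its gravity direction. The function and class names below are hypothetical stand-ins for illustration only, not the actual AOA SDK API.

```python
from dataclasses import dataclass

@dataclass
class ConversionConfig:
    # Scale from model units to meters (e.g. 0.001 for a model authored in mm).
    scale_to_meters: float
    # Gravity direction in the model's coordinate system (unit vector).
    gravity: tuple

def prepare_conversion(model_path: str, config: ConversionConfig) -> dict:
    """Hypothetical stand-in for submitting a model to the AOA
    conversion service; here it only packages the request payload."""
    return {
        "source_model": model_path,
        "scale_to_meters": config.scale_to_meters,
        "gravity": config.gravity,
    }
```

A model authored in millimeters with Y-up coordinates might be prepared as `prepare_conversion("engine.fbx", ConversionConfig(scale_to_meters=0.001, gravity=(0.0, -1.0, 0.0)))`; the real service then trains and returns the detectable object model used on the device.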
Step 2 - Assign an anchor: Assign the object model generated in Step 1 to a guide as its Object Anchor. Use the PowerPoint-style authoring experience to simply drag and assign the object as an anchor.
Step 3 - Author/operate:
- Guides uses the Azure Object Anchors detection SDK to scan for and detect the Object Anchor based on a real-world scan.
- Step-by-step holographic instructions are then accurately overlaid on the physical object, based on the guides that you set up.
Try it out! Download the latest release of Guides here.
Instructions on how to upgrade your Guides solution are here.
Don’t forget to tell us what you think and let us know about the different Mixed Reality solutions you build!