An anchor is a mechanism used to attach content to the physical world. In this article, we'll look at two different types of anchors offered through Azure Mixed Reality Services: Azure Spatial Anchors, which are used to attach content to physical locations, and Azure Object Anchors, which are used to attach content to physical objects.
What is an Azure Spatial Anchor?
An Azure Spatial Anchor represents a physical point in the world that persists in the cloud. As with local spatial anchors, holograms can be attached to a Spatial Anchor. The unique aspect of a Spatial Anchor is its ability to be stored and persisted in the cloud and queried later, either by the device that created it or by any other supported device. This enables both cloud backup of anchors and cloud-based anchor sharing.
Picture this scenario: you and a friend are at your house, and you agree to play a round of virtual chess using Mixed Reality devices. You set up the game by orienting a holographic chess board on a table. On your devices, you both view the chess board in the same place in the real world (the tabletop). It stays anchored to that spot regardless of where you move in the physical space. You could even end the session and restart it the following day without having to place the anchor again. Azure Spatial Anchors help build multi-user, cross-platform experiences like these!
Behind the scenes, the chess app saves the location of the chess board with a spatial anchor that is persisted in the cloud. This includes feature information about the point in the environment where the anchor is stored. The chess app shares the Spatial Anchor information with Azure Spatial Anchors in the cloud. The application on your friends’ HoloLens, iOS, or Android device can then query Azure Spatial Anchors for that anchor’s position. Once the anchor is found, the application can render a chess board in the same physical location on as many devices as you would like.
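The save-then-query flow above can be modeled with a minimal Python sketch. This is not the real Azure Spatial Anchors SDK — names like `CloudAnchorStore` and `SpatialAnchor` are hypothetical stand-ins that illustrate the idea of persisting an anchor to the cloud and retrieving it from another device:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class SpatialAnchor:
    """Describes a physical point; simplified here to a 3-D position
    standing in for the real feature information about the environment."""
    position: tuple
    anchor_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class CloudAnchorStore:
    """Hypothetical stand-in for the Azure Spatial Anchors cloud service."""
    def __init__(self):
        self._anchors = {}

    def save(self, anchor: SpatialAnchor) -> str:
        # Persist the anchor so any supported device can query it later.
        self._anchors[anchor.anchor_id] = anchor
        return anchor.anchor_id

    def query(self, anchor_id: str) -> SpatialAnchor:
        # A friend's HoloLens, iOS, or Android device looks up the same anchor.
        return self._anchors[anchor_id]

# Your device anchors the chess board to the tabletop...
cloud = CloudAnchorStore()
board_id = cloud.save(SpatialAnchor(position=(1.2, 0.8, -0.5)))

# ...and your friend's device retrieves it, rendering the board in the same spot.
found = cloud.query(board_id)
```

The key design point the sketch captures is that only the anchor ID needs to be shared between devices; the cloud holds the environmental data that lets each device re-locate the same physical spot.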
Another experience Spatial Anchors can enable is wayfinding. For example, a developer could build an application using multiple Azure Spatial Anchors that are placed in sequence, creating a path. These anchors get visually connected to each other to build a graph of anchors. This helps guide users to specific points in the real world.
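The wayfinding idea — anchors placed in sequence and connected into a graph — maps naturally onto a graph search. The sketch below is a hypothetical model (the anchor names and graph structure are invented for illustration), showing how a route through connected anchors could be computed to guide a user:

```python
from collections import deque

# Hypothetical anchor graph: each anchor ID maps to the anchors it is
# visually connected to along the authored path.
anchor_graph = {
    "entrance": ["hall"],
    "hall": ["entrance", "lab", "cafeteria"],
    "lab": ["hall"],
    "cafeteria": ["hall", "exit"],
    "exit": ["cafeteria"],
}

def find_route(graph, start, goal):
    """Breadth-first search over the anchor graph: returns the sequence
    of anchors to walk through when guiding a user from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

route = find_route(anchor_graph, "entrance", "exit")
```

At runtime, the application would render guidance (arrows, markers) at each anchor along the returned route.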
Using these features, Azure Spatial Anchors enable developers to build experiences around persisting and sharing holographic content in the real world, allowing the content to be viewed in the same space over time.
What is an Azure Object Anchor?
An Azure Object Anchor represents a position and orientation relative to a real-world object in your environment. It provides a common frame of reference that allows you to place digital content in the same physical location as a real-world object. Using this approach, you avoid the need for physical markers (for example, QR codes) or manual alignment.
Picture a scenario where employees at a service center are performing maintenance on a car with the assistance of an application on their HoloLens 2 device. Visual overlays and markers directed at various parts of the car help the operators stay in the flow of work while following step-by-step instructions displayed directly in front of them. This works by submitting a 3D model of the car to the Azure Object Anchors service, which learns its shape and outputs an Object Anchor model. Physical objects are detected by their shape using the HoloLens's depth camera. Using the Azure Object Anchors runtime SDK, your HoloLens application loads the car's Object Anchor model, then uses it to detect the car in the real world. Now that your application knows the precise location of the car, that information can be used to build the maintenance walkthrough experience by highlighting various components or overlaying digital instructions.
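The ingest-then-detect workflow can be sketched in a few lines of Python. This is a conceptual model only — `ObjectModel`, `Pose`, `detect`, and the shape signatures are all hypothetical, standing in for the learned shape data and the depth-camera matching the real service performs:

```python
from dataclasses import dataclass

@dataclass
class ObjectModel:
    """Output of the (hypothetical) model-ingestion step: the service
    has learned the car's shape from a submitted 3-D model."""
    name: str
    shape_signature: frozenset  # stand-in for learned shape features

@dataclass
class Pose:
    position: tuple
    rotation: tuple  # quaternion (x, y, z, w)

def detect(object_model, depth_scan):
    """Sketch of runtime detection: match the learned shape against what
    the depth camera observed and, on a match, return the object's pose."""
    for observed_shape, pose in depth_scan:
        if observed_shape == object_model.shape_signature:
            return pose
    return None

car_model = ObjectModel("sedan", frozenset({"hood", "roof", "wheel_arch"}))
depth_scan = [
    (frozenset({"shelf", "toolbox"}), Pose((0.0, 0.0, 3.0), (0, 0, 0, 1))),
    (frozenset({"hood", "roof", "wheel_arch"}), Pose((2.0, 0.0, 5.5), (0, 0.7, 0, 0.7))),
]
car_pose = detect(car_model, depth_scan)
```

Once the pose is known, the application has the transform it needs to align overlays and instructions with the physical car.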
Azure Object Anchors helps move from a manual process to a walk-up and work experience that improves user learning and reduces errors by automatically detecting objects in the environment!
One unique feature of an Azure Object Anchor is that a single Object Anchor model can be detected in a variety of different locations or environments. In the example above, the user could use the application to detect the car model in different locations in the garage or even in an entirely different service center. If all copies of the car have the same physical shape, the Object Anchor will properly identify the location of each copy. This is different from a Spatial Anchor, which is tied to a single physical location in the world and will only be found in the same physical location where it was created.
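This one-model, many-instances behavior can be illustrated by extending the matching idea to return every pose whose shape matches, rather than just the first. Again, the function and data below are hypothetical illustrations, not service APIs:

```python
# One Object Anchor model, many physical copies: a single learned shape
# matches every car of that model found in the scan.
def detect_all(shape_signature, depth_scan):
    """Return the pose of every observed object whose shape matches."""
    return [pose for observed, pose in depth_scan if observed == shape_signature]

sedan = frozenset({"hood", "roof", "wheel_arch"})
garage_scan = [
    (sedan, (2.0, 0.0, 5.0)),               # first copy of the car
    (frozenset({"shelf"}), (0.0, 0.0, 1.0)),  # unrelated object
    (sedan, (8.0, 0.0, 5.0)),               # second copy, elsewhere in the garage
]
matches = detect_all(sedan, garage_scan)
```

Contrast this with a Spatial Anchor, which would resolve to exactly one physical location — the one where it was created.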
Hybrid Use Cases Across Azure Spatial Anchors and Azure Object Anchors
The following examples demonstrate how Azure Object Anchors and Azure Spatial Anchors can be used together to unlock more spatially aware Mixed Reality experiences.
Scenario: Interactively training employees using a “learning by doing” approach on a factory floor
Azure Object Anchors: Using object detection to identify a given machine on a factory floor, employees can see a digital overlay of instructions to be run when they find the machine in question.
Azure Spatial Anchors: Using Azure Spatial Anchors, employees can apply anchors to different locations of interest on the factory floor, which will be persisted across time. Azure Spatial Anchors help employees navigate indoors and find the content they care about in the space.
Together: When an object is detected in the environment using Object Anchors, we can drop a Spatial Anchor in that location with metadata about the object. When the employee walks through the space, they have a Spatial Anchor telling them where to go to find a machine on a factory floor. Once they get to the Spatial Anchor indicating the machine, they can detect the Object Anchor. You can augment your object detection experience by additionally leading the user straight towards the object to be detected using Spatial Anchors!
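The detect-then-drop-anchor pattern described above can be sketched as follows. The functions and data are hypothetical — they model the idea of persisting a Spatial Anchor with machine metadata at each detection, then querying those anchors later for wayfinding:

```python
# Hypothetical combination: when Object Anchors detects a machine, we
# persist a Spatial Anchor at that spot carrying metadata about the object.
anchors = []

def on_object_detected(machine_name, position):
    """Called when Object Anchors finds a machine: drop a Spatial Anchor
    there, tagged with metadata describing what was detected."""
    anchors.append({"position": position, "metadata": {"machine": machine_name}})

def anchors_for(machine_name):
    """Wayfinding step: which persisted anchors lead to this machine?"""
    return [a for a in anchors if a["metadata"]["machine"] == machine_name]

# A walkthrough of the factory floor detects two machines...
on_object_detected("lathe", (4.0, 0.0, 12.5))
on_object_detected("press", (9.0, 0.0, 3.0))

# ...so a later employee can be led straight to the lathe before the
# Object Anchor detection step even runs.
lathe_anchors = anchors_for("lathe")
```

The metadata payload is the glue: the Spatial Anchor carries "what is here," while Object Anchors confirms and precisely localizes the machine once the employee arrives.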
Scenario: Event staging and daily maintenance assistance for a theatre prop team
Theatre sets are very specific and detailed when it comes to the props that are used to set up a scene. Using spatial insight from our Azure Mixed Reality services, theatre tech teams can stage scenes faster and keep props positioned correctly between performances.
Azure Object Anchors: Object Anchors can be used to identify objects in the scene (for example, a couch) and align their 3D holographic representations with the real objects.
Azure Spatial Anchors: Spatial Anchors can help the employees keep track of the positions of the different props on the stage. Since a play has multiple scenes, it can be confusing and time consuming to have to memorize where each object must be in a short amount of time. Azure Spatial Anchors help map out the stage and identify the locations where different objects need to be placed on the stage.
Together: Every piece of furniture can be scanned by Azure Object Anchors for object detection and anchored to a specific location by Azure Spatial Anchors, positioning each piece relative to its corresponding location. When the theatre stage is re-arranged for a scene change, spatial insight can help re-position objects to the locations that have been pre-authored.
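The re-positioning check at the heart of this scenario can be sketched as a comparison between the pre-authored anchor layout and the detected prop positions. All names and coordinates below are hypothetical illustrations:

```python
import math

# Pre-authored stage layout for the current scene (Spatial Anchors), and
# the positions where Object Anchors actually detected each prop.
authored = {"couch": (1.0, 0.0, 2.0), "lamp": (3.0, 0.0, 1.0)}
detected = {"couch": (1.05, 0.0, 2.02), "lamp": (4.5, 0.0, 0.2)}

def props_to_move(authored, detected, tolerance=0.1):
    """Flag any prop whose detected position has drifted from its
    authored anchor by more than the tolerance (in metres)."""
    out = []
    for prop, target in authored.items():
        if math.dist(target, detected[prop]) > tolerance:
            out.append(prop)
    return out

misplaced = props_to_move(authored, detected)
```

In a real application, each flagged prop would get a holographic marker at its authored anchor showing the crew exactly where it belongs.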
With Azure Object Anchors and Azure Spatial Anchors, a theatre prop employee does not have to memorize the specific locations of all the props; all they need to do is wear a HoloLens 2 device, anchor each prop to its desired location, and let the services do the rest!
As shown above, Azure Spatial Anchors and Azure Object Anchors can unlock immersive Mixed Reality experiences by using a variety of different anchoring mechanisms. Check out how these can be tied into your products to make them more aware of the objects and spaces around your users!
Azure Spatial Anchors is currently supported on HoloLens 1, HoloLens 2, iOS devices with ARKit, and Android devices with ARCore.
Azure Object Anchors is currently supported on HoloLens 2.
Get started with Azure Spatial Anchors sample code here.
Get started with Azure Object Anchors sample code here.