HoloLens at Microsoft Build: All About Collaboration

As expected, HoloLens was the star of Microsoft Build 2016, much as it was for Build 2015. There was a fundamental difference this year, though. Last year was all about showing you holograms in your world. Keynote demos showed single users exploring anatomy, controlling robots, and moving windows on their HoloLens desktops. Some attendees got to try the device on to see holographic architectural models in the Trimble demo. Those who went to the Holographic Academy got to learn how to build single-user holographic apps using Unity.


In 2016, there was a fundamental shift. I’m glad to say it was a shift that truly highlights how HoloLens stands out in a world of other VR and AR offerings: its capacity for collaborative apps. What do we mean by collaborative apps? It’s something we at Case Western have known was critical to our endeavors since Mark Griswold first put on the device in 2014, though we’d always called it “multilens.” However you prefer to phrase it, it’s the ability for multiple people, each with a HoloLens, to experience and interact with the same holograms in the same place in their world. A shared experience. Consider the difference in these two pictures:


On the top, we have one student learning about anatomy in a holographic app. While this is very exciting and powerful, the picture on the bottom shows a scenario that is far more beneficial to education: a teacher instructing a class of HoloLens users with a shared holographic app. If you’re wondering why this is so beneficial, you may want to peruse the following articles. The gist is: hardly anyone truly learns alone.

Impact of Peer Interaction on Conceptual Test Performance, http://scitation.aip.org/content/aapt/journal/ajp/73/5/10.1119/1.1858450

Why Peer Discussion Improves Student Performance on In-Class Concept Questions, http://science.sciencemag.org/content/323/5910/122.full

As an educational institution, we knew collaborative, multilens applications were absolutely critical to have. This has been a big push in all of our holographic work since 2015 and put us on the leading edge of HoloLens development in our partnership with Microsoft. We were therefore both humbled and thrilled to be a part of the keynote at Build, showcasing work on a multilens classroom and kicking off a conference whose HoloLens activities were all about collaboration.

Our demo in the keynote showcased two kinds of multilens learning: local and remote. Networking HoloLenses together takes the incredible power of holograms in your world and adds to it a fundamental part of learning through interaction: co-presence. Interestingly, local and remote multilens experiences are accomplished in technically very similar ways.

Further evidence of Microsoft’s push for shared experiences in HoloLens was this year’s Holographic Academy. The instructors themselves said that this year was specifically tailored to collaborative experiences, as this was the most requested topic from developers since last year’s Holographic Academy.

The academy first walked developers through deploying apps to the HoloLens, capturing gestures, and placing holograms in the room. After that, it was about using the HoloToolkit utilities to network lenses, sync hologram positions, and ultimately create a multilens game. In the pictures below you can see the other members of my Holographic Academy pod, each with a holographic avatar following their position.

The multilens game we built consisted of us shooting projectiles at each other’s avatars. Good times.

I was very happy to speak at Build on how these multilens applications can be built, in my developer talk entitled “Building Collaborative Educational Experiences in HoloLens.” After all, creating a multilens educational application brings certain technical challenges.

Remember, every HoloLens is a self-contained device running its own copy of an app. That means in the picture below, there are four instances of an app running.

This raises some technical questions:

- How does everyone in the class see the holograms in the same place?
- How do they see the same hologram at the same time?
- How can the teacher point out what they want the students to see?
- How can people without a HoloLens follow along?
- What if a student, or even a teacher, is not there that day?

 

Our solutions were to use HoloLens’s spatial mapping to calibrate and align scenes, and Unity’s networking for multilens experiences. Unity’s networking is the same technology you would use to create an online shooter, but the same underlying principles can be applied to a multilens educational app.
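For context, a minimal way to stand up such a session with Unity’s NetworkManager might look like the sketch below. This is an assumption-laden illustration, not our exact setup: ClassroomSession, IsTeacher, and TeacherAddress are hypothetical names, and the teacher’s IP is a placeholder.

```csharp
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical session starter: the teacher's lens hosts (server + client)
// and each student lens joins as a client. Assumes a NetworkManager exists
// in the scene, as Unity's networking requires.
public class ClassroomSession : MonoBehaviour
{
    public bool IsTeacher;                          // set by your own role logic
    public string TeacherAddress = "192.168.1.10";  // placeholder address of the teacher's lens

    void Start()
    {
        NetworkManager manager = NetworkManager.singleton;
        if (IsTeacher)
        {
            manager.StartHost();                    // teacher acts as server and local client
        }
        else
        {
            manager.networkAddress = TeacherAddress;
            manager.StartClient();                  // students connect to the teacher
        }
    }
}
```

This mirrors the decision described later in the article: the instructor is the server, and everyone else is a client.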

 

I decided to cover these technical questions by going through code snippets of solutions, in a similar order to how we solved them in our own development research.

 

World Anchor Sharing for putting holograms in the same place across lenses

 

World Anchors are a quick and easy way for each lens to see holograms in the same place, provided each has adequately mapped the room. Regardless of what networking solution you choose, you can sync a byte array across devices that represents a hologram’s position in the room:

 

“Teacher” Lens:

public GameObject origin;
private WorldAnchor originWorldAnchor;

void Start()
{
    originWorldAnchor = origin.GetComponent<WorldAnchor>();
}

private void ExportGameRootAnchor()
{
    WorldAnchorTransferBatch transferBatch = new WorldAnchorTransferBatch();
    transferBatch.AddWorldAnchor("originWorldAnchor", this.originWorldAnchor);
    WorldAnchorTransferBatch.ExportAsync(transferBatch, OnExportDataAvailable, OnExportComplete);
}

 

“Student” Lens:

public GameObject origin;
private WorldAnchor originWorldAnchor;
private byte[] importedData;

private void ImportRootGameObject()
{
    WorldAnchorTransferBatch.ImportAsync(importedData, OnImportComplete);
}

private void OnImportComplete(SerializationCompletionReason completionReason, WorldAnchorTransferBatch deserializedTransferBatch)
{
    this.originWorldAnchor = deserializedTransferBatch.LockObject("originWorldAnchor", this.origin);
}
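The teacher-side export call above references two callbacks that are worth spelling out. A minimal sketch of them follows; note that SendToStudents is a hypothetical stand-in for whatever networking transport you use to deliver the bytes to the student lenses.

```csharp
using System.Collections.Generic;
using UnityEngine.VR.WSA.Sharing;

// Buffers anchor data as WorldAnchorTransferBatch.ExportAsync produces it,
// then ships the whole byte array once the export succeeds.
private List<byte> exportedData = new List<byte>();

private void OnExportDataAvailable(byte[] data)
{
    exportedData.AddRange(data); // export data arrives in chunks
}

private void OnExportComplete(SerializationCompletionReason completionReason)
{
    if (completionReason == SerializationCompletionReason.Succeeded)
    {
        SendToStudents(exportedData.ToArray()); // hypothetical: your transport here
    }
}
```

On the student side, the received byte array becomes importedData and is fed to ImportAsync as shown above.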

 

 

Syncing an Integer as a Slide Number

 

Oftentimes in instruction, you are presenting material as a slide show: showing the human skeleton in slide 1, the circulatory system in slide 2, and so forth. This was our first step in verifying that multilens applications could work: assigning slide numbers to given states of the holograms and syncing the integer of that slide number.

 

In Unity’s networking, you would begin doing this by adding a NetworkIdentity component to an object. This enables networking capability when you derive your behavior scripts not from MonoBehaviour but from NetworkBehaviour (which itself derives from MonoBehaviour).

 

using UnityEngine.Networking;

public class SyncSlideNumber : NetworkBehaviour
{
}

 

To synchronize an integer value, you can use the SyncVar attribute. Picture that each of these boxes is a HoloLens running the same Unity app with the same cube object:

We can assign one HoloLens to be the server (as well as a client, or player, in the scene). Clients can tell the server about an update to the SyncVar, and the result will propagate to all clients. This is done in a method designated by the [Command] attribute, which means it runs on the server. We can then transmit, sync, and update clients of a slide number change with the following code:

public class SyncSlideNumber : NetworkBehaviour
{
    public GameObject[] ObjectsToShow; // one "slide" per object
    public int SlideNumber;

    [SyncVar]
    private int syncSlideNumber;

    void Update()
    {
        if (isServer)
        {
            if (Input.GetMouseButtonDown(0)) // could be set as AirTap
            {
                SlideNumber = (SlideNumber + 1) % ObjectsToShow.Length;
                activateObject(SlideNumber);
                TransmitSlideNumber();
            }
        }
        else
        {
            if (syncSlideNumber != SlideNumber)
            {
                SlideNumber = syncSlideNumber;
                activateObject(SlideNumber);
            }
        }
    }

    private void activateObject(int index)
    {
        // Set up a "slide" here
    }

    [ClientCallback]
    void TransmitSlideNumber()
    {
        CmdSendSlideNumberToServer(SlideNumber);
    }

    [Command]
    private void CmdSendSlideNumberToServer(int index)
    {
        syncSlideNumber = index;
    }
}

 

We made a conscious decision to have the instructor of the class be the server and the only one who can advance the slide. We check this with the built-in isServer boolean that comes with the NetworkBehaviour base class.

 

Syncing a Camera Position

 

We are not limited to syncing an integer; we can also sync a position and rotation of the camera of any HoloLens in the scene. This provides us with abilities like the “laser gazer” (top picture), following along on a tablet from the point of view of a HoloLens (middle picture), or remote collaboration (lower picture).

 

We can put the scripts that do this on a Player prefab in the Unity networking system. To do this, it’s important to be aware of the concept of local player authority. This means that in every running instance of the app, there exists a copy of every player’s object and scripts, but only one of them is the local player in each scene and drives its own position. The ones that aren’t the local player are driven over the network.

Transmitting from the local player but syncing when not the local player leaves us with this code snippet for basic camera syncing:

public class SyncPlayerCamera : NetworkBehaviour
{
    [SyncVar] Vector3 syncPoint;
    [SyncVar] Quaternion syncRotation;
    public static float LerpSpeed = 5f;
    public float sendRate = .1f;

    void Start()
    {
        if (isLocalPlayer)
            StartCoroutine(TransmitCoroutine());
    }

    IEnumerator TransmitCoroutine()
    {
        while (true)
        {
            TransmitInfo(this.transform.localPosition, this.transform.localRotation);
            yield return new WaitForSeconds(sendRate);
        }
    }

    [ClientCallback]
    void TransmitInfo(Vector3 point, Quaternion rot)
    {
        CmdSendPointToServer(point, rot);
    }

    [Command]
    private void CmdSendPointToServer(Vector3 point, Quaternion rot)
    {
        syncPoint = point;
        syncRotation = rot;
    }

    void Update()
    {
        if (!isLocalPlayer)
        {
            this.transform.localPosition = Vector3.Lerp(this.transform.localPosition, syncPoint, Time.deltaTime * LerpSpeed);
            this.transform.localRotation = Quaternion.Lerp(this.transform.localRotation, syncRotation, Time.deltaTime * LerpSpeed);
        }
    }
}
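Once a camera pose is synced, it can drive visuals like the “laser gazer” mentioned above. The following is an illustrative sketch, not our production code: LaserGazer, Line, and RayLength are hypothetical names, and the LineRenderer would need to be configured in the inspector.

```csharp
using UnityEngine;

// Illustrative sketch: draws a "gaze ray" from a synced camera transform
// so everyone can see where a remote HoloLens user is looking.
// Attach alongside the camera-syncing script on the Player prefab.
public class LaserGazer : MonoBehaviour
{
    public LineRenderer Line;    // assign and style in the inspector
    public float RayLength = 3f; // how far the gaze ray extends

    void Update()
    {
        // The transform is already driven by the network for non-local players.
        Line.SetPosition(0, transform.position);
        Line.SetPosition(1, transform.position + transform.forward * RayLength);
    }
}
```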

 

Syncing a Hand Position

To sync a hand position, we used the InteractionManager in the UnityEngine.VR.WSA.Input namespace to get the hand centroid and then sync the position using a similar networking solution.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.VR.WSA.Input;

public class SyncPlayerHand : NetworkBehaviour
{
    public Transform HandTransform;
    public float SendRate = .1f;
    public static float LerpSpeed = 5f;
    [SyncVar] private Vector3 syncHandPos;
    private Vector3 handPosCurrent;
    private bool handSeen;

    void Start()
    {
        syncHandPos = new Vector3(.25f, -.5f);
        handPosCurrent = syncHandPos;
        if (isLocalPlayer)
        {
            InteractionManager.SourceUpdated += SourceManager_SourceUpdated;
            InteractionManager.SourceLost += SourceManager_SourceLost;
            StartCoroutine(TransmitCoroutine());
        }
    }

    private void SourceManager_SourceLost(InteractionSourceState state)
    {
        handSeen = false;
        if (state.source.kind == InteractionSourceKind.Hand)
            handPosCurrent = this.transform.position + this.transform.right * .25f + this.transform.up * -.5f; // some default
    }

    private void SourceManager_SourceUpdated(InteractionSourceState state)
    {
        handSeen = true;
        if (state.source.kind == InteractionSourceKind.Hand)
            state.properties.location.TryGetPosition(out handPosCurrent);
    }

    IEnumerator TransmitCoroutine()
    {
        while (true)
        {
            TransmitInfo(HandTransform.localPosition);
            yield return new WaitForSeconds(SendRate);
        }
    }

    [ClientCallback]
    void TransmitInfo(Vector3 handPos)
    {
        CmdSendPointToServer(handPos);
    }

    [Command]
    private void CmdSendPointToServer(Vector3 handPos)
    {
        syncHandPos = handPos;
    }

    void Update()
    {
        if (isLocalPlayer)
        {
            if (!handSeen)
                handPosCurrent = this.transform.position + this.transform.right * .25f + this.transform.up * -.5f; // some default
            HandTransform.localPosition = handPosCurrent;
        }
        else
        {
            if (HandTransform)
            {
                handPosCurrent = Vector3.Lerp(handPosCurrent, syncHandPos, Time.deltaTime * LerpSpeed);
                HandTransform.localPosition = handPosCurrent;
            }
        }
    }
}

 

In Conclusion: Remember the power of shared experiences

If we know one thing about HoloLens from this year’s Build conference, it’s that HoloLens has unique capabilities for physical co-presence in holographic experiences, and we need to make use of them. It’s a piece of knowledge that fits well with our philosophy at Case Western Reserve University’s Interactive Commons, where our drive has always been for sharing knowledge between different disciplines and collaboratively solving big problems. I hope you join us in having fun building collaborative applications with HoloLens!

 

Jeff Mlakar