3 weeks to an AR prototype

In February we wanted to see if we could add an element to our workshops at Oliver Wyman that would bring more depth and interaction to the content we present. We chose to add a layer of content to our personas that would help participants empathize more deeply than they might if we simply presented them, as we typically did, on a large poster with a block of descriptive text. Though useful, the posters didn’t seem to stick: workshop participants would lose focus and forget the personas, and facilitators would have to remind them as the workshops progressed. So we created an augmented reality (AR) app that would allow for deeper interaction. We used AR to add video content to each persona, with the video triggered by pointing your phone at the persona board. We developed the content and chose the navigation based on the four persona principles I’ve used over the years to more meaningfully differentiate each persona.
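To make that triggering mechanic concrete, here is a minimal sketch of how it might be wired up in Unity with AR Foundation image tracking. The component, persona names, and video mapping are illustrative assumptions, not our production code.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Video;
using UnityEngine.XR.ARFoundation;

// Hypothetical sketch: when the camera recognizes a persona board
// (registered as a reference image), play that persona's interview video.
public class PersonaVideoTrigger : MonoBehaviour
{
    [Serializable]
    public class PersonaClip
    {
        public string referenceImageName; // name of the board in the reference image library
        public VideoClip clip;            // the interview video for that persona
    }

    [SerializeField] private ARTrackedImageManager trackedImageManager;
    [SerializeField] private VideoPlayer videoPlayer;   // plays the video on-screen
    [SerializeField] private List<PersonaClip> personaClips = new List<PersonaClip>();

    void OnEnable()  => trackedImageManager.trackedImagesChanged += OnTrackedImagesChanged;
    void OnDisable() => trackedImageManager.trackedImagesChanged -= OnTrackedImagesChanged;

    void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
    {
        // A persona board has just come into view: start its video.
        foreach (var trackedImage in args.added)
        {
            var match = personaClips.Find(
                p => p.referenceImageName == trackedImage.referenceImage.name);
            if (match != null)
            {
                videoPlayer.clip = match.clip;
                videoPlayer.Play();
            }
        }
    }
}
```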

As I developed the interface for the AR app, it was fascinating to discover the challenge of where to place the navigation. Should it live “on-screen” or “in-scene”? If in-scene, as most AR apps do it, would the experience detract from the video content? Would users get stuck in the AR world? AR does add a dimension to reality (think Pokémon Go), but would that added dimension detract from the storytelling of the video? These were all questions I wrestled with as we developed the prototype.

The prototyping process – we first used an off-the-shelf AR tool called Zapworks and then moved to Unity – was absolutely essential to learning how best to design the interface. Mostly owing to the cool factor, AR designers seem to default to in-scene navigation. But after many iterations it was clear to me that on-screen navigation was the best way to design for our video content, because that content was, as strange as this sounds, two-dimensional, and adding another dimension to it would be distracting at best and confusing at worst.
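The difference between the two options is easy to see in Unity terms: the same navigation buttons can sit on a screen-space overlay canvas (flat, always facing the user) or on a world-space canvas pinned to the persona board in the scene. The sketch below illustrates that choice under assumed component and field names; it is not the prototype’s actual code.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Hypothetical sketch of the on-screen vs in-scene decision.
public class PersonaNavigation : MonoBehaviour
{
    [SerializeField] private Canvas navigationCanvas;
    [SerializeField] private VideoPlayer videoPlayer;
    [SerializeField] private VideoClip[] personaVideos; // one interview clip per persona

    [SerializeField] private bool useOnScreenNavigation = true;
    [SerializeField] private Transform boardAnchor; // tracked persona board, only used in-scene

    void Start()
    {
        if (useOnScreenNavigation)
        {
            // On-screen: flat, 2D navigation layered over the camera feed.
            navigationCanvas.renderMode = RenderMode.ScreenSpaceOverlay;
        }
        else
        {
            // In-scene: the canvas floats in 3D space, attached to the board.
            navigationCanvas.renderMode = RenderMode.WorldSpace;
            navigationCanvas.transform.SetParent(boardAnchor, worldPositionStays: false);
            navigationCanvas.transform.localScale = Vector3.one * 0.001f; // UI units to meters
        }
    }

    // Wired to each navigation button's OnClick in the inspector.
    public void ShowPersona(int index)
    {
        videoPlayer.clip = personaVideos[index];
        videoPlayer.Play();
    }
}
```

Keeping the canvas in screen space is what preserved the flat, video-first feel we wanted; the world-space branch is the “cool factor” default we iterated away from.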

From design to fully working prototype took three weeks. We created quite a bit of video content – short interviews with each of our personas – and edited it to fit both the context of our workshops and the format of the app.

The working prototype was quite a success. Workshop participants found the app engaging, and it helped them focus and keep their attention on the personas over the course of the day.

It was particularly rewarding for me because the process epitomized making as essential and core to the design process. Going into the project I had little to no experience of my own, beyond my usual passion for figuring things out, and little that I could draw on from others, despite countless hours of searching the interwebs. Designing the UI and the content required experiencing the dimensional challenges of AR firsthand. I was, in fact, designing for a dimensionalized experience, and the challenge became not letting the experience itself distract users and draw them away from the content.

As we move to voice as an interface, and as more of these AR-type experiences are developed off-device (holograms! Obi-Wan, we need your help!), the art of exploring and defining by making will clearly be the preferred design method.