Spatial UI

Project: Create // Role: Lead Designer

Designing a Spatial Menu Interface

Create’s UI was one of the most fun features I got to design. From the beginning of the project we knew this would be the first 3D UI any user would encounter in an immersive application, so we had to make sure it was accessible, scalable and, most importantly, that it respected the premise of Spatial Computing.

I want to acknowledge direct contributions from Jonathan Brodsky (Sr Engineer) and Michael Laufbahn (Visual Designer); none of this would’ve been possible without their hard work.

We set out to make an interface that felt like a familiar object, one that required minimal cognitive effort to process and very low physical exertion to operate. The objects contained within the menu had to stand out visually and be easy to select regardless of distance; they needed to be presented in an enticing way so users would feel curiosity before selecting them and delight when experimenting with them in the physical world. Tools, actions and nested options needed to be discoverable and always available to support an efficient workflow.

Identifying our goals allowed me to determine our UX pillars:

  1. Comfortable – Must enable a state of physical ease and freedom from pain or constraint.
  2. Intuitive – Must present the lowest entry barrier possible, making the experience easy to operate by anyone.
  3. Authentic – Must feel and look true to the Spatial Computing medium.
  4. Safe To Explore – Must provide fallbacks and recovery to prevent and resolve unintended consequences.
  5. Immersive – Must maintain suspension of disbelief and encourage a state of play.

The Iterative Process

I’ve lost count of the number of iterations we went through to reach the final form and function of this UI, but every prototype and concept we developed taught us something along the way, surfacing do’s and don’ts for Spatial Menu Interfacing. Some concepts never made it past the sketching phase, some made it past the paper prototyping stage, while others would be fully prototyped, tested and validated before a new revision would surface.

The following images are some of the most notable concepts illustrating the evolution of the menu; they’re meant to show how we aligned design goals with user feedback, starting with flat panels and ending with what we called “The Lightbox”.


This was the initial iteration. At this point we were still defining our specs and assessing how we would present tool panels, content panels and object categories or filters.

The panels proved to be highly usable and felt very familiar to users, but we wanted to challenge ourselves to think of a more unique way to present virtual content in Spatial Computing.

Starting with what we knew (2D interfaces) allowed us to quickly block out a functional prototype and begin the iterative process.

Revolving Platform

This iteration presented users with a scrollable cylindrical platform that spawned an array of randomly generated 3D objects from a portal-like box.

We observed that users reacted very positively to the way the objects were previewed; moving away from 2D images and showing volumetric content at scale provided a much better sense of the form and function of each object, reducing guesswork and enticing a state of play and discovery.

However, this system wasn’t scalable in its current form, as it only supported a limited number of visible objects, which made it difficult to efficiently sort through them to find specific ones.

Testing feedback also revealed that users grew frustrated as they became more engaged with the core experience, due to the ambiguous way in which objects had to be sorted through.

Work Station

To address some of the limitations of the previous prototype, we built a work bench to provide users with a virtual drawer that automatically sorted objects by their corresponding category, making them all available at once.

This helped with discoverability but still suffered from the inherent constraints of a physical object.

The drawer provided limited space, which constrained the number of objects that could be placed within it; to fit more objects we had to scale the preview instances down, which made them more difficult to observe and analyze. We also learned that constraining interaction to a virtual surface discouraged immediate play with the physical world.

Note: During this phase of development the core experience was centered around building modular contraptions and using virtual pawns to control/affect the things that the user had put together. This system continues to be an important component of the final experience, which is now more open ended.

Utility Cabinet

To address the scalability issue, we thought about using a cabinet, which afforded (and required) more space. The cabinet provided users with removable elements such as trays, shelves and drawers which could be used to sort and group the objects however users preferred.

We moved away from assembling objects on virtual surfaces and limited the cabinet to serve only as an instancing and content management system to encourage users to grab the objects they wanted and then play with them in their physical space.

A big problem we identified with this prototype derived from the considerable amount of real estate it occupied in the physical world; this taught us a valuable lesson about the importance of respecting the user’s (often limited) play space.

Additionally, we observed that providing removable components (trays, shelves, drawers, etc.) that behaved like physical objects often resulted in users losing track of where they had put an object or a tray. Another problem we noticed was that a vertical object viewed from a horizontal aspect-ratio POV would result in content clipping, which quickly disrupted immersion.

It turns out the real world is already messy; designing for spatial computing should aim to meet user expectations, but should also leverage new affordances that are only possible in this new space. We started drawing from our learnings in game development and implementing familiar systems such as respawning to provide a more usable and efficient system that required less work from the user.


Rolodex

For this iteration we focused on adding more constraints based on our previous observations and user feedback. We also started to leverage the suspension of disbelief in order to add functionality to the interface that wouldn’t be possible when abiding strictly by real-world rules. We experimented with floating objects that could morph and accommodate a large array of nested objects.

The Rolodex presented users with a transforming modular interface that could be collapsed or expanded as needed. The main form was built around a floating, scrollable cylinder with a series of removable boxes that sorted an array of objects by their corresponding category. These boxes would automatically respawn at their corresponding location once the user discarded them.

This approach gave us a lot of room to begin experimenting with nesting objects while also scaling the number of objects available to the user. We received feedback about how playful it felt, but we also realized that by nesting objects two levels deep we were once again making a rather tedious task out of what should be a simple operation for any user.

Along the way we built an abstract version of the Rolodex, which was meant to feel more alive; the prism represented contained creativity, and when expanded it would first show a series of spheres containing category-based objects.

This approach reduced the amount of “digging” people had to do to access a particular object and provided a more organized framework for presenting the content. User testing showed, however, that unfamiliar and abstract forms failed to communicate how the interface should be used; it was meant to serve a utilitarian purpose, yet it would often confuse users. They loved how it looked, but it was not very intuitive.

Snow Globe

During this phase we were experimenting with themed play-sets as a way to sort content, and we were very close to finalizing the list of objects for the experience. We took this opportunity to experiment with how we could present content in a way that told a visual story while also hinting at how some objects related to others.

We took inspiration from museum installations and retail windows and tried to present our play-sets as intricately arranged dioramas. This approach allowed the content to really stand out; it was also the largest preview scale we had ever tried, which invited users to get close to the objects prior to releasing them into their space.

Right around this time we also decided to expose all tools and actions as a panel so users could quickly change their interaction modality. Prior to this, all of the tools and actions were mapped to the control inputs via a radial menu operated with the touchpad; the decision was informed by feedback revealing that most users found it tiring to redirect their attention towards the control and then navigate a menu to affect content that was already in front of them.

Some notable features that were added to this prototype include:

Resource Capacity Gauge – A visual indicator that displays the remaining content resources available. Running a physics-based experience on a mobile platform that had to render 60 frames per second for each eye meant we had to be really mindful of performance. Every instance of an object had a predefined resource cost assigned to it, and we needed to communicate this clearly so that users would have a reference point for how many more objects they could release into their space.
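The bookkeeping behind a gauge like this can be sketched as a simple budget check. This is a hypothetical illustration, not the shipped implementation; the budget, object names and costs are invented for the example.

```python
# Hypothetical sketch of the resource-budget bookkeeping behind the
# Resource Capacity Gauge: every object type carries a predefined cost,
# and spawning is only allowed while the running total stays under budget.
# All names and numbers below are illustrative assumptions.

RESOURCE_BUDGET = 100   # assumed total capacity

OBJECT_COSTS = {        # assumed per-object costs
    "block": 2,
    "gadget": 5,
    "character": 10,
}

class ResourceGauge:
    def __init__(self, budget=RESOURCE_BUDGET):
        self.budget = budget
        self.used = 0

    def can_spawn(self, kind):
        return self.used + OBJECT_COSTS[kind] <= self.budget

    def spawn(self, kind):
        # Reserve the object's cost; refuse if it would exceed the budget.
        if not self.can_spawn(kind):
            return False
        self.used += OBJECT_COSTS[kind]
        return True

    def fill_fraction(self):
        # Value the gauge UI would display (0.0 = empty, 1.0 = full).
        return self.used / self.budget
```

In use, the gauge would refuse a spawn once the remaining budget is smaller than the object's cost, which is exactly the moment the visual indicator reads as "full".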

Positional Indicator – To compensate for the lack of realtime shadows in additive rendering, we displayed a line from the center of any actively held object to the closest planar horizontal room surface (determined by the spatial map). This helped users with depth perception: most frequently they would assume an object was being scaled when it was really being translated away from or towards them.
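The geometry of that indicator reduces to a vertical projection onto the highest horizontal plane beneath the object. A minimal sketch, assuming the spatial map is already reduced to a list of plane heights (an assumption for illustration):

```python
# Hedged sketch of the Positional Indicator: drop a vertical line from a
# held object's center to the highest horizontal surface beneath it.
# Representing the spatial map as a flat list of plane heights is an
# illustrative simplification.

def positional_indicator(object_pos, plane_heights):
    """Return (start, end) of the indicator line, or None when no
    horizontal plane lies below the object. Positions are (x, y, z)
    with y pointing up."""
    x, y, z = object_pos
    below = [h for h in plane_heights if h <= y]
    if not below:
        return None
    ground_y = max(below)  # closest plane beneath the object
    return (object_pos, (x, ground_y, z))
```

With a floor at 0.0 m and a table at 0.7 m, an object held above the table projects onto the table surface rather than the floor, matching what a shadow would do.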

This iteration came very close to being the final version we shipped. It was considered feature complete and met most of the requirements we had set out to accomplish, but it had several flaws.

  • The half-dome layout was still taking up too much space.
  • Presenting fixed overlapping objects at that scale would sometimes obscure content, making it slightly more difficult to select at a distance.
  • People would often get too close to the content, running into the enforced camera clip distance.
  • It wasn’t always clear how far away users needed to move an object (from its spawn location) for it to be instanced.
  • Unique layouts per play-set meant that maintenance for this system became very costly against precious dev time.

The Lightbox

The final revision to our Menu Interface was the Lightbox. This was our last chance to take all of the learnings we accrued along the way and present them as a beautiful, scalable and intuitive interface that followed our five UX design pillars.

We focused on presenting the content itself as the focal point, with the clear intention that no piece be perceived as more important or valuable than the rest. We set all object previews to the same scale and distributed them equally on a 5 x 3 1/3 grid (the 1/3 was used to display a preview of the following content row, which sat beneath the fold).
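That 5 x 3 1/3 layout can be sketched as a scrolling window over rows of five slots, where the partial fourth row peeks out beneath the fold. Cell dimensions and the visibility math here are assumptions for illustration:

```python
# Illustrative sketch of the Lightbox's 5 x 3 1/3 preview grid: objects
# are laid out in rows of five; three full rows are visible plus the top
# third of the next row, hinting that more content sits below the fold.
# Cell size is an assumed value.

CELL_W, CELL_H = 0.2, 0.2    # assumed cell size in meters
COLS = 5
VISIBLE_ROWS = 3 + 1 / 3     # three full rows plus the 1/3-row preview

def slot_position(index, scroll_row=0):
    """Local (x, y) of a preview slot; y grows downward from the top."""
    row, col = divmod(index, COLS)
    return (col * CELL_W, (row - scroll_row) * CELL_H)

def is_visible(index, scroll_row=0):
    """A slot is shown if any part of it falls inside the 3 1/3-row window."""
    row = index // COLS
    return 0 <= row - scroll_row < VISIBLE_ROWS
```

Under this scheme, row 3 (the sixteenth object onward) is partially visible at the default scroll position, while row 4 only appears once the user scrolls.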

We sorted all objects into six main categories: Brushes, Stickers, Blocks, Gadgets, Characters, Worlds.

Taking inspiration from vending machines and toy store display windows we determined specific parameters to present our objects in order to convey their behavior upon being instanced into the real world. Objects that were affected by gravity would be placed on a shelf, objects that stuck to surfaces would stick to the background of the menu and objects that floated would appear offset from the shelves.

Hover states for object selection would enlarge the corresponding object by ‘x’% and trigger a yaw rotation of the preview, allowing the user to get a better look at the object without having to walk around it.
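A minimal sketch of that hover behavior, with the unspecified ‘x’% and the spin speed filled in with placeholder values purely for illustration:

```python
# Illustrative hover-state math: on hover, scale the preview up and spin
# it slowly about the Y axis so the user can inspect it without walking
# around it. The 10% enlargement and 45 deg/s spin are placeholder
# assumptions standing in for the unspecified 'x'% in the design.

HOVER_SCALE = 1.10   # assumed 10% enlargement
SPIN_SPEED = 45.0    # assumed yaw speed, degrees per second

def hover_transform(base_scale, yaw_deg, hovered, dt):
    """Return (scale, yaw) for a preview object after dt seconds."""
    if not hovered:
        return base_scale, yaw_deg
    return base_scale * HOVER_SCALE, (yaw_deg + SPIN_SPEED * dt) % 360.0
```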

The image below shows several Lightbox revisions; we were experimenting with how to present brushes to the user using panels and extractable paint sets. For consistency, we decided to present all brushes as little brush tips on the top shelf.

At this point we were very content with the main form: its horizontal layout worked great with the device’s FOV (Field of View), the scrollable shelving system was easy to use and very intuitive, and the adjacent panels afforded a great anchor point for tools and actions.

The following image shows the concept for our “pin” affordance, which allows the user to anchor the menu onto a specific horizontal surface. By default the menu is set on an invisible leash relative to the user, causing it to follow the user around their space once they’ve moved away by ‘x’ distance units.

We wanted to consolidate this interface as a singular object, so we embedded the tools and action panels onto a new container that started to look like a hybrid of a tablet and a display with depth. It felt very modern and sterile, which contrasted a lot with the more lighthearted tone of the experience.

We started iterating on the final shader for the interface and decided to make it more translucent; this caused the frame to almost disappear and really let the content shine. The background shader was hooked up to the raycast pointer to display a spotlight wherever the user was pointing within the content area. All of these subtle changes made a huge difference in user perception: people would now comment less and less on the UI itself and instead focus on all of the interesting objects contained within it.


We also made the decision to refine the form of the frame and tie the aesthetic to the menu reveal sequence interaction. The menu reveal happens right after users complete the mapping process (described here) and discreetly teaches users the core interaction for the experience: point at an object, pull and hold the trigger button, and move the control.

The user is shown a little handle sticking out from a light portal placed in front of them; after a few seconds a tooltip is displayed to reinforce the interaction and show which input is needed to pull the tab down.

As users pull the tab, the menu interface emerges from the portal, presenting users with a floating Lightbox containing over 50 objects that can be combined in different ways creating moments of joy and delight.

The final form of the menu was composed of top and bottom frames enclosing a series of scrolling shelves. To the left, six quick-navigation buttons to access content by category. To the right, a tools panel for users to select from the available raycast-based modes: Grab, Clone, Freeze, Delete and Clear All. Finally, at the bottom we placed the pin button and an options menu button to access system-level options.

Here’s the final wireframe for this revision, and beneath it the final target concept, which was implemented in the Unity engine.

Considering the User and their Space

A unique challenge we came across was the relationship between the user, their space and the interface. To make this UI a believable object we had to come up with a set of rules so that it could be perceived as something that not only fulfilled user expectations, but was also capable of navigating the physical world autonomously while having a sense of the user’s position.

We cannot yet predict user intent, so the least we could do was provide specific considerations to mitigate the known unknowns.

Collision – This allowed the menu to feel more physical: it prevented it from intersecting real-world objects, helping sustain the suspension of disbelief. We also added collision to the user’s headset, so that the menu couldn’t be intersected by the user’s head; this helped avoid the dreaded camera clipping, which would cut through geometry whenever users went past the minimum distance between the headset and the content.

Invisible Leash – We observed that in larger spaces users were more likely to move away from the location where they initiated the experience, which is where the menu would spawn and remain static unless intentionally moved. The farther away users were from the menu, the more difficult it became to interact with its components. To solve this, we set the menu on an invisible leash, which checks for a maximum distance radius relative to the user and keeps the menu within that distance so that it is always easy to reach.
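The leash behavior amounts to clamping the menu onto a maximum-radius circle around the user. A minimal sketch, assuming a 1.5 m radius (an invented value) and handling only the horizontal plane, since height is leveled separately:

```python
# Minimal sketch of the "invisible leash": if the user moves beyond a
# maximum radius, pull the menu back onto that radius so it always stays
# within reach. The 1.5 m radius is an assumed value.

import math

MAX_LEASH = 1.5  # assumed maximum distance in meters

def leash_update(menu_pos, user_pos, max_dist=MAX_LEASH):
    """Return the menu's new (x, y, z), pulled back onto a horizontal
    circle of radius max_dist around the user; height (y) is left to
    the separate leveling behavior."""
    dx = menu_pos[0] - user_pos[0]
    dz = menu_pos[2] - user_pos[2]
    dist = math.hypot(dx, dz)
    if dist <= max_dist:
        return menu_pos              # inside the radius: stay put
    scale = max_dist / dist          # shrink the offset onto the radius
    return (user_pos[0] + dx * scale,
            menu_pos[1],
            user_pos[2] + dz * scale)
```

Because the menu only moves once the radius is exceeded, it stays still during normal interaction and trails behind the user only when they walk away, which matches the behavior described above.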

Optional Anchoring – We also observed that some users became uneasy once they realized the menu was following them around; it clashed with their sense of object permanence and would sometimes be perceived as the menu disappearing from view. This made us consider that having the menu remain at a predetermined location would help a lot of users, so we added an anchoring affordance which could be toggled on or off and would keep the menu in place, ignoring the maximum distance constraint mentioned earlier.

Billboarding – To aid usability we added a billboarding behavior to the menu so that it always faces the user; this makes the menu yaw (rotate along the Y axis) relative to the user’s headset. By adding this functionality we removed the need for users to walk around the menu and instead made the menu turn towards them.
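Yaw-only billboarding reduces to a single angle: rotate the menu about Y so its front points at the headset, leaving pitch and roll alone so it stays upright. A sketch of the math (conventions assumed for illustration):

```python
# Sketch of the billboarding behavior: compute the yaw angle that makes
# the menu face the user's headset. Only the Y-axis rotation changes, so
# the menu never tilts. Axis conventions (y up, menu front on +Z at
# yaw 0) are assumptions for this example.

import math

def billboard_yaw(menu_pos, head_pos):
    """Yaw angle in radians that turns the menu toward the headset."""
    dx = head_pos[0] - menu_pos[0]
    dz = head_pos[2] - menu_pos[2]
    return math.atan2(dx, dz)   # 0 when the user is straight ahead on +Z
```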

Height-Based Leveling – We knew we had to make this interface accessible; it needed to function properly for all users regardless of their height or play preference (standing or seated). We decided to check the user’s headset height value and pass this information to the menu so that it could level itself and always be in view whenever the user turned their head towards it.
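The leveling behavior can be sketched as smoothing the menu's height toward an offset below the headset each frame. The offset and smoothing factor here are invented values, not the shipped tuning:

```python
# Sketch of height-based leveling: read the headset height each frame
# and ease the menu toward a point slightly below eye level, so it works
# for standing and seated users alike. Offset and smoothing factor are
# assumed values.

EYE_OFFSET = 0.15   # assumed drop below eye level, in meters
SMOOTHING = 0.1     # assumed per-frame interpolation factor

def level_menu_height(menu_y, headset_y,
                      offset=EYE_OFFSET, t=SMOOTHING):
    """Move the menu's height a fraction t toward headset_y - offset."""
    target = headset_y - offset
    return menu_y + (target - menu_y) * t
```

Easing rather than snapping keeps the menu from bobbing with every small head movement while still settling at a comfortable height when the user sits down or stands up.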

Visual Design Elements

Aside from designing the Menu Interface, I was also responsible for design documentation, wire-framing flows, visual design for iconography sets, UI 3D models, and 2D vector graphics, which I animated and implemented using Unity’s prefab animation workflow.

The following sections go into more detail about how we approached specific visual design elements for Project: Create.


Iconography

Create features an array of icons used to represent: menu buttons (tools, actions, categories), call-to-action prompts, pop-up buttons and the options menu.

A big challenge when dealing with new systems users are not yet familiar with is ensuring that the images used to represent them correlate as closely as possible to users’ preconceptions.

To validate our icons, we surveyed internal volunteers in an iconography analysis study. Participants were presented with a series of icons paired with the following two questions:

1. What does the icon look like?

2. What kind of (tool, object, action) would you associate with it?

Gathering feedback from a diverse group of people was a tremendous help in quickly identifying icons that were not communicating what they were meant to represent; this also allowed us to iterate rapidly in an informed manner, saving precious development time.

All of our icons were constructed using a grid system to ensure visual consistency and alignment. For the final implementation, the vector versions of the icons were imported into Maya and extruded into low-poly 3D geometry to give them depth; this approach prevented issues such as z-fighting and avoided transparency, which is sometimes costly to render.

Object Visual Feedback

Create provides users with a set of tools to interact with the content in different ways:

  • Grab Tool – Used to select and grab content.
  • Delete Tool – Used to remove instances of a spawned object.
  • Freeze Tool – Used to disable/enable gravity on physics-enabled objects.
  • Clone Tool – Used to quickly generate copies of an instance that has been spawned.

These tools can be selected either by using the tools panel on the menu interface, or via the bumper button on the controller, which displays a “Quick Tool Swap Menu”.

To provide clarity, we assigned a color to represent each tool. Colors are displayed on the raycast laser that extends from the controller, and whenever an object is pointed at, a fresnel highlight effect is displayed on the object to reinforce which tool the user is about to apply. We also used visual and audio effects to inform users about state changes applied to objects.

Additional visual elements were also displayed to represent object transforms: translation, rotation and scale. Some of these visual elements included:

  • Positional Locator – A line displayed from the center of an actively held object, extending to the nearest horizontal surface. The positional locator compensated for the lack of shadows and provided a sense of location relative to the user’s physical space.
  • Rotational Indicator – A set of arrows displayed whenever an object was being actively rotated on the Y axis (yaw). This aided with a sense of rotational direction for multiple objects. 
  • Object Inertia –  
  • Raycast Bendiness – To imply an object’s mass and velocity we added bendiness to the raycast laser; this helped make the virtual objects feel more physical and playful as users shook the controller while holding them.
