Space Between: Developer Build
For thousands of years now, the media we’ve created has been squashed onto things like stone, paper, or computer screens. Our ideas, values, events, most precious memories, all of human history - though experienced in three dimensions - have been forced to live and persist on synthetic two-dimensional planes. Luckily we’ve been able to leverage the incredible power of our minds to recreate them, pulling them out from the pages like pop-up books in our imaginations. Now, however, with the emerging technologies of immersive virtual and mixed reality, our media can break free from its shackles and exist in its natural form.
This year was the first time I’ve had the opportunity to work full time on VR projects. I’d done some mixed reality work with Canon’s MREAL and had experimented with some prototypes on my DK1, but beginning in January I was finally able to dive deep into the modern consumer VR devices that are sweeping across the tech industry and igniting a whole new wave of excitement and investment.
Our first VR project at Chronosapien was Edge of Home, built for mobile VR platforms (mainly Cardboard). We developed it for Kennedy Space Center as part of a package of apps meant both to augment the in-park experience and to extend it beyond the park. Edge of Home’s main objective was to give users the experience of being outside the International Space Station, letting them tour it and learn about the functionality of, and some interesting facts about, each of the modules that make up the station. For reasons of both scope and technical performance, users received these facts through sets of popup UI elements for each module.
I’ve been doing graphic design for quite a while and have designed websites, logos, and UI and effects for our games. However, in approaching the design for these UI elements on the ISS, I knew I would have to think a bit differently, since the medium was no longer flat. My idea was to break up the UI layers I would normally create and give them some depth by adding space between them in the virtual world. I was also going to try to break free from the “inside the box” design that tends to prevail in traditional flat media, separating the controls, content, and detail elements as much as I could to add visual interest and make use of all that wonderful 360-degree space that VR provides.
So I started like I normally would: I grabbed a screenshot of our placeholder ISS module from the perspective I wanted the user to have when viewing these popups, dropped it into Photoshop, and began cranking away at some modern, sci-fi-inspired UI elements. A few hours later, happy with the direction I was heading, I exported my layers, added them to our Unity project, and dropped them into the scene to see how things looked. That’s when I realized that designing this UI was going to be a much different and more challenging experience than the one I was used to.
Almost nothing looked the way it did in my Photoshop design, or even in scene view inside Unity, where I could fly around and view it from a hundred different perspectives. All of my elements were WAY too big in VR, and in spite of my efforts to create depth by breaking them up, everything still felt very flat - just a little less so than normal, and with an obnoxiously theater-sized presence. Feeling a bit nervous now about how this UI was going to end up looking, and how much work it was going to take to get it there, I popped off my headset and dug into Unity to start tweaking.
This was a terribly painful process. Every time I nudged, scaled, squished, or squashed the elements, I would run the scene again only to find that what I was designing in the scene and in Photoshop just wasn’t translating into VR. Scale seemed to get lost. Positioning was impossible. I was designing blind, or at best with one eye and a broken neck.
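If you’re stuck iterating outside the headset, a little trigonometry can at least get the initial scale into the right ballpark. The sketch below is my own illustration (not part of the Edge of Home project): it converts a desired apparent size - the angle a panel should subtend in the user’s view - into world units at a given viewing distance.

```python
import math

def world_size_for_angle(angle_deg, distance):
    """World-space width (in meters / Unity units) a panel needs
    so that it subtends `angle_deg` degrees when viewed from
    `distance` meters away."""
    return 2 * distance * math.tan(math.radians(angle_deg) / 2)

# A UI panel meant to fill about 30 degrees of the view at 2 m
# works out to roughly 1.07 world units wide.
print(world_size_for_angle(30, 2.0))
```

Running the same formula backwards - from your element’s current world size to the angle it subtends - is often the quickest explanation for why a design that looked tasteful in Photoshop feels like a billboard in the headset.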
Eventually I got things to a “good enough” state - or rather, to that point in a project where you have to take what you’ve got and run with it, or else cut features and drag the whole project behind schedule. We relied on juicing the UI up with animations and FX to help cover the half-baked design, but in the end I couldn’t help feeling that I was limping away from a battle I would certainly face again.
A few weeks later I started prototyping what is now our current project, Shapesong. Shapesong is a room-scale VR music exploration experience that lets players roam a virtual environment, playing and creating instruments and songs by themselves and (soon) with others. In many ways this project is a dream for me, because it melds together two of my passions - music and games. For that reason, I started with a kind of vigor I haven’t felt in a really long time, with ideas and inspiration bursting out of me. Still, I was feeling the sting of the wounds from Edge of Home.
Shapesong is a different kind of UI design challenge. Though I was used to creating text- and graphic-based UIs, I had never before created something as functional and demanding as a musical instrument. Musical instruments share many requirements with text UIs - they need to be easy to recognize and parse, they need to look beautiful, and of course they need to be functional. But instruments are played and performed, which demands speed as well as robustness. They’re also more than just an aesthetic footprint - they’re a personal statement. In many ways I feel that musical instruments are a pinnacle of user interface design.
With the challenges of designing for VR freshly formed in my mind, I begrudgingly started again with the process of dropping objects in, moving them around, then throwing my headset on to see if what I was doing was working. Things were going pretty well, all things considered, but the speed at which I could iterate was still excruciatingly slow. Then one day, after a healthy load of faux Chinese food, an idea popped into my head.
I knew that what I really needed to be doing was designing in VR, since there was no other way I could get a one-to-one representation of the things I was designing. The question was how. On the drive home I was mulling over that question when Tilt Brush popped into my mind. For the uninitiated, Tilt Brush is a room-scale VR app that lets users paint in 3D. It sounds really simple, but the feeling of awe I got when I first stepped beside a drawing I had made is one that will leave an indelible mark on my VR career. It’s just something you cannot do in physical reality, and in a few simple strokes it perfectly illustrates the power of VR and why it is truly a paradigm shift in the way we create and consume media.
Up until that point (and, as far as I know, still), everyone had been using Tilt Brush as a purely artistic tool. I’d seen Disney animators recreate classic characters, bedroom artists weave intricate room-scale worlds, and Redditors draw 3D dickbutts - but never anyone using it for productivity. I felt my hairs standing on end. It was like I’d discovered some hidden secret of VR development. Sure, Tilt Brush wasn’t going to give me layers, effects, and filters like Photoshop would, but it would at least give me a canvas to work through and iterate on design ideas, and a platform to share them with others.
When I got home, I threw on my headset as fast as I could and started drawing an instrument I had been prototyping in Unity. In a couple of minutes I had a 3D sketch of the instrument, full scale, right in front of me. From there I was able to start adding to it, tweaking it, looking at it from all angles, even pretending to use and play it like I would in the app. I wrote out notes in the air about how I intended certain things to work, drew placements for feet and controllers, and after three hours my room was filled with four new instruments, a timeline, a sequencer, and a wall of notes explaining my creations. Finally there were no obstacles in the way of designing, and I could just focus on making great content.
When I was done, I used the camera tool in Tilt Brush to snap some screenshots of everything I’d made so I could reference them when actually creating the instruments in Unity. I also located the saved sketch and put it on my flash drive so I could share it with others at the office the next day. I couldn’t wait to impart my newfound knowledge to the rest of my colleagues.
From this experience of trying to design for VR, I took away a few lessons. First, scale on a screen is completely different from scale in VR. Things typically end up much bigger in the virtual world, and they require a lot more detail, since users can get eyeball-close to anything just by leaning in. Second, designs that have depth in 2D can still end up feeling very flat in VR. You really need to utilize the space around the user to make a design interesting and take advantage of the 360 degrees of freedom that VR provides. Finally, and most importantly, to have a seamless and unfettered creation pipeline, you need to be designing for VR in VR. Tools to do this can’t come soon enough, and I can only hope that the projects that allow Unreal and Unity to be used from inside VR get the love and attention they need.
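That second lesson - depth getting flattened - has a physical basis: the stereo cue separating two layers is the difference in binocular disparity between them, and disparity falls off with distance. A quick back-of-the-envelope estimate (my own illustration, using a small-angle approximation and an assumed ~63 mm interpupillary distance) shows why layered UI that reads as deep at arm’s length reads as flat a few meters away:

```python
import math

IPD = 0.063  # assumed typical interpupillary distance, in meters

def disparity_deg(distance):
    """Approximate binocular disparity angle, in degrees, for a
    point at `distance` meters (small-angle approximation IPD/d)."""
    return math.degrees(IPD / distance)

def layer_separation_deg(near, far):
    """Disparity difference between two UI layers - the stereo cue
    that makes the gap between them visible."""
    return disparity_deg(near) - disparity_deg(far)

# A 20 cm gap between layers at arm's length is a strong cue...
print(layer_separation_deg(0.6, 0.8))
# ...while the same 20 cm gap at 3 m produces a far smaller one.
print(layer_separation_deg(3.0, 3.2))
```

The same 20 cm of spacing yields roughly twenty times less stereo signal at 3 m than at 0.6 m, which is one reason spreading layers apart in Photoshop terms isn’t enough - you have to place them at distances where the eye can actually resolve the separation.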