A Week at the Hololens Academy

Ahhhhh, the Hololens. I finally get to check it off my list. When I'd express my disappointment at not having tried it to friends and co-workers interested in VR, it was kinda like talking about going to Hawaii. "Ohhhh, you haven't been? You really should, it's an enjoyable experience." (Said, of course, with a knowing smirk and possibly a wink.)

There's a good reason for that knowing wink. It's a massively cool device, and despite now being publicly available to early adopters, there's a waiting list and it's $3k. Someone mentioned to me that they are in the "5th Wave" of the wait list. So, right now, it's hard to get your hands on one. And that's IF you're willing to shell out the money.

Should you buy it if you get the chance? Maybe. For me, there are lots of parallels to Google Glass from a few years ago, but also lots of reasons it might break free from technological oddity into the mainstream.

In terms of sheer hardware impressiveness, hell yes it's worth $3k. Though it can be tethered via USB for the purposes of big deployments of your project, it's completely wireless and independent. The computer that runs it is built right into the device. It packs Wi-Fi, 64GB of storage, a camera (both RGB and depth), and other sensors for headtracking (probably an accelerometer and gyroscope). Even the casing of the device is impressive. It looks slick, true, but the rotatable, expandable band that makes every effort to custom fit your head is practically perfect. I didn't put it on my head completely correctly at first, and the display was resting on my nose a bit, which would have been uncomfortable after a while. Turns out, if you balance it on your head correctly, it barely touches your nose and almost floats on your face.

Compare the hardware to something like the Oculus Rift or the HTC Vive, which are just displays you tether to your own computer (and aren't augmented reality). They run $600-800, plus at least a $1k desktop computer. I can't recall who, but someone with me made the almost cruel observation that an NVIDIA GTX 970 graphics card alone is comparable in size to the entire Hololens headset.

[Image: an NVIDIA graphics card next to the Hololens]

The display is another massively cool hardware piece and makes the entire system come together as one. It has its problems, which I'll get into (cough cough, field of view), but I'll talk about that in a second when I get to usability. And make no mistake… usability is why you should or should not run right out and purchase one of these devices. The Hololens isn't so much a tool as it is an experience. It's not a hammer and nail. It's more of a workbench. A beautiful workbench can be amazing, but if you can't open the drawer to get to your hammer and nails when you want to create something, it's worthless.

Training at Microsoft HQ

Awful analogies aside, and usability aside, let me say a quick word about the training. Microsoft calls it "The Hololens Academy". It occurs to me just now that this might be a thinly veiled Star Trek reference. In fact, ALL of the training assets were space-themed, from a floating astronaut, to a virtual futuristic tabletop projector, to a mid-air representation of our Solar System.

My company, Adobe, was kind enough to send me, last minute, to Redmond to do some learning. I honestly didn't know what to expect because it was so last minute. Was it super secret stuff? No… but considering I hadn't seen the not-secret stuff yet, it really didn't make too much difference. In fact, it was SO not secret that our class followed along with well-developed training material that MS has published online.

In fact, in a testament to how well developed it is… I was weirded out a bit on the first day, to be honest. It had that theme park feel. Or that historical city tour feel. You know, where every word and joke your guide says is rehearsed and feels forced? But I got over that real fast, and you know why? Because the sessions went like clockwork. The instructors kept exact time to an eerie degree, and the assistants WERE psychic. Virtually every time I had trouble, an instructor was behind me within a few seconds helping me out. I didn't raise my hand, look confused, nothing. And there wasn't a single time when I felt like they were annoyingly hovering. They just showed up out of the blue being insanely helpful.

The room itself was laid out extremely well for training. An open workspace with large-screen TVs on the walls facing every which way, with the instructor in the center on a headset, made for a great training space. The instructor didn't even drive the software. He or she (they changed out every 3 hours) would have someone else driving the presentation machine while they spoke. This kind of coordination takes practice, no doubt.

The walls and tables were decorated for the event too, along with coffee tables specifically for placing your virtual assets (holograms) on. The room is probably a permanent fixture specifically for this.

This all means one thing to me. We've got publicly available training materials with tons of care put into creating them, extremely well staffed and smart trainers, and a training room just for the Hololens. Add to this the hundreds of engineers working on the Hololens and the fact that MS is just now offering developer support for it, and the message is loud and clear: Microsoft is placing a HUGE bet on the Hololens. They aren't half-assing this like a lot of companies in their position might for a product that is so different and whose adoption is so hard to predict.

Training style aside, I found another thing extremely interesting about the training: it's all about Unity.

Authoring with Unity

Unity seems like kind of an underdog at the moment. It's essentially a 3D authoring environment/player. It doesn't have nearly the reach of something like Flash or QuickTime, each of which has been ubiquitous at one point or another. Yet it's a favorite of 3D creators (designers and devs) who want to easily make interactive 3D experiences. The reach of Unity alone (browser plugin, WebGL, Android, iOS, desktop application, Oculus, Vive, Gear, and now Hololens, among others) positions it to be THE tool for creating VR/AR/mixed-reality content.

I was naive not to expect MS to use Unity for experience creation. But the fact is, it's one of the ONLY tools for easy interactive 3D scene creation. I honestly expected Microsoft to push us into code-only experience creation. Instead, they steered us into a combo of 3D scene building with Unity and code editing (C#) with Visual Studio. To be honest, I'm a little resistant to Unity. It's not that it isn't an excellent tool, but I've gone through too many authoring tools that have fallen out of favor. This training is a wakeup call, though. If Oculus, Gear, and the HTC Vive weren't enough to knock me over the head, then a major company like MS (which has a great history of building dev tools) adopting a third-party tool like this… well, consider me knocked over the head and kicked in the shins.

The exercises themselves were a mix of wiring things up in Unity and copying/pasting/pretending to code in Visual Studio. It's a hard thing to build a course around, especially when offering it to everyone with no prerequisites, but MS certainly did a good job. I struggled a bit with C# syntax, not having used it in years, but easily fell back to the published online material when I couldn't get something.
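
To give a sense of scale, most of what we pasted amounted to bite-sized scripts along these lines. This is a minimal sketch of my own in the same spirit, not the actual course code; the class name and speed value are made up:

```csharp
using UnityEngine;

// A MonoBehaviour you attach to a hologram in the Unity scene.
// Unity calls Update() once per frame on every attached script.
public class Spinner : MonoBehaviour
{
    // Degrees per second; editable in the Unity Inspector once attached.
    public float speed = 30f;

    void Update()
    {
        // Rotate the hologram around its vertical axis, frame-rate independent.
        transform.Rotate(Vector3.up, speed * Time.deltaTime);
    }
}
```

The Unity half of each exercise was mostly dragging scripts like this onto objects and wiring up properties in the Inspector.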


Usability and VR/AR Comparisons

OK, so the Hololens has the sweet, sweet hardware. It has the training and developer support. All good, right? Well, no, there's another huge consideration. The hugest consideration of all: how usable is it, and what can end users do with it?

You might guess that what end users do with it is up to you as a developer, and that's partially right. Everything has limitations that enable or inhibit potential. Here's the thing, though: take the iPhone or iPad, for example. When it came out, it WAS groundbreaking. But it wasn't SO different that you had to experience it to imagine what it could do. Steve Jobs could simply show you a picture of it. Yep, it had a screen. Jobs could show you interaction through a video: yep, you can swipe and tap and stuff. People were imaginative enough to put 2 and 2 together and picture the types of things you could do, despite never having used the device. Sure, people are doing amazing things with touch devices that would never have been imagined without using them, but you can certainly get the gist of the simplest interactions by seeing them used, without using the device yourself.

VR is somewhat harder to pin down, but again, it's somewhat easy to imagine. The promise is that you are thrown into another world. With VR, your imagination can certainly get ahead of itself. You might believe, without donning a headset, that you can be teleported to another world and feel like you're there.

Well, yes and no, and it's all due to current limitations. VR can have a bit of a screen door effect, meaning that if you focus hard enough, you feel like you're in front of a screen. With VR, you are currently body-less. When you look down, you'll probably see no body and no hands, or even if it's a great experience, it won't look like YOUR body. This is a bit of a disconcerting experience. Also, you DEFINITELY feel like you're wearing a headset. So yes… with VR, you ARE transported to a different and immersive space; however, you need to suspend disbelief a bit (as amazing as it is).

AR is similar, but a little worse. I can only comment on the Hololens, but it's not the magical mixed-reality fairy tale you might be led to believe. Even worse, MS's published videos and photos show the user completely immersed in holograms. I can't really fault them for this, because how do you sell and show a device like this that really must be worn to be experienced?


Field of View and other Visual Oddities

The biggest roadblock to achieving this vision is field of view. From what I've heard, it's the single biggest complaint about the Hololens. I heard this going in, and it was in the back of my head before I put the device on, but it took me an embarrassingly long time to realize what was happening. A limited field of view means that the virtual objects, or Holograms, only take up a limited portion of the "screen". Obviously. But in practice, this looks totally weird, especially without some design trick to sweep it under the rug and integrate the limitation into the experience.

When you start viewing a 3D scene, if things are far away, they look fantastic! Well integrated with your environment and even interacting with it. Get closer, though, and things start falling out of your field of view. It's as if you're holding a mobile screen up fairly close to your face, but the screen has no edges and doesn't require your hand to hold it up. Well, what happens to things off screen? They simply disappear, or worse, they're partially on screen but clipped at the window's edge.

I took this image from a winbeta.com article about the field of view (their take on it is worth a read), but for our purposes right now, here's a great approximation of what you would see:

[Image: an approximation of the Hololens field of view, via WinBeta]

People also use peripheral vision to find things in a large space, but unfortunately in this scenario you have no periphery, so it can be hard to get a good understanding of the space you're in right away.

There are a couple of other visual limitations that make your holograms a bit less believable. For one, you can certainly see your headset. The best way to describe it is that it's about as noticeable as wearing sunglasses and a baseball cap (though the Hololens certainly doesn't protrude as far as a cap rim). You can also see the tinted projection area and some of the contours of that area in your periphery. It's easy to ignore to an extent, but definitely still there. Also, you can see through the Holograms for sure. They're pretty darn opaque, but they come across as a layer at maybe 90% opacity.

Another point is that in all the demo materials, if you get suspiciously close to an object, it starts disappearing, clipped away before your eyes. This is directly due to a camera setting in Unity: the near clip plane. You can certainly decrease this value; however, even the lowest setting is still a bit far out, and things still get clipped. And even then, the Hololens makes you go a bit cross-eyed at something so close. You might say this is unfair because it's simply a casualty of 3D scenes. To that, I say check out the Oculus Rift Dream Deck and use the cartoony city demo. You can put your head right up next to a virtual object, EXTREMELY close, and just feel like you could touch it with your cheek.
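
For the curious, here's the setting in question, adjusted from script rather than the Inspector. A minimal sketch; the 0.1 value is purely illustrative, not a recommended setting:

```csharp
using UnityEngine;

// Pull the main camera's near clip plane closer, so holograms can get
// nearer to your face before they are clipped away.
public class NearClipTweak : MonoBehaviour
{
    void Start()
    {
        // Unity's default is 0.3 units (~30 cm at real-world scale).
        Camera.main.nearClipPlane = 0.1f;
    }
}
```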

Lastly, overhead lights can cause some light separation and occasionally push rainbow streaks through your view, especially on bright white objects like the Unity splash screen. On this point, I can directly compare it to the flare around bright white objects on the Oculus Rift caused by longer eyelashes.

For these reasons, I don't think the Hololens can yet be considered an immersive device the way VR is. VR is really good at transporting you to a different place. I thought the Hololens would be similar in that it would convincingly augment your real world. But it doesn't, for me. It's not believable. And that's why, for now (at least the next 10-15 years), I'm convinced that AR is NOT the next generation after VR. They will happily live together.

If VR is the immersion vehicle, something that transports you, then what's AR? Or more specifically, the Hololens? Well, just because something isn't immersive doesn't mean it can't be incredibly useful. And I think that's where the Hololens lies for the near term. It's a productivity tool. I'm not sure games or storytelling or anything like that will catch on with the hardware as it is now (as cool as the demos are) until the immersion factor improves. No, I think it can extend your physical screen and digital world to an exceptional degree. Creating art, making music, even just reviewing documents can all be augmented. Your creation or productivity process doesn't have to be immersive, just the content you create.

I think this point is where AR really shines over VR. In VR, we’re clumsily bringing our physical world into the virtual world so we can assist in creation using things modeled after both our real tools and 2D GUI tools. And usually this doesn’t work out. We have to remove our headset constantly to properly do a job. With AR, the physical world is already there. Do you have a task that needs to be done on your computer or tablet? Don’t even worry about removing your Hololens. Interact with both simultaneously…whatever. In fact, I think one HUGE area for the Hololens to venture into is the creation of immersive VR content itself. One for the immersive, one for the productive.

That's not to say I don't think casual consumers or others will eventually adopt it. It certainly could be useful for training, as an aid in hands-free industrial work, or anything that augments your world but doesn't require suspension of disbelief.


Spatial Awareness

Hololens immersion isn’t all doom and gloom though. Spatial awareness is, in fact, AMAZING. The 3D sensor is constantly scanning your environment and mapping everything as a (not fantastically accurate but damn good) mesh. Since it uses infrared light like the Kinect to sense depth, it does have its limitations. It can’t see too far away, nor super close. The sun’s infrared light can also flood the sensor leaving it blind. One fun fact that I’ve learned is that leather seems to not reflect the light too well, so leather couches are completely invisible!

We did a really simple demo of spatial mapping. We lined the real walls with a custom texture of blue lines, and it looked amazing. My Adobe colleague decided to make the lines flash and animate, which was super mesmerizing. Unfortunately, I didn't find the mixed reality video capture feature until after that, so here's a nice demo I found on YouTube of something similar (though a bit more exceptional and interactive).

As scattered IR light tends to be sort of…well…scattered, meshes certainly don’t scan perfectly. That’s fine because MS has built some pre-packaged DLLs for smoothing the meshes out to flat planes and even offers advice on wall, ceiling, floor, and table finding.

Of course, once you've found the floor or surfaces to interact with, you can place objects, introduce physics to make your Holograms interact with real surfaces (thanks, Unity, for simple collision and rigid bodies!), and even have your Holograms hidden behind real things. The trainers seemed most eager to show us punching holes in real objects like walls and tables to reveal incredible and expansive virtual worlds underneath. Again… though… the incredible and expansive can't be immersive with the field of view the way it is.
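
The Unity side of that physics trick is surprisingly small. Here's a minimal sketch (my own names, not the course material), assuming the spatial mapping mesh has had colliders attached to it:

```csharp
using UnityEngine;

// Give a hologram a physical shape and a rigid body, and gravity will
// drop it onto whatever real-world surfaces have been scanned into the
// spatial mapping mesh (provided that mesh has colliders too).
public class DropOntoRealWorld : MonoBehaviour
{
    void Start()
    {
        gameObject.AddComponent<SphereCollider>(); // the hologram's physical shape
        gameObject.AddComponent<Rigidbody>();      // opts it into the physics simulation
    }
}
```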

Here's a good time to show our group lobbing pellets at each other and hitting our real-world bodies. The hole at the end SHOULD have been on the table, but I somehow screwed up the transformation of the 3D object in Unity, so it didn't appear in the right spot. It does show some great spatial mapping, avatars that followed us around, and punching a hole through reality!


Spatial Audio

Spatial audio is another thing I'm on the fence about. It's a bit weird on the Hololens. I give Microsoft major props for making the audio hardware AUGMENTED but not immersive. In VR systems, especially the Oculus Rift, you'd likely have over-the-ear headphones. Simple spatial audio (as opposed to crazy-advanced rocket-science spatial audio) is limited to your horizontal plane. Meaning, it matches your home stereo: maybe a few front sources (left, right, and center) and a couple of back sources on your left and right. With these sources, you fade the audio between them and get some pretty awesome positional sound.
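
If you're curious what that fading boils down to, here's a minimal sketch of my own (plain C#, not Hololens or Unity code): an equal-power pan between a left and a right source based on the sound's angle from the listener's heading.

```csharp
using System;

static class SimplePanner
{
    // angleDegrees: where the sound sits relative to where you face;
    // -90 is hard left, +90 is hard right, 0 is dead center.
    public static (float left, float right) Gains(float angleDegrees)
    {
        // Map the angle to a 0..1 pan position...
        double clamped = Math.Max(-90.0, Math.Min(90.0, angleDegrees));
        double pan = (clamped + 90.0) / 180.0;

        // ...then use an equal-power (cosine/sine) curve so overall
        // loudness stays roughly constant as the sound moves around.
        return ((float)Math.Cos(pan * Math.PI / 2.0),
                (float)Math.Sin(pan * Math.PI / 2.0));
    }
}
```

Notice there's no vertical term anywhere in that math; height is exactly what this kind of panning can't convey, which matters in a moment.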

On the Hololens, however, the hardware speakers are positioned above your ears on the headband. They aren't covering your ears like headphones.


So yes, you can hear the real world as easily as you could without the headband on, but the speakers being positioned above your ears makes the audio sound like it's always coming from above. One of our exercises included a Hologram astronaut. You'd click on the astronaut and he'd disappear, but he'd keep talking to you, and you were supposed to find him. Everyone near me, myself included, kept looking up to find him, but he was never up high; I'm sure this is a direct result of the Hololens speaker placement. I asked the instructor about positional audio that included vertical orientation as well, and he said it was computationally hard. I know there are some cool solutions for VR (very mathy), but I'm skeptical on the Hololens. The instructors did say to make sure that objects you'd expect higher up (like birds) appear higher up in your world. I personally think this was a design cop-out to work around the hardware.


Input

The last thing I want to cover is input. Frankly, I'm disappointed with EVERYONE here (except for the HTC Vive). It seems mighty trendy for AR and VR headsets to make everyone do gaze input, but I hate it and it needs to die. The Hololens is no exception; gaze is baked into all the training material and all of the OS interactions. Same goes for casual interactions on the Oculus Rift (gaming interactions use an XBOX controller, still dumb IMO) and Google Cardboard. The HTC Vive has touch controllers, and soon the Oculus Rift will too. Google Cardboard will soon be supplanted by Daydream, which features a more expressive controller (though not a positional one). I've heard the Hololens might get some kind of pointer like Daydream's, but I've only heard that offhand.

Gaze input is simply using the direction of your eyes to control a cursor on screen. Actually, it's not even your eyes, since your eyes can look around… gaze input really uses the center of your forehead as a cursor. The experience feels super rigid to me; I'd really prefer something more natural that lets you point at something you aren't looking at. With the Oculus Rift, despite having gaze input, you also have a remote control. So to interact with something, you gaze at it and click the remote.
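
Under the hood, a gaze cursor is usually just a raycast straight out of the head-tracked camera. A minimal Unity sketch of the pattern (my own illustration, not SDK code):

```csharp
using UnityEngine;

// Cast a ray out of the user's head and park a cursor object
// wherever it hits the scene.
public class GazeCursor : MonoBehaviour
{
    public Transform cursor; // a small quad or sphere acting as the dot

    void Update()
    {
        var head = Camera.main.transform;
        if (Physics.Raycast(head.position, head.forward, out RaycastHit hit))
        {
            cursor.position = hit.point;                            // sit on the surface
            cursor.rotation = Quaternion.LookRotation(hit.normal);  // face the user
        }
    }
}
```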

The Hololens, on the other hand… well, it SEEMS cool, but it's a bit clunky. You're supposed to make an L with your thumb and index finger and then drop the index finger in front of you (don't bend your finger, or it may not recognize the action). You also have to do this in front of the 3D sensor, which doesn't sound bad, but it would be way more comfortable to do it casually at your side or with your hand pointed down. To be fair, spoken keywords like "select" can be used instead. We also played with exercises that tracked your hand's position to move and rotate a Hologram. All the same, I really think AR/VR requires something more expressive, more tactile, and less clunky for input.
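
The spoken "select" fallback, at least, is easy to wire up with Unity's built-in speech support on Windows. A minimal sketch, with the keyword and the logged action as my own stand-ins for what the exercises did:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Listen for the word "select" and act on it, as an alternative
// to the air-tap gesture.
public class SelectByVoice : MonoBehaviour
{
    KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "select" });
        recognizer.OnPhraseRecognized += args =>
        {
            // In a real app, you'd act on whatever the gaze cursor is over.
            Debug.Log("Heard: " + args.text);
        };
        recognizer.Start();
    }

    void OnDestroy()
    {
        recognizer.Dispose();
    }
}
```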


Conclusion

All that said, the Hololens is an amazing device with enormous potential. Given that Microsoft's CEO calls it a "5 year journey", what we have right now is really a developer preview of the device. For hardware, software, and support that feel so polished despite the interaction roadblocks, it will most likely be amazing what consumers get in their hands 5 years from now. So should you shove wads of cash at MS to get a device? Well, me… I'm excited about what's to come, but I see more potential for VR growth right now. I'm interested not just in new interaction patterns with AR/VR, but also in exploring how immersiveness makes you feel and react to your surroundings. The Hololens just doesn't feel immersive yet. Additionally, it seems like the AR/VR community is really converging on the same tools, so lessons learned in VR can be easily translated to AR (adjusting for the real-world aspect). The trainers made sure to point this out: the experiences you build with Unity should be easily built for other platforms. It will also be interesting to see where Google takes Tango (AR without the head-mounted display) in the next 5 years, and whether they pair it with their Daydream project.

In the end, it's all about use cases and ideas and making prototypes. If a killer idea comes along that makes sound business sense and specifically requires AR, the Hololens is pretty much the only game in town right now, so if that happens, I'll be sure to run out and (try to) get one. But in terms of adopting the Hololens because of perceived inevitability and coolness factor? I might wait.

But if you don't own any AR/VR devices, can't wait to put something in the Windows Store, can live with the limitations, and are already an MS junkie, maybe the Hololens is for you!

I’d like to give a big thanks to Microsoft for having us in their HQ and having such fantastic trainers and training material. I expect big things from this platform, and their level of commitment to developers for such a new paradigm is practically unheard of.

The Oculus Rift Dream Deck

Like I said in my last post, the "Dream Deck" is one of several free pieces of content to help you discover what VR is like. The deck itself is a handful of mini experiences that each last a few minutes and then fade to black, inviting you to experience the next thing.

I also described the input mechanism here as a bit archaic. In fact, through much of the simpler and/or more utilitarian UI (like the video player), the "head cursor" is how it works for now: you look at the button or item you want to select, aligning a small, subtle dot over it, and then use the Oculus Remote to click on the item.

[Image: the Dream Deck]

I want to talk about a few of my favorite experiences in the deck. Before that, I'll say that they're all varying degrees of nifty when you experience them. And before my favorites, I want to mention a couple of other notable experiences. One simply has you standing in front of a very alien-looking alien. Another puts you in the middle of a couple of robotic arms doing all sorts of loud and frantic mischief. These two experiences, while not the best of the breed for me, DO explore a concept unique to VR: making you uncomfortable without necessarily doing anything expressly so.

With both the alien and the robotic arms, you experience a closeness to something whose behavior you can't anticipate. You can imagine seeing both in the real world, but having them before you in VR is something new. Both are extremely close to your virtual presence. The robot's frantic activity and the alien's lack of activity push your personal-space boundaries in different ways; both end up making you (or me, at least) a bit uncomfortable.

Night at the Museum (with a Dinosaur)

Speaking of uncomfortable: this museum setting feels a bit eerie, but also a bit curious. I found myself wanting to look around to see the environment. That lasts for about a second. Then, of course, a T-Rex wanders in.

[Image: the T-Rex in the museum]

It's a bit interesting how differently folks experience this one. Some of my friends got frightened of the dinosaur and wanted to remove the headset. For me, from a ways back, it was a tiny bit dread-inducing, but I could easily write it off as something that didn't exist, and so it didn't scare me in the least as the experience went on.

I felt more of the same as the T-Rex charged at me. Again, it wasn't really scary, just minorly uncomfortable. By the time the dino roared over my head, it was no longer scary at all for me. I found it a bit fascinating and looked it up and down to soak it all in. I think that's partly because the T-Rex has officially done all it can do, and you realize it. There's no anticipation anymore.

[Image: the T-Rex]

Up High

My second favorite experience was one where you are placed high above a city on an unguarded steel ledge. Lest you think I was putting on a brave face writing about the T-Rex, THIS experience was very uncomfortable and fear-inducing for me.

[Image: the ledge high above the city]

YES, after several tries, it really no longer affects me. But those first few times DO NOT feel good. I had a similar experience playing a mini golf game on the Vive (where you can be super high up on a course with no visible railings that would virtually prevent you from virtually falling). It's quite interesting to me that this scares me, but a dinosaur does not. While both are virtual, the dinosaur seems easier to reason away. The experience scared most everyone equally, with the exception of my wife, who I still think lacks a reasonable fear of heights. She was excited to see what would happen if you jumped off, and was disappointed that there was no mechanism to do so.

Either way, it's fascinating to explore this concept of comfort in VR and the ways it can be invaded or not. Even the Oculus store rates its content with comfort levels, the most extreme being "intense".

As much as I enjoyed exploring this concept, my favorite experience was unlike all the others and was not uncomfortable in the least.

Paper Village

My favorite experience was what I can only call… a "Paper Village". It's a very stylized, non-realistic, cartoony/paperish town floating in the sky. Various animations play, from traffic, to airplanes, to a cute little UFO that beams Paper Village citizens up.

[Image: Paper Village]

There are a few reasons I like it so much. First, it's very self-contained. You're looking inward on a tiny world instead of outward at your new virtual environment. To me, this turns some of the assumptions you'd make about VR on their head a bit. You're not in it; you're surrounding it. It's a bit godlike in that way, and a bit silly.

Adding to the silliness is the whole non-photorealistic nature of it. As with the history of computer graphics to date, nothing is perfectly convincing. Realtime VR, as cool as it is, will be the worst at this (behind 3D games, behind rendered movies). Not even attempting photorealism and instead creating a cartoonish world really encourages your suspension of disbelief and creates something even more fantastic.

I think this suspension of disbelief makes you want to peer in every window and explore…


This suspension of disbelief and this somewhat fantastical environment engage me in a way I can't really explain. On top of that, it makes me do something that's somewhat embarrassing, because I do it every time I'm here even though I fully know the outcome. That something is to reach out and try to touch it. Even though I know it's fake, and even though I can't even see my hands.

It's freaky.

This is a good place to end things in this post, because my next post takes this cartoonish experience and explores it in the first piece of VR content I bought.


Hands on with the Oculus Rift

Virtual Reality is very fascinating to me. For one, it's both a brand-new technology, given the current state of hardware that makes it possible, AND old as heck. I remember playing "Dactyl Nightmare" back before I graduated high school 20 years ago.

It's now 2016, and the difference between then and now is that we seem poised to get VR into the hands of everyone: high-end VR experiences because of powerful graphics cards and computers, and lower-end VR experiences because everyone has a cellphone and Google Cardboard is virtually free.

Besides the coolness factor, I'm interested in VR because early adopters and developers have the opportunity to invent a medium. More specifically, user experience nerds like myself have the opportunity to re-invent many user interface paradigms. We can even question the whole notion of traditional user interfaces in general.

Before we get deep into using VR, I want to cover the ways consumers can experience it right now. There are a few.

The HTC Vive is one, and it offers a pretty stellar experience. Its inclusion of hand tracking allows users to reach out and touch virtual objects. You may be surprised at how much not seeing your own body detracts from the experience. When you look down and don't see your chest and feet, or reach out to touch something and don't see your hands, it feels a bit uneasy.

The Vive also offers a bigger area to roam around in. There's quite a bit to set up here, but once you put the positional trackers in the corners of your room, VR experiences can track your body moving through a wide area, giving you more freedom to virtually move around.

Additionally (and apparently not well advertised, because I only just learned it from a co-worker), the Vive has a front-facing camera, which can create a form of Augmented Reality (although future AR promises to be way cooler than having to look through a camera to see the real world).

Other than the Vive, the major players are the Oculus Rift, the Gear VR, and Google Cardboard. I won't get into the latter two, because I unfortunately haven't had the pleasure of using the Gear, and the Cardboard… well, it's nifty, not the best, and you can easily use it today at really no cost if you have a smartphone. I highly recommend trying it, but don't think for a second that Cardboard is as good as VR gets.

I was pretty psyched for VR back when the Oculus Rift was a Kickstarter campaign, before Facebook bought them. So, I backed both the development kits (the DK1 and DK2). Imagine my surprise when I found out that I’d get the Consumer version (CV1) for free!

Well, it's been shipped, delivered, and set up at my apartment. One major problem I had was that my slim Yoga 2 Windows laptop didn't even come close to the minimum specs noted for the Rift. So my first task was to build a desktop gaming computer for myself. I spent around $820 to build it out, meeting (not really exceeding) their specs. I'll put my parts list at the end of this post. This is all on top of the $600 most people will have to pay for the Rift itself. Unfortunately, it's certainly not a casual purchase yet. The price, coupled with the fact that people won't see the value without being able to experience it first, means that adoption might be a bit difficult.

So, with a fresh pair of eyes, and before we get deep into specific games and experiences, let me show you what you get when you buy the Oculus Rift CV1.

Hardware

First, the headset:

[Photo: the Oculus Rift headset]

The headset is the main piece of equipment you'd expect for VR. There are a few more pieces I'll get into after talking about the headset. I don't want to get too far into specs and numbers that you can look up elsewhere, but I do want to mention a few things.

First, the screen door effect. It doesn't seem to exist. That's awesome! The so-called screen door effect can be seen in older versions of the Rift. In the DK1 and DK2, you could see the black grid between the pixels because the screen was so close to your face. It was almost as if you were looking through a screen door to see VR. To be fair, some have said you can still see the effect on the new Rift if you squint hard enough, but honestly, I think the Rift's new resolution is pretty awesome.

Another critique of the older Rift was that headtracking could be slow. When your head movement doesn't match the movement of the 3D scene, it can be disconcerting, even if it's off by a minuscule amount. It commonly leads to feeling a bit nauseous. To be honest, I never really had that problem with the DK2. It felt fine for the most part, and when it didn't, it was most likely a subpar application. With the new Rift, headtracking seems perfect to me, in every experience. This may be due to Oculus firming up their SDK for release: when everyone runs the same code and everyone uses the same head tracking, it's probably more consistent.

Another nice attention to detail is the head sensor on the Rift. When you put the Rift on, it turns on automatically (and likewise turns off when you take it off). Here's an inside shot of the Rift; the sensor is centered above the lenses.

[Photo: inside the Rift, with the sensor centered above the lenses]

The new Rift for me has all the promise of the old Rift, and none of the more prominent pitfalls. Of course that’s a big statement. As we get more used to VR and what it offers we’ll most likely see problems with any given platform that we never anticipated – and “no pitfalls” will turn into “many pitfalls”.

One of those pitfalls on the horizon is input. Seeing and experiencing VR is great and all, but we need to interact with our virtual world. How we do this is a main differentiating feature among the major players. The Vive's world is room-scale, which means you can roam around a pre-defined space much larger than the not-much-more-than-standing space the Rift affords you. The Vive also offers hand tracking, which means you can ever so clumsily use your real hands (each gripping a controller) in the virtual world.

The Rift is rumored to offer both of these features soon, but what does it have now? Well, here it is… all the extra stuff shipping with it (OK, fine, it comes with a lens cleaner and a handy carrying case too, not pictured):

Body Tracking

[Photo: the Rift's tracking sensor]

So, you'd mount this little guy on your desk or somewhere reasonably high and point it at where you intend to use your Rift. It's camera-ish… meaning I don't know if it's technically a real RGB camera, whether it uses depth sensing, if it's simple infrared, or really what it is. But it certainly does track your body movement well. This means that when you lean or move around in your small space, your movements are reflected in the virtual world. It sounds like a small thing, but the older Rift didn't have it, and your ability to move felt very rigid and limited.

Input Devices (for now)

[Photo: the two bundled input devices: the Oculus Remote and an XBOX controller]

Two input devices ship with the Rift. The controller is what it appears to be: an XBOX controller. It almost feels like a copout! But I guess until we get imaginative enough with our VR experiences (like the Vive is starting to do), we'll have to settle for treating VR content like our old-school consoles for now.

Pictured to the left is a simpler controller. It's most likely powered by a watch battery, and I can't really tell how it connects; I guess it's probably Bluetooth? Either way, it's a simple remote. Most of the experiences I've seen (that don't rely on the XBOX controller) don't use more than the central button that you press with your thumb.

And that brings us to actually using the thing. What kinds of experiences does it offer? Well, for absolute starters, you can check out the "Dream Deck". These are short example experiences that give you a good feel for what's possible. To navigate through them, you basically point your face (with a dot for a cursor) at the menu item you want and press the main button on your clicker.

Sound archaic? Yeah, it kind of is – but again: room to improve and invent! This post is getting a bit long, so I’ll cover the Dream Deck in another one.

Also, as promised, here's my parts list for the Windows desktop I built to accommodate this. My disclaimer is that I've never really cared to build a gaming machine, so I don't know how mine compares to any other gamer's machine. I also know that the parts I bought meet the minimum of what Oculus recommends. I ALSO bought them as a package from newegg.com, because buying separate components that may or may not work together stressed me out.

Case:
Fractal Design Core 1000 USB 3.0 Cases FD-CA-CORE-1000-USB3-BL

Processor:
Intel Core i5-6500 6M Skylake Quad-Core 3.2 GHz LGA 1151 65W BX80662I56500 Desktop Processor Intel HD Graphics 530

Optical Drive:
LITE-ON DVD Burner SATA Model iHAS124-14 – OEM

Motherboard:
GIGABYTE GA-H110M-A (rev. 1.0) LGA 1151 Intel H110 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard

RAM:
CORSAIR ValueSelect 8GB 288-Pin DDR4 SDRAM DDR4 2133 (PC4 17000) Desktop Memory Model CMV8GX4M1A2133C15

OS:
Microsoft Windows 10 Home – 64-bit – OEM

Power Supply:
EVGA 100-W1-500-KR 500W ATX12V / EPS12V 80 PLUS Certified Active PFC Continuous Power Supply Intel 4th Gen CPU …

SSD:
ADATA Premier SP550 2.5″ 240GB SATA III TLC Internal Solid State Drive (SSD) ASP550SS3-240GM-C

GPU:
ZOTAC GeForce GTX 970 4GB