Back From VRLA

I believe it was during a session called “Shooting VR for Post” that I found myself identifying heavily with one of the panelists who said something to the effect of “Before VR, my work was a bit mundane. We’d take a look at a shot we needed to do in a meeting, and we wouldn’t even have to talk, we’d instantly know what our roles were and break to get down to work. With VR now, it’s not that easy, we need to knock our heads against the wall and really come up with ways to get the job done.”

As a web developer, I share this sentiment completely. The speaker expanded on it with an example: when Houdini comes out with a new node (I can only vaguely guess what this means), there's a level of excitement, but it's short-lived. I feel similarly when a new Web API or Node.js-based front-end workflow enhancement comes out, or a new framework is released. It changes our workflow in a nifty way, but it doesn't necessarily change the work we create in a meaningful way.

It's a big sentiment, and I feel it's absolutely monumental that I happen to share it, about the same new technology, with a cinematographer…someone I might never even speak to in a professional capacity. I also seem to share this sentiment with sound engineers, game developers, VFX artists, hardware manufacturers, and more. I even had a fascinating conversation with a game developer/designer/cognitive psychologist about whether depth is registered in your hypothalamus or your visual cortex.

I’m silo-ing people a bit here because the more curious amongst us (including myself) have always enjoyed exploring the fringes of our craft. It’s not necessarily true that I wouldn’t talk to a cinematographer as a web developer, but it’s also not necessarily normal.

The point is that VR is bringing the best minds from all disciplines together and dissolving the fringes between these disciplines. Conferences like VRLA allow the stories of these boundaries breaking down to be told.


This is incredibly important, not only for getting acquainted with what skills are being injected into this new medium and why, but also because nobody knows the right way to do things. When there's no right way to do things, there's no book you can buy, nothing to Google, nothing we can do except hear about fleeting experiences from people who got their hands dirty. We need to hear about their pain and about the opinions they formed creating something new and unique. When we hear lots of such perspectives, we can assemble a big picture, which I'm sure will be shattered by the next VRLA. I'll be waiting to learn about the hypothetical magician a panelist cited as a great collaborator for focusing attention in a 360-degree world.

Also interesting is the regionality of VR creators. I feel like I hear an entirely different story in San Francisco than what I heard at VRLA. When I attend the (admittedly few, so far) meetups around the Bay Area, it's mostly about hardware, platforms, new app ideas, prototypes, and social experiences. In LA, it was overwhelmingly VFX, cinematography, sound design…a very heavy focus on well-produced content. I'm still uncertain about the regionality of game development, perhaps because it's relatively regionless. Though one memorable paraphrased line on that subject was "Game devs are now sitting in the same room as VFX artists and directors."

Perhaps one of the more interesting things I picked up was the different takes from different creators on immersive video. Immersive, or 360, video seems like a mainstay of VR. The cries of it not really being VR have been sufficiently drowned out, with most, if not all, presenters acknowledging the sentiment but disagreeing with it. Andrew Schwarz of Radiant Images, for example, called immersive video the "killer app" of VR. I expected this sentiment, especially in a city with so much film talent.

Andrew Schwarz of Radiant Images showing the new completely modular camera mount (AXA 360 Camera System) for immersive media

What I did not expect was the nuance, verging on disagreement, from Dario Raciti of OMD Zero Code. His point of view seemed to be that the novelty of immersive video has waned. His interest lies in creating marketing campaigns that make brands like Nissan and Gatorade stand out from the rest. Answering my question about what kinds of projects he tries to sell to clients, he flat-out said he tries to discourage pure 360 video. Instead, he prefers a more immersive experience mixed with 360 video.

An excellent example of this was his “Let Hawaii Happen” piece. The user begins on a parachute they can steer and navigate to various islands in Hawaii. Once they’ve landed, it switches to a non-interactive 360 video tour.

I think Dario’s take on advertising with VR is very much worth listening to. His team also created a car-shopping VR experience for Nissan in which the user is seated to get a feel for the interior of the car, much like what you would do car shopping in reality. Outside the windows, however, a much different scene plays out: the viewer is also part of a battle in the Star Wars universe.

That exemplifies Dario's notion of mixing real-time 3D content with immersive video, but it also touches on his point about advertising in general. To liberally paraphrase, Dario feels you should never beat the user over the head with branding. No logos, no mention of the brand unless it's subtle and integrated into the experience. The experience always comes first, and if it's memorable, it will sell the brand.

To me, this speaks to the larger issue of taking concepts we already employ en masse in traditional media and shoe-horning them into VR. Advertisers, I know you're already thinking of this. You want to cut to commercial, put your logo in the bottom third of the screen, and include voiceovers about how your brand is the best. Dario is saying that to create good marketing experiences, you should let the content flow freely and be subtle about your brand. Consumers will respond better. He even cited "Pearl," an Oscar-nominated VR short, as an example of something that could be a commercial with extremely limited changes.

The notion of shoe-horning brings another memorable line to mind. To date, I've been thinking about VR like the jump from desktop to mobile. But the better analogy, from one panelist, was that "VR is like the jump from print to digital." While stubbornness in holding on to the old ways can be detrimental, years of experience coupled with open-mindedness can be a huge asset.

In the Cinematographers' panel, it was mentioned that old 3D tricks, born of limited processing power, are now coming back into fashion. The reason is that game engines like Unreal are coming into favor for real-time previews of scenes. Even traditional film equipment is being recreated in VR to help production. Hearing a cinematographer talk about replicating a camera crane in VR and then shrinking it down, scaling it up, and putting it on a mountaintop, all within a day's shoot, was incredibly interesting.

Shooting VR for Post Panel

The panelists and presenters at VRLA shared so many of their recent, super fascinating experiences and experiments. This was a bit unfortunate for me, because I found myself glued to the presentation rooms and away from the expo floor. I saved my 2-hour lap through the expo hall until the very end. As expected, the lines for the more interesting experiences were either too long or closed. I can't fault VRLA or their exhibitors for this; it seems a standard downside of VR conferences. I would wager that the most popular experience was the Augmented Reality (Hololens) Easter egg hunt. As I didn't experience it, I'll just leave you with a photo, because it looks awesome.

Microsoft Hololens Augmented Reality Easter Egg Hunt

Of course, like Microsoft, a bunch of big vendors were there: Facebook, HTC, Intel. Although I don't own a Vive, HTC's talk of its multi-platform subscription service and wireless headset was exciting. So was hearing how dedicated Intel, HTC, and HP are to VR developers. Yes, Facebook and MS are dedicated to Mixed Reality as well, but for me, that message was well received a while ago, so it's awesome to see the pile-on.

With around 170 exhibitors at VRLA, there were tons of smaller companies showing games, hardware, new experiences, and new creative tools. One notable company, Mindshow (http://mindshow.com), offers creative tools for recording animated characters with your body and voice in real time. Watching from the expo floor, I was a bit disappointed, as it felt too scripted. However, a booth attendant assured me it was that way only for the 10-minute quick demo for conference-goers. It makes sense that you probably wouldn't want to start users with a blank slate if you only have a short window to impress them. So, if Mindshow is what I think it is, I can imagine having a lot of fun with it myself, and I can see many people creating awesome animated content extremely easily…but I've been known to overhype things in my own head.

Though it was my first time, VRLA has been going on for 3 years now, and it has grown exponentially. The conference-going experience was not as seamless as others I've been to. The Friday keynote was delayed by at least 30 minutes because the speaker had no slide notes, which set off a cascade of presentation pushbacks. There were constant audio issues, and the light field talk I was really looking forward to was cancelled with no explanation. This is all forgivable and probably par for the course, given how many people from different disciplines are coming in and bringing their passions and experiences. There's an amazing energy in VR. Organizations and conferences like VRLA focus it. It might not be laserlike as VR grows, but with a medium so young and with so many stories still to be told from creators about their experimentation, everything is appreciated.

A Week at the Hololens Academy

Ahhhhh, the Hololens. I finally get to check it off my list. When I tell my friends and co-workers who are interested in VR that I haven't been able to try it out, it's kinda like talking about going to Hawaii. "Ohhhh, you haven't been? You really should, it's an enjoyable experience." (Said, of course, with a knowing smirk and possibly a wink.)

There's a good reason for that knowing wink. It's a massively cool device, and despite being publicly available now to early adopters, there's a waiting list and it's $3k. Someone mentioned to me that they are in the "5th wave" of the wait list. So, right now, it's hard to get your hands on one. And that's IF you're willing to shell out the money.

Should you buy it if you get the chance? Maybe. For me, there are lots of parallels to Google Glass from a few years ago, but also lots of reasons it might break free from technological oddity into the mainstream.

In terms of sheer hardware impressiveness, hell yes, it's worth $3k. Though it can be tethered via USB for the purposes of big deployments of your project, it's completely wireless and independent. The computer that runs it is built right into the device. It packs WiFi, 64GB of memory, a camera (both RGB and depth), and other sensors for headtracking (probably an accelerometer and gyroscope). Even the casing of the device is impressive. It looks slick, true, but the rotatable, expandable band that makes every effort to custom-fit your head is practically perfect. I didn't put it on my head completely correctly at first, and the display was resting on my nose a bit, which would have been uncomfortable after a while. Turns out, if you balance it on your head correctly, it barely touches your nose and almost floats on your face.

Compare the hardware to something like the Oculus Rift or the HTC Vive, which are just displays you tether to your own computer (and aren't augmented reality). They run $600-800, plus at least a $1k desktop computer. I can't recall who, but someone with me made the almost cruel observation comparing the size of an NVIDIA GTX 970 graphics card to the size of the entire Hololens headset.

The display is another massively cool piece of hardware and makes the entire system come together as one. It has its problems, which I'll get into (cough cough, field of view), but I'll talk about that in a second when I get to usability. And make no mistake: usability is why you should or should not run right out and purchase one of these devices. The Hololens isn't so much a tool as it is an experience. It's not a hammer and nail. It's more of a workbench. A beautiful workbench can be amazing, but if you can't open the drawer to get to your hammer and nails when you want to create something, it's worthless.


Training at Microsoft HQ

Awful analogies and usability aside, let me say a quick word about the training. Microsoft calls it "The Hololens Academy." It occurs to me just now that this might be a thinly veiled Star Trek reference. In fact, ALL of the training assets were space-themed: a floating astronaut, a virtual futuristic tabletop projector, a mid-air representation of our solar system.

My company, Adobe, was kind enough to send me, last minute, to Redmond to do some learning. I honestly didn't know what to expect because it was so last minute. Was it super secret stuff? No…but considering I hadn't seen the not-secret stuff yet, it really didn't make too much difference. In fact, it was SO not secret that our class followed along with well-developed training material that MS has published online.

In fact, in a testament to how well developed it is, I was weirded out a bit on the first day, to be honest. It had that theme-park feel. Or that historical city tour feel. You know, where every word and joke your guide says is rehearsed and feels forced? But I got over that real fast, you know why? Because the sessions went like clockwork. The instructors kept exact time to an eerie degree, and the assistants WERE psychic. Virtually every time I had trouble, an instructor was behind me within a few seconds helping me out. I didn't raise my hand, look confused, nothing. And there wasn't a single time where I felt like they were annoyingly hovering. They just showed up out of the blue, insanely helpful.

The room itself was laid out extremely well for training. An open workspace with large-screen TVs on the walls facing every which way, and the instructor in the center on a headset, made for a great training space. The instructor didn't even drive the software. He or she (they changed out every 3 hours) would have someone else driving the presentation machine while they spoke. This kind of coordination takes practice, no doubt.

The walls and tables were decorated for the event too, including coffee tables specifically for placing your virtual assets (holograms) on. The room is probably a permanent fixture dedicated to this.

This all means one thing to me. We've got publicly available training materials with tons of care put into creating them, extremely well-staffed and smart trainers, and a training room just for the Hololens. Add to this the hundreds of engineers working on Hololens and the fact that MS is just now offering developer support for it, and the message is loud and clear: Microsoft is placing a HUGE bet on the Hololens. They aren't half-assing this like a lot of companies in their position might for a product that is so different and whose adoption is so hard to predict.

Training style aside, I found another thing extremely interesting about the training: it's all about Unity.


Authoring with Unity

Unity seems like kind of an underdog at the moment. It's essentially a 3D authoring environment/player. It doesn't have nearly the reach of something like Flash or QuickTime, each of which has been ubiquitous at one point or another. Yet it's a favorite of 3D creators (designers and devs) who want to easily make 3D interactive experiences. The reach of Unity alone (browser plugin, WebGL, Android, iOS, desktop application, Oculus, Vive, Gear, and now Hololens, among others) puts it right in the middle of being THE tool for creating VR/AR/mixed-reality content.

I was naive not to expect MS to use Unity for experience creation. But the fact is, it's one of the ONLY tools for easy interactive 3D scene creation. I honestly expected Microsoft to push us into code-only experience creation. Instead, they steered us into a combo of 3D scene building with Unity and code editing (C#) with Visual Studio. To be honest, I'm a little resistant to Unity. It's not that it isn't an excellent tool, but I've gone through too many authoring tools that have fallen out of favor. This training is a wakeup call, though. If Oculus, Gear, and the HTC Vive weren't enough to knock me over the head, a major company like MS (which has a great history of building dev tools) adopting a third-party tool like this…well, consider me knocked over the head and kicked in the shins.

The exercises themselves were a mix of wiring things up in Unity and copying/pasting/pretending to code in Visual Studio. It's a hard thing to build a course around, especially when offering it to everyone with no prerequisites, but MS certainly did a good job. I struggled a bit with C# syntax, not having used it in years, but easily fell back to the published online material when I couldn't get something.


Usability and VR/AR Comparisons

OK, so the Hololens has the sweet, sweet hardware. It has the training and developer support. All good, right? Well, no, there's another huge consideration. The hugest consideration of all: how usable is it, and what can end users do with it?

You might guess that what end users do with it is up to you as a developer, and that's partially right. Everything has limitations that enable or inhibit potential. Here's the thing, though: take the iPhone or iPad, for example. When it came out, it WAS groundbreaking. But it wasn't SO different that you had to experience it to imagine what it could do. Steve Jobs could simply show you a picture of it. Yep, it had a screen. Jobs could show you interaction through a video: yep, you can swipe and tap and stuff. People were imaginative enough to put 2 and 2 together and picture the types of things you could do without ever having used the device. Sure, people are doing amazing things with touch devices that would never have been imagined without using one, but for the simplest interactions, you can certainly get the gist by seeing the device used, without using it yourself.

VR is somewhat harder to pin down, but again, it's somewhat easy to imagine. The promise is that you are thrown into another world. With VR, your imagination can certainly get ahead of itself. You might believe, without donning a headset, that you can be teleported to another world and feel like you're there.

Well, yes and no, and it's all due to current limitations. VR can have a bit of a screen-door effect, meaning that if you focus hard enough, you feel like you're in front of a screen. With VR, you are currently body-less. When you look down, you'll probably see no body and no hands, and even in a great experience, it won't look like YOUR body. This is a bit disconcerting. Also, you DEFINITELY feel like you're wearing a headset. So yes, with VR, you ARE transported to a different and immersive space; however, you need to suspend disbelief a bit (as amazing as it is).

AR is similar but a little worse. I can only comment on the Hololens, but it's not the magical mixed-reality fairy tale you might be led to believe. Even worse, MS's published videos and photos show the user completely immersed in holograms. I can't really fault them for this, because how do you sell and show a device like this that really must be worn to be experienced?


Field of View and other Visual Oddities

The biggest roadblock to achieving this vision is field of view. From what I've heard, it's the single biggest complaint about the Hololens. I heard this going in, and it was in the back of my head before I put the device on, but it took me an embarrassingly long time to realize what was happening. A limited field of view means that the virtual objects, or holograms, only take up a limited portion of the "screen." Obviously. But in practice, this looks totally weird, especially without some design trick to sweep it under the rug and integrate the limitation into the experience.

When you start viewing a 3D scene, if things are far away, they look fantastic! Well integrated with your environment and even interacting with it. Get closer, though, and things start falling out of your field of view. It's as if you're holding a mobile screen up fairly close to your face, but the screen has no edges and doesn't require your hand to hold it up. Well, what happens to things off screen? They simply disappear, or worse, they're partially on screen but clipped at the window's edge.

I took the image below from a WinBeta article about the field of view (here's their take on it), but for our purposes right now, it's a great approximation of what you would see:

An approximation of the Hololens field of view (image via WinBeta)

People also use peripheral vision to find things in a large space, but unfortunately, in this scenario you have no periphery, so it can be hard to get a good understanding of the space you're in right away.

There are a couple of other visual limitations that make your holograms a bit less believable. For one, you can certainly see your headset, the way you can always tell when you're wearing sunglasses and a baseball cap (though the Hololens certainly doesn't protrude as far as a cap brim). You can also see the tinted projection area and some of the contours of that area in your periphery. It's easy to ignore to an extent, but definitely still there. Also, you can see through the holograms for sure. They're pretty darn opaque, but they come across as a layer at maybe 90% opacity.

Another point: in all the demo materials, if you get suspiciously close to an object, it starts disappearing, clipped away by the camera. This is directly due to a camera setting in Unity (the near clip plane). You can certainly decrease this value; however, even the lowest setting is still a bit far and still clips, and even then, the Hololens makes you go a bit cross-eyed at something so close. You might say this is unfair because it's simply a casualty of 3D scenes. To that, I say check out the Oculus Rift Dreamdeck and its cartoony city demo. You can put your head right up next to a virtual object, EXTREMELY close, and feel like you could touch it with your cheek.

Lastly, overhead lights can cause some light separation and occasionally push rainbow streaks through your view, especially on bright white objects like the Unity splash screen. On this point, I can directly compare it to the flare around white objects on the Oculus Rift caused by longer eyelashes.

For the above reasons, I don't think the Hololens can yet be considered an immersive device the way VR is. VR is really good at transporting you to a different place. I thought the Hololens would be similar in that it would convincingly augment your real world. But it doesn't, for me. It's not believable. And that's why, for now (at least 10-15 years), I'm convinced that AR is NOT the next generation after VR. They will happily live together.

If VR is the immersion vehicle, something that transports you, what's AR? Or more specifically, the Hololens? Well, just because something isn't immersive doesn't mean it can't be incredibly useful. And I think that's where the Hololens lies for the near term. It's a productivity tool. I'm not sure games or storytelling or anything like that will catch on with the hardware as it is now (as cool as the demos are) until the immersion factor improves. No, I think it can extend your physical screen and digital world to an exceptional degree. Creating art, making music, even just reviewing documents can all be augmented. Your creation or productivity process doesn't have to be immersive, just the content you create.

This is where AR really shines over VR. In VR, we're clumsily bringing our physical world into the virtual world so we can assist in creation using things modeled after both our real tools and 2D GUI tools. Usually this doesn't work out, and we have to constantly remove our headset to properly do a job. With AR, the physical world is already there. Do you have a task that needs to be done on your computer or tablet? Don't even worry about removing your Hololens. Interact with both simultaneously…whatever. In fact, I think one HUGE area for the Hololens to venture into is the creation of immersive VR content itself: one device for the immersive, one for the productive.

That's not to say casual consumers and others won't eventually adopt it. It could certainly be useful for training, hands-free industrial work, anything that augments your world but doesn't require suspension of disbelief.


Spatial Awareness

Hololens immersion isn't all doom and gloom, though. Spatial awareness is, in fact, AMAZING. The 3D sensor is constantly scanning your environment and mapping everything as a (not fantastically accurate, but damn good) mesh. Since it uses infrared light like the Kinect to sense depth, it does have limitations. It can't see very far away, nor super close. The sun's infrared light can also flood the sensor, leaving it blind. One fun fact I learned: leather seems to not reflect the light too well, so leather couches are completely invisible!

We did a really simple demo of spatial mapping. It looked amazing as we lined the real walls with a custom texture of blue lines. My Adobe colleague made the lines flash and animate, which was super mesmerizing. Unfortunately, I didn't find the mixed-reality video capture feature until after that, so here's a nice demo I found on YouTube of something similar (though a bit more exceptional and interactive).

As scattered IR light tends to be sort of…well…scattered, meshes certainly don’t scan perfectly. That’s fine because MS has built some pre-packaged DLLs for smoothing the meshes out to flat planes and even offers advice on wall, ceiling, floor, and table finding.

Of course, once you've found the floor or surfaces to interact with, you can place objects, introduce physics to make your hologram interact with real surfaces (thanks, Unity, for simple collision and rigid bodies!), and even have your holograms hidden behind real things. The trainers seemed most eager to show us punching holes in real objects like walls and tables to reveal incredible and expansive virtual worlds underneath. Again, though, the incredible and expansive can't be immersive with the field of view the way it is.

Here's a good moment to show our group lobbing pellets at each other and having them hit our real-world bodies. The hole at the end SHOULD have been on the table, but I somehow screwed up the transformation of the 3D object in Unity, so it didn't appear in the right spot. It does show some great spatial mapping, avatars that followed us around, and punching a hole through reality!


Spatial Audio

Spatial audio is another thing I'm on the fence about. It's a bit weird on the Hololens. I give Microsoft major props for making the audio hardware AUGMENTED but not immersive. In VR systems, especially the Oculus Rift, you'd likely have over-the-ear headphones. Simple spatial audio (not crazy-advanced, rocket-science spatial audio) is limited to the horizontal plane, meaning it matches your home stereo: maybe a few front sources (left, right, and center) and a couple of back sources on your left and right. With these sources, you fade the audio between positions and get some pretty awesome positional sound.
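
To put that fading idea in code terms (a sketch using the browser's Web Audio API, not the Hololens audio stack), horizontal-only panning is literally a single pan value swept between left and right:

    // A minimal Web Audio sketch of horizontal-plane panning.
    const ctx = new AudioContext();
    const source = ctx.createOscillator();   // stand-in for any sound source
    const panner = ctx.createStereoPanner(); // pans on the horizontal plane only
    source.connect(panner);
    panner.connect(ctx.destination);
    panner.pan.value = -1; // hard left; sweep toward 1 to fade to the right
    source.start();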

On the Hololens, however, the hardware speakers are positioned above your ears on the headband. They aren't covering your ears like headphones.


So yes, you can hear the real world as easily as you could without the headband on, but the speakers being positioned above your ears makes it sound like the audio is always coming from above. One of our exercises included a hologram astronaut. You'd click on the astronaut and he'd disappear, but he'd keep talking to you, and you were supposed to find him. Everyone near me, myself included, kept looking up to find him, but he was never up high, and I'm sure this is a direct result of the Hololens speaker placement. I asked the instructor about positional audio that included vertical orientation as well, and he said it was computationally hard. I know there are some cool solutions for VR (very mathy), but I'm skeptical on the Hololens. The instructors did say to make sure that objects you'd expect higher up (like birds) appear higher up in your world. I personally think this was a design cop-out to overcome the hardware.


Input

The last thing I want to cover is input. Frankly, I'm disappointed with EVERYONE here (except for the HTC Vive). It seems mighty trendy for AR and VR headsets to make everyone do gaze input, but I hate it and it needs to die. The Hololens is no exception; gaze is baked into all the training material and all of the OS interactions. Same goes for casual interactions on the Oculus Rift (gaming interactions use an XBOX controller, still dumb IMO) and Google Cardboard. The HTC Vive, and soon the Oculus Rift, have touch controllers. Google Cardboard will soon be supplanted by Daydream, which features a more expressive (though not positional) controller. I've heard the Hololens might get some kind of pointer like Daydream's, but I've only heard that offhand.

Gaze input is simply using the direction of your eyes to control a cursor on screen. Actually, it's not even your eyes, since your eyes can look around; gaze input really uses the center of your forehead as a cursor. The experience feels super rigid to me; I'd prefer something more natural that lets you point at something you aren't looking at. With the Oculus Rift, despite having gaze input, you also have a remote control, so to interact with something, you gaze at it and click the remote.
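
To make the mechanic concrete in web terms (a sketch with three.js, not Hololens code; the list of interactive objects is an assumption), gaze input boils down to raycasting from the center of the camera's view:

    // Cast a ray straight out from the middle of the view ("your forehead")
    // and report the nearest object it hits; that's the gaze target.
    const raycaster = new THREE.Raycaster();
    const center = new THREE.Vector2(0, 0); // the exact middle of the view

    function getGazeTarget(camera, interactiveObjects) {
      raycaster.setFromCamera(center, camera);
      const hits = raycaster.intersectObjects(interactiveObjects);
      return hits.length ? hits[0].object : null;
    }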

The Hololens, on the other hand, well, it SEEMS cool, but it's a bit clunky. You're supposed to make an L with your thumb and index finger and drop the index finger in front of you (don't bend your finger, or it may not recognize the action). You also have to do this in front of the 3D sensor, which doesn't sound bad, but it would be way more comfortable to do it casually at your side or with your hand pointed down. And to be fair, spoken keywords like "select" can be used instead. We also played with exercises that tracked your hand's position to move and rotate a hologram. All the same, I really think AR/VR requires something more expressive, more tactile, and less clunky for input.


Conclusion

All that said, the Hololens is an amazing device with enormous potential. Given that Microsoft's CEO calls it a "5-year journey," what we have right now is really a developer preview of the device. For hardware, software, and support that feel so polished despite the interaction roadblocks, it will most likely be amazing what consumers get in their hands 5 years from now. So should you shove wads of cash at MS to get a device? Well, me, I'm excited about what's to come, but I see more potential for VR growth right now. I'm interested not just in new interaction patterns with AR/VR, but also in exploring how immersiveness makes you feel and react to your surroundings. The Hololens just doesn't feel immersive yet. Additionally, it seems like the AR/VR community is really converging on the same tools, so lessons learned in VR can be easily translated to AR (adjusting for the real-world aspect). The trainers made sure to point this out: the experiences you build with Unity should be easily built for other platforms. It will also be interesting to see where Google takes Tango (AR without the head-mounted display) in the next 5 years, and whether it gets paired with their Daydream project.

In the end, it's all about use cases, ideas, and making prototypes. If a killer idea comes along that makes sound business sense and specifically requires AR, the Hololens is pretty much the only game in town right now, so if that happens, I'll be sure to run out and (try to) get one. But in terms of adopting the Hololens because of perceived inevitability and coolness factor? I might wait.

But if you don't own any AR/VR devices, can't wait to put something in the Windows Store, can live with the limitations, and are already an MS junkie, maybe the Hololens is for you!

I'd like to give a big thanks to Microsoft for hosting us at their HQ with such fantastic trainers and training material. I expect big things from this platform, and their level of commitment to developers for such a new paradigm is practically unheard of.

Adventure Time: Magic Man’s Head Games…

… and other platformers.

In my last post, I was really psyched about the suspension of disbelief in a cartoony, fantasy-world-like experience. It's fitting that my first purchased content would be this game.

To be honest, I bought this for two reasons.

  1. It’s cheap at $4.99
  2. I freaking love Adventure Time

The result was that I was blown away. And this is odd, because if it were a normal 3D game release, it would be INCREDIBLY underwhelming. Even a bit underwhelming for $4.99.

Why? Well, gameplay won’t last more than an hour or so. Maybe two. The enemies aren’t that good (you’re mostly fighting sandwiches that don’t do much). The story isn’t deep at all, and the graphics are “meh”.

You too, reading this post, can be pretty ambivalent just by looking at a screen capture:

Adventure Time: Magic Man's Head Games


Like I said, the graphics are "meh." But allow me to say the first good thing about it, and it's that the graphics it does have capture the cartoony nature of the show pretty well.

In VR though? Wow.

In a year or two, I think this game will be as underwhelming in VR as it appears in screenshots. But props to Turbo Button for making you feel like a part of the game. Right off the bat, it's just crazy cute to live virtually inside this admittedly sparse world and see Finn the Human and Jake the Dog interacting with you.

Also right away, the story is very cleverly set up for the medium. Content creators should take note of the way you're written into the story and how that becomes a mechanism for playing the game.

Lemme explain…

The game starts with you approaching Finn and Jake in a field as a tiny person/thing/whatever. You're instantly accepted as buds with them. You never see who or what you are, because it's all first-person view. Unfortunately, Magic Man pops in from out of nowhere and starts wreaking havoc (sounds weird, but it's actually very in character for the show). Magic Man uses his magic and makes you, the player, incredibly huge. This story mechanism effectively turns you into the camera.

Finn and Jake plod on as you control Finn with your XBOX controller. But you, as…well, huge you/the game camera, follow them around in hopes that Magic Man can be found and subdued into turning you back to normal size. The perspective/size change alone is something very interesting and ripe to explore in VR. This game only touches on it briefly as its story intro, but all the same, I'd love to see more of it in other experiences.

Now that you're the game camera, very interesting things can be found and, well, NOT found.

Go back to the first set of popular 3D platformers. Say…Mario 64:

Super Mario 64

Because it was 3D, there was a camera. The camera would awkwardly follow you around, and when it was exceptionally awkward, you’d use your joystick to move it.

With Adventure Time, the camera still follows Finn around…but only loosely. Remember that you are the camera, and peeling the onion skin back, you’re wearing a headset on your face that you control as naturally as you would looking around in real life.

The game doesn't have very intricate levels, but there are some hidden-ish paths to explore. Free movement of your head, as well as the ability to physically lean, duck, or stand on your tiptoes in real life, adds a VERY interesting element to the old 3D platformer. In some ways, I can liken it to controlling a character within a dollhouse in good old-fashioned meatspace. It's a very unique perspective. I only wish there were other ways to control it besides the XBOX controller, because that feel in your hands pulls you back to thinking it's fake again.

It's so hard to get this point across without experiencing it for yourself. Just imagine being able to stand up and look around the environment while your character hangs tight.


In a further nod to keeping you part of the game, both Finn and Jake will interact with you and talk to you regularly, sometimes exploiting the infamous cheesy "I'm watching a 3D movie" gag by throwing something at your face. But yeah, here's Finn chatting you up:

Finn chatting you up

All in all, it's so worth $4.99. Probably not worth an extra zero, but I'm really glad I purchased this one as my first VR game. The original voices, and sticking to an (albeit simplish) Adventure Time plot with very Finn-and-Jake-ish dialog, make me smile.

I should also toss a nod to a game called "Lucky's Tale." It comes free with the Rift, but I didn't try it until after Adventure Time. It's obviously more geared toward kids, as adult me didn't care about the story. It was also a bit boring and cheap, just capturing coins as I plodded through the levels. Use of the camera in this 3D platformer follows the same gameplay mechanic as Adventure Time, but without you getting written into the story. I think by the time I got to Lucky's Tale, my awe and wonder at the reinvention of the 3D platformer was used up, so it fell flat for me. That said, if you're shy about trying it and you have a Rift, it certainly won't cost you anything! And to be fair, I do think its art direction, style, and level design surpass Adventure Time by a fair bit.


What to cover next? I just recently bought Subnautica and The Climb. Both are pretty fascinating, and I'll write those up later. As you can tell, I'm not so concerned with telling you about core gameplay or how fun a game is. I even thought I might just analyze user interaction in VR and how it's done in this brave new world, but it turns out that the kinds of feelings this content evokes are a major part of the user experience.


Progressive Web Apps

Last Friday, the SFHTML5 Meetup group met to discuss something called “Progressive Web Apps.” I had some preconceived notions that the topic would be pretty cool, but actually, it got me more excited about the state of mobile/web/desktop in 2016 than I could have imagined.

This might sound a bit dramatic, especially given the negative tone with which Alex Russell (@slightlylate), the speaker, started off on the mobile web. Despite being negative, he was spot on, and the talk was a real eye-opener for those of us who have been working on the mobile web for so long that we forget how much it sucks.

And yes, it does suck. A good point was made that every other mobile platform started out mobile. No vendor has ever really proposed, “OK, let’s take this UI platform, along with everything that’s ever been built with it that works on the desktop with mouse and keyboard, and dump it on mobile.” Nobody did that until it came to web browsers. It’s amazing that things work as well as they do right now.

Alex then took us through an exercise, asking for hands up from everyone who used a web-based email client on the desktop. Around 95% of hands went up. When the question was reversed, "Who uses a web-based email client on their mobile device?", the result was exactly the opposite.

Why does the mobile web suck so much? The folks that have given “Responsive Web Design” (RWD) a shot can’t be blamed for this problem. The rest of the web community…if you want your stuff to work on mobile, it’s time for a redesign.

Even with RWD, some mobile redesign love, and the MOBILE FIRST! mantras we shout, the fundamental user experience of the mobile web, as it is now, will never compete with that of mobile apps. It's probably not because HTML/JS/CSS is slow. Yeah, native can be faster, but if you think about it, most apps you use really don't need speed. If you don't agree with me, tell that to all the app developers using PhoneGap, Cordova, or even just a plain WebView for their entire products.

So speed isn't the issue for most apps. Touch, screen orientation, and size don't need to be an issue if the web team cares enough to address them. No, to compete with your typical mobile app, it comes down to how the browser itself runs and loads the page.

Real, installed apps have two pretty big advantages over the mobile web:

  • Once installed, the user can jump to the app from the home screen.
  • Even with no network connectivity, the app can still work or at least pretend to work.

There’s a 3rd advantage, and that’s push notifications: messages from the app that appear in the notification area even when the app isn’t running. I think that functionality is big-ish, but unless you have a significant base of users addicted to your app (think Facebook), it isn’t as big of a deal. Smaller guys and gals are just trying to develop a neat app.

Progressive Web Apps attempt to solve all of that missing functionality, and they do so in a way that doesn’t necessarily interfere with the current way we develop for the web.

Step #1: Invade the Home Screen and look like an app

Tackling the first issue of putting your page on the mobile home screen is pretty important. How the application is displayed, both on the home screen and when it loads, is part of that experience. To solve it, use the “Web App Manifest”! It’s a JSON file linked from your HTML head that allows you to define things like app icons, fullscreen display, splash screen, and more.
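
To make that concrete, here's a minimal sketch of a manifest (the name, colors, and icon path are made-up examples; the fields come from the manifest spec). You'd link it from your HTML head with something like <link rel="manifest" href="/manifest.json">:

    {
      "name": "My Neat App",
      "short_name": "NeatApp",
      "start_url": "/index.html",
      "display": "standalone",
      "background_color": "#2196f3",
      "theme_color": "#2196f3",
      "icons": [
        { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
      ]
    }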

This is the point where I should confess that I haven't worked with Progressive Web Apps yet. Luckily for me, this isn't a "how-to" article. So for great details on how to do this stuff, run an easy search, or for your convenience, read this nice technical article via MobiForge.

Either way, the idea is that if a user visits your page often enough within a certain time frame, the browser will ask the user if he or she would like to place the page on their home screen. Or, the user can simply add it to the home screen from the options menu in the mobile browser. That’s light-years better than having to open the browser, remember the URL, and load the page. I’m sure it’s a huge reason apps are winning on mobile right now.

Step #2: Be an app even when offline

Secondly, we have "Service Workers." They sound nerdy and boring, and maybe they are, but the potential they open up for appifying a webpage is huge. Basically, you use a Service Worker to intercept a specific set of resources as the webpage fetches them. Yes, if the user is offline the first time they want to access the page, they're outta luck. However, once those resources have been intercepted over a connection, they'll be cached. You, the developer, control which files get cached via a Javascript array in the code. On subsequent loads, even if the user is offline, the page can load with your cached assets, whether they're images, Javascript, JSON, styles, or whatever. Here's a better technical description of how that works.
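
As a minimal sketch of that idea (the file names and asset list here are made up), a Service Worker that caches a set of files on install and serves them cache-first looks roughly like this:

    // sw.js - cache a known set of assets on install, serve them cache-first
    const CACHE_NAME = 'my-app-v1';
    const ASSETS = ['/', '/styles.css', '/app.js', '/logo.png'];

    self.addEventListener('install', (event) => {
      // Wait until every asset in the list has been fetched and cached.
      event.waitUntil(
        caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS))
      );
    });

    self.addEventListener('fetch', (event) => {
      // Serve from the cache when we can; fall back to the network.
      event.respondWith(
        caches.match(event.request).then((cached) => cached || fetch(event.request))
      );
    });

The page opts in by registering the worker, and browsers without support simply ignore it:

    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }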

In fact, Google has published documentation and some tools on the similar notion of an “Application Shell Architecture” wherein persistent assets that don’t change can be cached, but dynamic content that isn’t cached will update.

What does this mean and will it all work?

Probably the most exciting thing about Progressive Web Apps is that neither the Manifest nor Service Workers will negatively affect a web page if the browser doesn't support the features. This means that the worst you can do is waste time and JS code on something that doesn't pan out as you hoped.

And there is some danger that it won't work. You may have noticed that Facebook today uses push notifications with Service Workers and that they do increase engagement on their site. So that's a win! Unfortunately, Service Workers and the Web App Manifest aren't supported everywhere. Unsurprisingly, that means they're pretty much everywhere but iOS/Safari. Even worse, browser vendors on iOS can't use their own web engines to support Progressive Web Apps: under the hood, both Chrome and Firefox have to use Safari tech.
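
For the curious, subscribing a page to push is roughly this (a sketch assuming a Service Worker is already registered; the server that actually sends the messages, and any server keys it might require, are omitted):

    // Assumes a registered Service Worker; the push-sending server is omitted.
    navigator.serviceWorker.ready
      .then((registration) =>
        registration.pushManager.subscribe({ userVisibleOnly: true })
      )
      .then((subscription) => {
        // Hand the endpoint to your server so it knows where to push.
        console.log('Push endpoint:', subscription.endpoint);
      });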

Apple seems tight-lipped about whether they intend to adopt Progressive Web Apps at all. I'm going to say that for now, it doesn't matter. If you've hung around the SF Bay Area enough, you may have noticed that many companies operate on an "iOS first, Android distant second" agenda. That doesn't make sense given that Android devices far surpass iOS devices in sales. But it does make sense in that iOS app sales are greater, and it can be daunting to develop apps for the large ecosystem of Android devices on the market.

However you slice it, Android is second for developers, which is bad for consumers. Right now, many companies will adopt a Web + iOS + maybe Android strategy. If they can combine the Android + Web strategy with Progressive Web Apps AND not force folks through the Google App Store, it’ll be a huge win for everyone. I’m guessing Google probably doesn’t even care much about having an app store, save for the fact that it was necessary to maintain a mobile ecosystem.

Meanwhile, the point was made at this Meetup that with every additional step a user must go through to download an app, there's around a 20% dropoff rate. Think about how many steps there are in clicking an app link, going to the store, starting the download, waiting for the install, and finally opening the app: many apps are losing out on users. And let's face it, the app gold rush is over. There are still some lottery winners, but most apps are too costly to make and market to justify what they bring in return.

Progressive Web Apps short-circuit that whole process by eliminating app discovery and install. While Android users will enjoy a huge user-experience win, Apple will most likely try to maintain their stranglehold on their app store and come kicking and screaming only once web devs demand these new features.

What's more, and what I'm really excited for, is a return to disposable digital experiences. Hate Adobe Flash or not, it really created a heyday for disposable experiences: Flash games to play a couple of times and get bored with, nifty digital playgrounds, etc. It's way harder to convince someone to download an app than it is to get them to go to a webpage and pop it on their home screen until they get bored of it in a week.

To extend that, I think Progressive Web Apps will also be a huge boon for web-based Virtual Reality. Immersive experiences will come from many different places and, frankly, will not be wanted as permanent app installs. Already, we're seeing the rise of VR portals like MilkVR, because smaller, one-off VR experiences need some kind of entry onto a device. When Progressive Web Apps make WebVR easier to get before eyes than an app portal, VR will win big.

To reiterate, I think Progressive Web Apps are the next big thing for mobile; they'll potentially replace lots of simple apps and mark the return of fun, disposable experiences. I don't have the technical experience with these new tools to back me up yet, but I will soon. Don't take my word for it, though! Read up on it and try it yourself.

Here's another post from the aforementioned Alex Russell: https://infrequently.org/2015/06/progressive-apps-escaping-tabs-without-losing-our-soul/

ES6 Web Components Part 5 – Wrap-Up

In Part 4 of my 5-part write-up, Project Setup and Opinions, I talked about lessons I took away from experimenting with ES6 Web Components. Last up is my wrap-up post…

This was a monster write-up! In my four previous parts, I showed you the basics of Web Components, what features make up a Web Component, how ES6 can help, and some coding conventions I've stumbled on through my experimentation.

That last sentence is my big caveat: it's trial and error for me. I'm constantly experimenting and improving my workflow where it needs improvement. I've presented some pieces here, but I may come up with an even better way. Or worse, I may discover I showed you folks a really bad way to do something.

One particular thing to be cautious of: I'm not talking about cross-browser compatibility here. I have done a bit of research showing that, theoretically, things should work cross-browser, especially if you use the WebComponents.js polyfill. I have done a little testing in Firefox, but that's it. I really haven't tested in IE, Edge, Safari, et cetera. I'm lucky enough to be in a position right now, at my job and in my personal experiments, where I'm focusing on building in Chrome, Chromium, or Electron (built on Chromium). I'm trying to keep compatibility in mind; however, without a real effort to test in various browsers, you may run into issues I haven't encountered.

It isn't all doom and gloom, though. WebComponents.js is used as the Google Polymer polyfill. It's why Polymer claims to have the cross-platform reach it has. See the support grid here for supported browsers.

Even better, as I complete this series, WebKit has just announced support for the Shadow DOM. This is fantastic, because the Shadow DOM is the hardest piece to polyfill. A while back, Polymer/WebComponents.js removed polyfilled Shadow DOM support for CSS because it wasn't very performant. Microsoft has announced that it's working on the Shadow DOM, while Firefox has it hidden behind a flag.

All this is to say: if you take anything away from this series of blog posts on ES6 Web Components, take away ideas. Treat them as such. Don't take this to your team and say, "Ben Farrell has solved it all; we're all in on Web Components." I truly hope everything I've said is accurate and a fantastic idea for you to implement, but don't risk your production project on it.

With all that said, implementation details aside, I do think Web Components are a huge leap forward in web development. They've been encouraging me to use pure vanilla Javascript everywhere. I haven't needed jQuery, syntactic sugar provided by a framework, or nontraditional markup for binding; it's all pure JS. I've been using JS techniques like addEventListener, querySelector, cloneNode, et cetera. Those are all core JS, CSS, and HTML concepts. When you understand them, you understand what every JS framework and library is built on. They transcend Angular, React, jQuery, Polymer, everything. They will help you learn why your favorite tool is built the way it is and why it has the shortcomings it does.
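
To illustrate with a minimal sketch (the tag name and markup are made up, and this uses the customElements v1 registration API, which may differ from the calls shown earlier in this series), a component built on nothing but those core APIs can look like this:

    // A made-up example component using only core DOM APIs.
    class FancyGreeting extends HTMLElement {
      connectedCallback() {
        // Shadow DOM gives us encapsulated markup and styles.
        const root = this.attachShadow({ mode: 'open' });
        root.innerHTML = '<button>Greet</button><p></p>';
        // Plain addEventListener and querySelector; no framework binding.
        root.querySelector('button').addEventListener('click', () => {
          const name = this.getAttribute('name') || 'world';
          root.querySelector('p').textContent = 'Hello, ' + name + '!';
        });
      }
    }
    customElements.define('fancy-greeting', FancyGreeting);
    // Usage in HTML: <fancy-greeting name="Ben"></fancy-greeting>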

Not only am I building pure JS here, but I’m organizing my code into reusable and modular components – what every JS framework tries to give you.

For these reasons, I think there is huge potential in Web Components, and I think they most likely represent what we'll be doing as a community years from now, especially when (hopefully not if) all the features of Web Components and ES6 are implemented in browsers everywhere.

As I said in my first post, I do like Google's Polymer a lot. But again, I strive to do less application-like things and more creative-like things. Therefore, MY components are fairly custom and don't need a library of Google's Material-designed elements. I've started a GitHub org called Creative Code Web Components, which contains a video player and a camera component that draw to the canvas so effects can be applied to their pixels. I've created a speech-input component as well, along with a pure ES6 Web Component slide-deck viewer.

Those components are all in early stages, but for fabricating various creative projects, I feel like this is the right way forward for me. Thus far, I have a truly modular set of pieces for creating a neat prototype or project.

Perhaps if you are doing a real business application, Polymer is great for you. Or React. Or Angular. Regardless, I think what I’ve been learning is great info for anyone in web dev today to have. I wouldn’t have written 10,000 words about it otherwise!

This has been my big 5-part post about creating Web Components with ES6. To view the entire thing, check out my first article.