A-Bad attitude about A-Frame (I was wrong)

I haven't written many posts lately, especially tech-related ones. The reason is that I've been chasing the mixed reality train and learning. I've had a number of false starts with 3D development in general, and I never dove into the deep end until now. For the past year or so, I've been using both Unity and WebVR.

My failed 3D career

Prior to this, I went through a Shockwave 3D phase in the early 2000s. People forgot about that REALLLLLLL quick. Likewise, when I got all excited about Papervision, Flash's CPU-based 3D engine (not made by Macromedia, but built for Flash), I remember learning it and then watching the hype die down to zero within a few months. And then of course, there was Adobe Flash's Stage3D. But as you might recall, that was about the time Steve Jobs took the wind out of Flash's sails, and it was knocked down several pegs in the public eye.

Whatever your opinion on Director or Flash, it doesn't matter. Approachable 3D development never felt like it had a fair shot (sorry, OpenGL/C++ just isn't approachable to me or to lots of other people). In my opinion, there were two prime reasons for this. The first is that GPUs are only just now standard on everyone's machine. I remember having to worry about how spectacularly crappy things would look with CPU rendering as a fallback. Secondly, and this is huge: visual tooling.

Don't believe me? I think the huge success of Unity proves my point. Yes, C# is approachable, but also, being able to place objects in a 3D viewport and wire up behaviors via a visual property panel is huge. Maybe seasoned developers will eventually move past this approach, but it offers a learning curve that isn't a 50-foot-high rock face in front of you.

Unity is fantastic, for sure, but after being a Flash developer in 2010 and watching my career seemingly crumble around me, I'm not going to be super quick to sign up for the same situation with a different company.

Hello, WebVR

So, enter WebVR. It's Javascript and WebGL based, so I can continue being a web developer and using existing skills. It's also bleeding edge, so it's not like I have to worry about IE (hint: VR will never work on IE, though Edge's Mixed Reality support is currently the only publicly released implementation of WebVR; Firefox and Chrome's WebVR support is still in experimental builds). The point is that all those new ES6 features I dig, I can use freely without worrying about polyfills (I do polyfill for import, though… but that's another article for another time).

Those of us who were excited about WebVR early on, probably used Three.js with some extensions. As excitement picked up steam, folks started packaging up Three.js with polyfills to support everything from regular desktop mouse interaction, to Google Cardboard, to the Oculus Rift and Vive, all with the same experience with little effort from the developer.

I found that object oriented development with ES6 classes driving Three.js made a lot of sense. If you take a peek at any of the examples in Three.js, they are short, but the code is kind of an unorganized mess. This is certainly forgivable for small examples, but not for big efforts that I might want to try.

So, I was pretty happy here for a while. Having a nice workflow that you ironed out doesn’t necessarily make you the most receptive to even newer ways, especially those that are super early and rough around the edges.

Enter A-Frame

It was early last year (2016), I believe, when I was attending some meetups and conference sessions on WebVR, that Mozilla made a splash with A-Frame. Showing such an early release of A-Frame was a double-edged sword. On the plus side, Mozilla was showing leadership in the WebVR space and getting web devs and designers interested in the promise of approachable, tag-based 3D and VR. The downside was that people like me who were ALREADY interested in WebVR and already had a decent approach for prototyping were shown an alpha-quality release with a barely functional inspection and visual editing tool that didn't seem to offer anything better than the Three.js editor.

Not only was I unexcited about it, I also reasoned that the whole premise of A-Frame was silly. Why would a sane person find value in HTML tags for 3D elements?

Months passed, and I was happy doing my own thing without A-Frame. I even made a little prototyping library based on existing WebVR polyfills with an ES6 class based approach for 3D object and lifecycle management. It was fairly lightweight, but it worked for a couple prototypes I was working on.

A-Frame Round 2

Right around when A-Frame released 0.4 or 0.5, the San Francisco HTML5 meetup group invited the team back for another session at another WebVR event. The A-Frame community had grown. There were a crazy number of components that the community had built because… hey, A-Frame is extensible now (I didn't know that!). The A-Frame visual inspector/editor is now really nice and accessible as a debug tool from any A-Frame scene as you develop it. Based on the community momentum alone, I knew I had to take a second look.

To overcome my bad A-Frame attitude, I carved out a weekend to accomplish two goals:

  • Work out an organized and scalable workflow that doesn't look like something someone did in 2007 with jQuery
  • Have a workflow where tags are completely optional

I actually thought these might be unreasonable goals, and that I was really just setting out to prove they'd fail.

A-Scene

As I mentioned briefly, I had my own library I was using for prototyping. It was basically a package of existing WebVR polyfills with some nice ES6 class-based organization around it.

I knew that A-Frame was built much the same way – on top of Three.js with the same polyfills (though slightly modified). What I didn’t count on was that our approach to everything was so similar that it took me just a few hours to completely swap my entire foundational scene out for their <a-scene> tag, and…. it…worked.

This blew my mind, because I had my own 3D objects and groups created with Three.js and the only tag I put on my HTML page was that <a-scene> tag.

Actually, there were a couple of hiccups along the way, but given that I was shoving what I thought was a square peg into a round hole, two minor code changes are nothing.

My approach is like so:

Have a “BaseApplication” ES6 class. This class would be extended for your application. It used to be that I’d create the underlying Three.js scene here in the class, but with A-Frame, I simply pass the <a-scene> element to the constructor and go from there. One important application or 3D object lifecycle event is to get a render call every frame so you can take action and do animation, interaction, etc. Prior to A-Frame, I just picked this event up from Three.js.

Like I said, two hiccups. First, my application wasn't rendering its children, and I didn't know how to pick up the render event every frame. Easy. Pretend your class is an element by assigning an "el" property to it and setting it to playing:

this.el = { isPlaying: true };

Next, simply register this class with the A-Frame scene behaviors like this:

this._ascene.addBehavior(this);

Once this behavior is added, if your class has a “tick” method, it will be fired:

/**
* a-frame tick
* @param time
*/
tick(time) {
...
}

Likewise, for any objects you add to the scene that you want to have these tick methods, simply add them to the behavior system in the same way.

In the end, my hefty BaseApplication.js class that instantiated a 3D scene, plugins, and polyfills was chopped down to something around 50 lines long (and I DO use block comments):

export default class BaseApplication {
    constructor(ascene, cfg) {
        if (!cfg) {
            cfg = {};
        }
        this._ascene = ascene;
        this._ascene.appConfig = cfg;
        this._ascene.addBehavior(this);
        this.el = { isPlaying: true };
        this.onCreate(ascene);
    }

    get config() {
        return this._ascene.appConfig;
    }

    /**
     * a-frame tick
     * @param time
     */
    tick(time) {
        this.onRender(time);
    }

    /**
     * add objects to scene
     * @param grouplist
     */
    add(grouplist) {
        if (grouplist.length === undefined) {
            grouplist = [grouplist];
        }
        for (var c in grouplist) {
            grouplist[c].addedToScene(this._ascene);

            if (grouplist[c].group) {
                this._ascene.appendChild(grouplist[c].group);
                this._ascene.addBehavior(grouplist[c]);
            } else {
                this._ascene.appendChild(grouplist[c]);
            }
        }
    }
    // meant to be overridden with your app
    onCreate(ascene) {}
    onRender(time) {}
}

As you might be able to tell, the only verbose part is the method to add children where I determine what kind of children they are: A-Frame elements, or my custom ES6 class based Object Groups.
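For a sense of how this gets used, here's a rough sketch (the app class, config values, and "FlagGroup" here are placeholders I made up for illustration, not part of A-Frame or my actual prototypes):

import BaseApplication from './BaseApplication.js';
import FlagGroup from './FlagGroup.js'; // hypothetical custom ES6 object group

class MyApp extends BaseApplication {
    /**
     * called once from the BaseApplication constructor
     */
    onCreate(ascene) {
        // add a custom object group (or a plain A-Frame element) to the scene
        this.add(new FlagGroup());
    }

    /**
     * called every frame through the tick/behavior system
     */
    onRender(time) {
        // animation and interaction updates go here
    }
}

// the only markup on the page is the <a-scene> tag
new MyApp(document.querySelector('a-scene'), { debug: true });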

How I learned to Love Markup

So, at this point I said to myself… "Great! I still really think markup is silly, but A-Frame has a team of people that will keep up with WebVR and update the basics as browsers and the spec evolve, and I should just use their <a-scene> and ignore most everything else."

Then, I hit ctrl-alt-i.

For those that don't know, this loads the A-Frame visual inspector and editor. Though, of course, it won't save your changes back into your code. Let me say first, the inspector got reallllllly nice and is indispensable for debugging your scene. The A-Frame team is forging ahead with more amazing features, like recording your interactions in VR so you can replay them at your desk and do development without constantly running around.

So, when I loaded that inspector for the first time, I was disappointed that I didn’t see any of my objects. I can’t fault A-Frame for this, I completely bypassed their tags.

That alone roped me in. Here is this perfectly nice visual inspector, and I'm not going to deny myself the use of it just because I can't be convinced to write some HTML.

Using Tags with Code

At this point, A-Frame and I are BFFs. But I still want to avoid a 2008-era jQuery mess. Well, it turns out, 3D object instantiation is about as easy in A-Frame as it is with code. It's actually easier, because tags are concise, whereas instantiating materials, primitives, textures, etc. in code can get pretty verbose.

My whole perspective has been flipped now.

  • Step 1: Create the element – either set its innerHTML to whatever you want, or use createElement and set attributes individually
  • Step 2: appendChild to the scene (or another A-Frame element)

That’s it. I’m actually amazed how responsive the 3D scene is for appending and removing elements. There’s no “refresh” call, nothing. It just works.

I actually created a little utility method to parse JSON into an element that you could append to your scene:

sceneElement.appendChild(AFrameGroup.utils.createNode('a-entity', {
    'scale': '3 3 3',
    'obj_model': 'obj: ./assets/flag/model.obj; mtl: ./assets/flag/materials.mtl',
    'position': '0 -13 0'
}));

AFrameGroup.utils.createNode = function (tagname, attributes) {
    var el = document.createElement(tagname);
    for (var c in attributes) {
        var key = c.replace(/_/g, '-'); // hyphens aren't cool as JSON-ish keys, so use underscores and convert to hyphens here
        el.setAttribute(key, attributes[c]);
    }
    return el;
};

Yeah, there's some stuff I don't like all that much – like, if I want to change the position of an object, I have to go through element.setAttribute('position', '0 50 0'). Seems a bit verbose, but I'll take it.

A-Happy Prototyper

Overall, the markup aspect, early versions, and lack of organization/cleanliness in the code examples made me sad. But examples are just examples. I can't fault them for highlighting simple examples that don't scale well as an application when they intend to showcase quick experiences. A-Frame wants to be approachable, and if I yammer on to people about my ES6 class based approach with extendable BaseApplication and BaseGroup/Object classes, yes, I might appeal to some folks, but the real draw of A-Frame right now is for newcomers to fall in love with markup that easily gets them up and running and experiencing their own VR creations.

All that said, I did want to share my experience for the more seasoned web dev crowd because if you peel back the layers in A-Frame you’ll find an extremely well made library that proves to be very flexible for however you might choose to use it.

I'm not sure whether I want to link you folks to my library that's in progress yet, but I'll do it anyway. It's majorly in flux, and I keep tearing stuff out as I see better ways to do things in A-Frame. But it's helping me prototype and keep a clear distinction between helper code and my prototype logic, so maybe it'll help you (just don't rely on it!)

 

 

A Week at the Hololens Academy

Ahhhhh, the Hololens. I finally get to check it off my list. When I express my disappointment at not being able to try it out to my friends and co-workers who are interested in VR, it's kinda like talking about going to Hawaii. "Ohhhh, you haven't been? You really should, it's an enjoyable experience." (said, of course, with a knowing smirk and possibly a wink).

There's a good reason for that knowing wink. It's a massively cool device, and despite being publicly available now to early adopters, there's a waiting list and it's $3k. Someone mentioned to me that they are in the "5th Wave" of the wait list. So, right now, it's hard to get your hands on one. And that's IF you're willing to shell out the money.

Should you buy it if you get the chance? Maybe. For me, there’s lots of parallels to Google Glass from a few years ago, but also lots of reasons it might break free from technological oddity into the mainstream.

In terms of sheer impressiveness in hardware, hell yes it's worth $3k. Though it can be tethered via USB for the purposes of big deployments of your project, it's completely wireless and independent. The computer to run it is built right into the device. It packs Wifi, 64GB of memory, a camera (both RGB and depth), and other sensors for head tracking (probably an accelerometer and gyroscope). Even the casing of the device is impressive. It looks slick, true, but the rotatable, expandable band that makes every effort to custom fit your head is practically perfect. I didn't put it on my head quite correctly at first, and the display was resting on my nose a bit, which would have been uncomfortable after a while. Turns out, if you balance it on your head correctly, it barely touches your nose and almost floats on your face.

Compare the hardware to something like the Oculus Rift or the HTC Vive, which are just displays that you tether to a computer you supply yourself (and which aren't augmented reality). They run $600-800, plus at least a $1k desktop computer. I can't recall who, but someone with me made the almost cruel observation that an NVIDIA GTX970 graphics card alone is comparable in size to the entire Hololens headset.

The display is another massively cool piece of hardware and makes the entire system come together as one. It has its problems, which I'll get into (cough cough, field of view), but I'll talk about that in a second when I get to usability. And make no mistake… usability is the reason you should or should not run right out and purchase one of these devices. The Hololens isn't so much a tool as it is an experience. It's not a hammer and nail. It's more of a workbench. A beautiful workbench can be amazing, but if you can't open the drawer to get to your hammer and nails when you want to create something, it's worthless.

 

Training at Microsoft HQ

Awful analogies aside, and usability aside, let me say a quick word about the training. Microsoft calls it "The Hololens Academy". It occurs to me just now that this might be a thinly veiled Star Trek reference. In fact, ALL of the training assets were space themed: from a floating astronaut, to a virtual futuristic tabletop projector, to a mid-air representation of our Solar System.

My company, Adobe, was kind enough to send me to Redmond last minute to do some learning. I honestly didn't know what to expect because it was so last minute. Was it super secret stuff? No… but considering I hadn't seen the not-secret stuff yet, it really didn't make too much difference. In fact, it was SO not secret that our class followed along with well-developed training material that MS has published online.

In fact, as a testament to how well developed it is… I was weirded out a bit on the first day, to be honest. It had that theme park feel. Or that historical city tour feel. You know, where every word and joke your guide says is rehearsed and feels forced? But I got over that real fast, and you know why? Because the sessions ran like clockwork. The instructors kept exact time to an eerie degree, and the assistants WERE psychic. Virtually every time I had trouble, an instructor was behind me within a few seconds helping me out. I didn't raise my hand, look confused, nothing. And there wasn't a single time where I felt like they were annoyingly hovering. They just showed up out of the blue being insanely helpful.

The room itself was laid out extremely well for training. An open workspace with large-screen TVs on the walls facing every which way, with the instructor in the center on a headset, made for a great training space. The instructor didn't even drive the software. He or she (they changed out every 3 hours) would have someone else driving the presentation machine while they spoke. This kind of coordination takes practice, no doubt.

The walls and tables were decorated for the event too, along with coffee tables specifically for placing your virtual assets on (holograms). The room is probably a permanent fixture specifically for this.

This all means one thing to me. We've got publicly available training materials with tons of care put into creating them, extremely well-staffed and smart trainers, and a training room just for the Hololens. Add to this the hundreds of engineers working on the Hololens and the fact that MS is just now offering developer support for it… and the message is loud and clear. Microsoft is placing a HUGE bet on the Hololens. They aren't half-assing this like a lot of companies in their position might for a product that is so different and whose adoption is so hard to predict.

Training style aside – I found another thing extremely interesting about the training. It’s all about Unity.

 

Authoring with Unity

Unity seems like kind of an underdog at the moment. It's essentially a 3D authoring environment/player. It doesn't have nearly the reach of something like Flash or QuickTime, which at one point or another have been ubiquitous. Yet it's a favorite of 3D creators (designers and devs) who want to easily make interactive 3D experiences. The reach of Unity alone (browser plugin, WebGL, Android, iOS, desktop applications, Oculus, Vive, Gear, and now Hololens, among others) puts it right in the middle of being THE tool for creating VR/AR/mixed reality content.

I was naive not to expect MS to use Unity for experience creation. But the fact is, it's one of the ONLY tools for easy interactive 3D scene creation. I honestly expected Microsoft to push us into code-only experience creation. Instead, they steered us into a combo of 3D scene building with Unity and code editing (C#) with Visual Studio. To be honest, I'm a little resistant to Unity. It's not that it isn't an excellent tool, but I've gone through too many authoring tools that have fallen out of favor. This training is a wake-up call, though. If Oculus, Gear, and the HTC Vive weren't enough to knock me over the head, a major company like MS (which has a great history of building dev tools) using a third-party tool like this… well, consider me knocked over the head and kicked in the shins.

The exercises themselves were a mix of wiring things up in Unity and copying/pasting/pretending to code in Visual Studio. It's a hard thing to build a course around, especially when offering it to everyone with no prerequisites, but MS certainly did a good job. I struggled a bit with C# syntax, not having used it in years, but easily fell back on the published online material when I couldn't get something.

 

Usability and VR/AR Comparisons

OK, so the Hololens has the sweet, sweet hardware. It has the training and developer support. All good, right? Well, no, there's another huge consideration. The hugest consideration of all: how usable is it, and what can end users do with it?

You might guess that what end users do with it is up to you as a developer, and that's partially right. Everything has limitations that enable or inhibit potential. Here's the thing, though – take the iPhone or iPad, for example. When it came out, it WAS groundbreaking. But it wasn't SO different that you had to experience it to imagine what it could do. Steve Jobs could simply show you a picture of it. Yep, it had a screen. Jobs could show you interaction through a video: yep, you can swipe and tap and stuff. People were imaginative enough to put 2 and 2 together and imagine the types of things you could do without ever having used the device. Sure, people are doing amazing things with touch devices that would never have been imagined without using one – but for the simplest interactions, you can certainly get the gist from seeing the device used without using it yourself.

VR is somewhat harder to pin down, but again, it's somewhat easy to imagine. The promise is that you are thrown into another world. With VR, your imagination can certainly get ahead of itself. You might believe, without donning a headset, that you can be teleported to another world and feel like you're there.

Well, yes and no, and it's all due to current limitations. VR can have a bit of a screen-door effect, meaning that if you focus hard enough you feel like you're in front of a screen. With VR, you are currently body-less. When you look down, you'll probably see no body and no hands, or even if it's a great experience, it won't look like YOUR body. This is a bit of a disconcerting experience. Also, you DEFINITELY feel like you're wearing a headset. So yes… with VR, you ARE transported to a different and immersive space; however, you need to suspend disbelief a bit (as amazing as it is).

AR is similar but a little worse. I can only comment on the Hololens, but it's not the magical mixed reality fairy tale you might be led to believe. Even worse, MS's published videos and photos show the user being completely immersed in holograms. I can't really fault them for this, because how do you sell and show a device like this that really must be worn to be experienced?

 

Field of View and other Visual Oddities

The biggest roadblock to achieving this vision is field of view. From what I've heard, it's the single biggest complaint about the Hololens. I heard this going in, and it was in the back of my head before I put the device on, but it took me an embarrassingly long time to realize what was happening. A limited field of view means that the virtual objects or Holograms only take up a limited portion of the "screen". Obviously. But in practice, this looks totally weird, especially without some design trick to sweep it under the rug and integrate the limitation into the experience.

When you start viewing a 3D scene, if things are far away, they look fantastic! Well integrated with your environment and even interacting with it. Get closer, though, and things start falling out of your field of view. It's as if you're holding a mobile screen up fairly close to your face, but the screen has no edges and it doesn't require your hand to hold it up. Well, what happens to things off screen? They simply disappear, or worse, they are partially on screen but clipped to the window.

I took this image from a winbeta.com article about the field of view (here's their take on it), but for our sake right now, it's a great approximation of what you would see:

[Image: approximation of the Hololens field of view, via WinBeta]

People also use peripheral vision to find things in a large space, but unfortunately in this scenario you have no periphery – so it can be easy to not have a good understanding of the space you’re in right away.

There are a couple of other visual limitations that make your holograms a bit less believable. For one, you can certainly see your headset. The best way to describe it is that it's like wearing sunglasses and a baseball cap (though the Hololens doesn't protrude as far as a cap rim). You can also see the tinted projection area and some of the contours of that area in your periphery. It's easy to ignore to an extent, but definitely still there. Also, you can see through the Holograms for sure. They're pretty darn opaque, but they come across as a layer with maybe 90% opacity.

Another point is that in all the demo materials, if you get suspiciously close, the object starts disappearing or being clipped. This is directly due to a camera setting in Unity (the near clipping plane). You can certainly decrease this value; however, even the lowest setting is still a bit far and does clip, and even then, the Hololens makes you go a bit cross-eyed at something so close. You might say this is unfair because it's simply a casualty of 3D scenes. To that, I say check out the Oculus Rift Dreamdeck and use the cartoony city demo. You can put your head right up next to a virtual object, EXTREMELY close, and just feel like you can touch it with your cheek.

Lastly, overhead lights can cause some light separation and occasionally push rainbow streaks through your view, especially on bright white objects like the Unity splash screen. On this point, I can directly compare it to the flaring of white objects on the Oculus Rift caused by longer eyelashes.

For the above reasons, I don't think the Hololens can be considered an immersive device yet like VR is. VR is really good at transporting you to a different place. I thought the Hololens would be similar in that it would convincingly augment your real world. But it doesn't for me. It's not believable. And that's why, for now (at least 10-15 years), I'm convinced that AR is NOT the next generation after VR. They will happily live together.

If VR is the immersion vehicle – something that transports you – what's AR? Or more specifically, the Hololens? Well, just because something isn't immersive doesn't mean it can't be incredibly useful. And I think that's where the Hololens lies for the near term. It's a productivity tool. I'm not sure games or storytelling or anything like that will catch on with the hardware as it is now (as cool as the demos are) until the immersion factor improves. No – I think it can extend your physical screen and digital world to an exceptional degree. Creating art, making music, even just reviewing documents can all be augmented. Your creation or productivity process doesn't have to be immersive, just the content you create.

I think this point is where AR really shines over VR. In VR, we’re clumsily bringing our physical world into the virtual world so we can assist in creation using things modeled after both our real tools and 2D GUI tools. And usually this doesn’t work out. We have to remove our headset constantly to properly do a job. With AR, the physical world is already there. Do you have a task that needs to be done on your computer or tablet? Don’t even worry about removing your Hololens. Interact with both simultaneously…whatever. In fact, I think one HUGE area for the Hololens to venture into is the creation of immersive VR content itself. One for the immersive, one for the productive.

That’s not to say I don’t think casual consumers or others will eventually adopt it. It certainly could be useful for training, aid in hands free industrial work, anything that augments your world but doesn’t require suspension of disbelief.

 

Spatial Awareness

Hololens immersion isn’t all doom and gloom though. Spatial awareness is, in fact, AMAZING. The 3D sensor is constantly scanning your environment and mapping everything as a (not fantastically accurate but damn good) mesh. Since it uses infrared light like the Kinect to sense depth, it does have its limitations. It can’t see too far away, nor super close. The sun’s infrared light can also flood the sensor leaving it blind. One fun fact that I’ve learned is that leather seems to not reflect the light too well, so leather couches are completely invisible!

We did a really simple demo of spatial mapping. It looked amazing when we lined the real walls with a custom blue-lined texture. My Adobe colleague decided to make the lines flash and animate, which was super mesmerizing. Unfortunately, I didn't find the mixed reality video capture feature until after that, so here's a nice demo I found on YouTube of something similar (though a bit more exceptional and interactive).

As scattered IR light tends to be sort of…well…scattered, meshes certainly don’t scan perfectly. That’s fine because MS has built some pre-packaged DLLs for smoothing the meshes out to flat planes and even offers advice on wall, ceiling, floor, and table finding.

Of course, once you've found the floor or other surfaces to interact with, you can place objects, introduce physics to make your Hologram interact with real surfaces (thanks, Unity, for simple collision and rigid bodies!), and even have your Holograms hidden behind real things. The trainers seemed most eager to show us punching holes in real objects like walls and tables to reveal incredible and expansive virtual worlds underneath. Again… though… the incredible and expansive can't be immersive with the field of view the way it is.

Here’s a good time to show our group lobbing pellets at each other and hitting our real world bodies. The hole at the end SHOULD have been on the table, but I somehow screwed up the transformation of the 3D object in Unity, so it didn’t appear in the right spot. It does show some great spatial mapping, avatars that followed us around, and punching a hole through reality!

 

Spatial Audio

Spatial audio is another thing I'm on the fence about. It's a bit weird on the Hololens. I give Microsoft major props for making the audio hardware AUGMENTED but not immersive. In VR systems, especially the Oculus Rift, you'd likely have over-the-ear headphones. Simple spatial audio (not crazy advanced rocket-science spatial audio) is limited to your horizontal plane. Meaning, it matches your home stereo: maybe a few front sources (left, right, and center) and a couple of back sources on your left and right. With these sources, you fade the audio between them and get some pretty awesome positional sound.

On the Hololens, however, the hardware speakers are positioned above your ears on the headband. They don't cover your ears like headphones.

 

So yes, you can hear the real world as easily as you could without the headband on, but the speakers being positioned above your ears makes it sound like the audio is always coming from above. One of our exercises included a Hologram astronaut. You'd click on the astronaut, and he'd disappear, but he'd talk to you and you were supposed to find him. Everyone near me, myself included, kept looking up to find him, but he was never up high – and I'm sure this is a direct result of the Hololens speaker placement. I asked the instructor about positional audio that included vertical orientation as well, and he said it was computationally hard. I know there are some cool solutions for VR (very mathy), but I'm skeptical about the Hololens. The instructors did say to make sure that objects you'd expect higher up (like birds) appear higher up in your world. I personally think this was a design cop-out to overcome the hardware.

 

Input

The last thing I want to cover is input. Frankly, I'm disappointed with EVERYONE here (except for the HTC Vive). It seems mighty trendy for AR and VR headsets to make everyone do gaze input, but I hate it and it needs to die. The Hololens is no exception here; it's included in all the training material and all of the OS interactions. Same goes for casual interactions on the Oculus Rift (gaming interactions use an XBOX controller, still dumb IMO) and Google Cardboard. The HTC Vive and soon the Oculus Rift will have touch controllers. Google Cardboard will soon be supplanted by Daydream, which features a more expressive controller (though not a positional one). I've heard the Hololens might have some kind of pointer like Daydream, but I've only heard that offhand.

Gaze input is simply using the direction of your gaze to control a cursor on screen. Actually, it's not even your eyes, since your eyes can look around… Gaze input is using the center of your forehead as a cursor. The experience feels super rigid to me; I'd really prefer something more natural that allows you to point at something you aren't looking at. With the Oculus Rift, despite having gaze input, you also have a remote control. So to interact with something, you gaze at it and click the remote.

The Hololens, on the other hand – well, it SEEMS cool, but it's a bit clunky. You're supposed to make an L with your thumb and index finger and drop the index finger in front of you (don't bend your finger, or it may not recognize the action). You also have to do this in front of the 3D sensor, which doesn't sound bad, but it would be way more comfortable to do it casually at your side or with your hand pointed down. And to be fair, spoken keywords like "select" can be used instead. We also played with exercises that tracked your hand's position to move and rotate a Hologram. All the same, I really think AR/VR requires something more expressive, more tactile, and less clunky for input.

 

Conclusion

All that said, the Hololens is an amazing device with enormous potential. Given that Microsoft's CEO calls it a "5 year journey", what we have right now is really a developer preview of the device. For hardware, software, and support that feel so polished despite the interaction roadblocks, it will most likely be amazing what consumers get in their hands 5 years from now. So should you shove wads of cash at MS to get a device? Well, me… I'm excited about what's to come, but I do see more potential for VR growth right now. I'm interested not just in new interaction patterns with AR/VR, but also in exploring how immersiveness makes you feel and react to your surroundings. The Hololens just doesn't feel immersive yet. Additionally, it seems like the AR/VR community is really converging on the same tools, so lessons learned in VR can be easily translated to AR (adjusting for the real-world aspect). The trainers made sure to point this out – the experiences you build with Unity should be easily built for other platforms. It will also be interesting to see in the next 5 years where Google takes Tango (AR without the head-mounted display) and potentially pairs it with their Daydream project.

All that said, it’s all about use cases and ideas and making prototypes. If a killer idea comes along that makes sound business sense and specifically requires AR, the Hololens is pretty much the only game in town right now, so if that happens I’ll be sure to run out and (try to) get one. But in terms of adopting the Hololens because of perceived inevitability and coolness factor? I might wait.

But if you don't own any AR/VR devices, can't wait to put something in the Windows store, can live with the limitations, and are already an MS junkie – maybe the Hololens is for you!

I’d like to give a big thanks to Microsoft for having us in their HQ and having such fantastic trainers and training material. I expect big things from this platform, and their level of commitment to developers for such a new paradigm is practically unheard of.

Progressive Web Apps

Last Friday, the SFHTML5 Meetup group met to discuss something called “Progressive Web Apps.” I had some preconceived notions that the topic would be pretty cool, but actually, it got me more excited about the state of mobile/web/desktop in 2016 than I could have imagined.

This might sound a bit dramatic, especially given the negative tone that Alex Russell (@slightlylate), the speaker, started off with on the mobile web. Despite being negative, he was spot on, and the talk was a real eye-opener for those of us who have been working on the mobile web for so long that we forget how much it sucks.

And yes, it does suck. A good point was made that every other mobile platform started out mobile. No vendor has ever really proposed, “OK, let’s take this UI platform, along with everything that’s ever been built with it that works on the desktop with mouse and keyboard, and dump it on mobile.” Nobody did that until it came to web browsers. It’s amazing that things work as well as they do right now.

Alex then took us through an exercise of asking for hands up on who used a web-based email client on the desktop. Around 95% of our hands raised. When the question was reversed, “Who uses a web-based email client on their mobile device?” the result was exactly the opposite.

Why does the mobile web suck so much? The folks that have given “Responsive Web Design” (RWD) a shot can’t be blamed for this problem. The rest of the web community…if you want your stuff to work on mobile, it’s time for a redesign.

Even with RWD, some mobile redesign love, and the MOBILE FIRST! mantras we shout, the fundamental user experience of the mobile web, as it is now, will never compete with that of mobile apps. It's probably not because HTML/JS/CSS is slow. Yeah, native can be faster, but if you think about it, most apps you use really don't need speed. If you don't agree with me, tell that to all the app developers using Phonegap, Cordova, or even just a plain WebView for their entire products.

So speed isn’t the issue for most apps. Touch, screen orientation, and size don’t need to be an issue if the web team cares enough to address them. No, to compete with your typical mobile app, design comes down to how the browser itself runs and loads the page.

Real, installed apps have two pretty big advantages over the mobile web:

  • Once installed, the user can jump to the app from the home screen.
  • Even with no network connectivity, the app can still work or at least pretend to work.

There’s a 3rd advantage, and that’s push notifications: messages from the app that appear in the notification area even when the app isn’t running. I think that functionality is big-ish, but unless you have a significant base of users addicted to your app (think Facebook), it isn’t as big of a deal. Smaller guys and gals are just trying to develop a neat app.

Progressive Web Apps attempt to solve all of that missing functionality, and they do so in a way that doesn’t necessarily interfere with the current way we develop for the web.

Step #1: Invade the Home Screen and look like an app

Tackling the first issue of putting your page on the mobile home screen is pretty important. How the application is displayed, both on the home screen and when it loads, is part of that experience. To solve it, use the “Web App Manifest”! It’s a JSON file linked from your HTML head that allows you to define things like app icons, fullscreen display, splash screen, and more.

This is the point when I should confess that I haven’t worked with Progressive Web Apps yet. Luckily for me, this isn’t a “how-to” article. So for great details on how-to do this stuff, run an easy search, or for your convenience, read this nice technical article via MobiForge.

Either way, the idea is that if a user visits your page often enough within a certain time frame, the browser will ask the user if he or she would like to place the page on their home screen. Or, the user can simply add it to the home screen from the options menu in the mobile browser. That’s light-years better than having to open the browser, remember the URL, and load the page. I’m sure it’s a huge reason apps are winning on mobile right now.
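Just to make the manifest idea concrete (again, I haven't actually built one yet, so the names, colors, and icon path below are made up for illustration), a bare-bones manifest.json linked from your HTML head with a <link rel="manifest" href="manifest.json"> tag might look something like this:

{
  "name": "My Neat App",
  "short_name": "NeatApp",
  "start_url": "/index.html",
  "display": "fullscreen",
  "background_color": "#2196f3",
  "theme_color": "#2196f3",
  "icons": [
    {
      "src": "icons/icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}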

Step #2: Be an app even when offline

Secondly, we have "Service Workers." They sound nerdy and boring, and maybe they are, but the potential they open up for appifying a webpage is huge. Basically, you'd use a Service Worker to intercept a specific set of resources as the webpage fetches them. Yes, if the user is offline the first time they try to access the page, they're outta luck. However, once the Service Worker does intercept those resources over a connection, they'll be cached. You, the developer, control which files get cached via a Javascript array in the code. On subsequent loads, even if the user is offline, the page can load with your cached assets, whether they are images, Javascript, JSON, styles, or whatever. Here's a better technical description of how that works.
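Again, I haven't written one of these myself yet, but the gist of that flow looks roughly like the following sketch (the cache name and file list are placeholders):

// sw.js - a rough cache-first Service Worker sketch
var CACHE_NAME = 'my-app-cache-v1';
var FILES_TO_CACHE = [
  '/',
  '/index.html',
  '/styles.css',
  '/app.js'
];

self.addEventListener('install', function (event) {
  // cache the listed resources when the Service Worker installs
  event.waitUntil(
    caches.open(CACHE_NAME).then(function (cache) {
      return cache.addAll(FILES_TO_CACHE);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // serve from the cache when there's a match, otherwise hit the network
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});

And in the page's own Javascript, you register it:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}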

In fact, Google has published documentation and some tools on the similar notion of an “Application Shell Architecture” wherein persistent assets that don’t change can be cached, but dynamic content that isn’t cached will update.

What does this mean and will it all work?

Probably the most exciting thing about Progressive Web Apps is that both the Manifest and the Service Workers will not affect the web page negatively if the browser doesn’t support the features. This means that the worst you can do is waste time and JS code on something that doesn’t pan out as you hoped.

And there is some danger that it won’t work. You may have noticed that Facebook today uses push notifications with Service Workers and that they do increase engagement on their site. So that’s a win! Unfortunately, Service Workers and the Web App Manifest aren’t supported everywhere. Unsurprisingly, that means they’re pretty much everywhere but iOS/Safari. Even worse, browser vendors on iOS can’t use their own web engines to support the Progressive Web Apps — under the hood, both Chrome and Firefox have to use Safari tech.

Apple seems tight-lipped about whether they intend to adopt Progressive Web Apps at all. I’m going to say that for now, it doesn’t matter. If you’ve hung around the SF Bay Area enough, you may have noticed that many companies operate on an “iOS first, Android distant second” agenda. That doesn’t make sense in that Android devices far surpass iOS devices in sales. But it does make sense, in that iOS app sales are greater and it can be daunting to develop apps for the large ecosystem of Android devices on the market.

However you slice it, Android is second for developers, which is bad for consumers. Right now, many companies will adopt a Web + iOS + maybe Android strategy. If they can combine the Android + Web strategy with Progressive Web Apps AND not force folks through the Google App Store, it’ll be a huge win for everyone. I’m guessing Google probably doesn’t even care much about having an app store, save for the fact that it was necessary to maintain a mobile ecosystem.

Meanwhile, the point was made at this Meetup that with every additional step a user must go through to download an app, there’s around a 20% dropoff rate. Think about how many steps there are in clicking an app link, going to the store, starting the download, waiting for the install, and finally opening the app — many apps are losing out on users. And let’s face it, the app gold rush is over. There are some lottery winners still, but most apps are too costly to make and market to justify what they bring in return.

Progressive Web Apps short circuit that whole process by eliminating app discovery and install. While Android users will enjoy a huge user experience win, Apple will most likely try to maintain their stranglehold on their app store and come kicking and screaming only once web devs demand these new features.

What’s more, and what I’m really excited for, is our return to disposable digital experiences. Hate Adobe Flash or not, it really created a heyday for disposable experiences: Flash games to play a couple times and get bored with, nifty digital playgrounds, etc. It’s way harder to convince someone to download an app than it is to go to a webpage and pop it on their homescreen until they get bored of it in a week.

To extend, I think Progressive Web Apps will also be a huge boon for web-based Virtual Reality. Immersive experiences will come from many different places and frankly, will not be wanted as a permanent app install. Already, we’re seeing the rise of VR portals like MilkVR because smaller, one-off VR experiences need some kind of entry onto a device. When Progressive Web Apps make WebVR easier to get before eyes than an app portal, VR will win big.

To reiterate, I think Progressive Web Apps are the next big thing for mobile, potentially replace lots of simple apps, and will mark the return of fun, disposable experiences. I don’t have the technical experience with these new tools to back me up yet, but I will soon. Don’t take my word for it, though! Read up on it and try it yourself.

Here's another post from the aforementioned Alex Russell: https://infrequently.org/2015/06/progressive-apps-escaping-tabs-without-losing-our-soul/

ES6 Web Components Part 5 – Wrap-Up

In Part 4 of my 5-part write-up, Project Setup and Opinions, I talked about lessons I took away from experimenting with ES6 Web Components. Lastly, is my wrap-up post…

This was a monster write-up! In my four previous parts, I’ve shown you the basics on Web Components, what features make up a Web Component, how ES6 can help, and some coding conventions I’ve stumbled on through my experimentation.

That last sentence is my big caveat – it’s trial and error for me. I’m constantly experimenting and improving my workflow where it needs to be improved. Some pieces I’ve presented here, but I may come up with an even better way. Or worse, I may discover I showed you folks a really bad way to do something.

One particular thing to be cautious of is recognizing I’m not talking about cross-browser compatibility here. I have done a bit of research to show that, theoretically, things should work cross-browser, especially if you use the WebComponents.js polyfill. I have done a little testing in Firefox, but that’s it. I really haven’t tested in IE, Edge, Safari, et cetera. I’m lucky enough to be in a position right now at my job and in my personal experiments where I’m focusing on building in Chrome, Chromium, or Electron (built on Chromium). I’m trying to keep compatibility in mind; however, without a real effort to test in various browsers, you may run into issues I haven’t encountered.

It isn't all doom and gloom, though. WebComponents.js is used as the Google Polymer polyfill. It's why Polymer claims to have the cross-platform reach it has. See the support grid here for supported browsers.

Even better, as I complete this series, Webkit has just announced support for the Shadow DOM. This is fantastic, because the Shadow DOM is the hardest piece to polyfill. A while back, Polymer/WebComponents.js had removed polyfilled Shadow DOM support for CSS because it wasn’t very performant. Microsoft announced a while back that it’s working on the Shadow DOM, while Firefox has it hidden behind a flag.

All this is to say: if you take anything away from this series of blog posts on ES6 Web Components, take away ideas. Treat them as such. Don't take this to your team and say "Ben Farrell has solved it all; we're all in on Web Components." I truly hope everything I've said is accurate and a fantastic idea for you to implement, but don't risk your production project on it.

With all that said, aside from the implementation details, I do think Web Components are a huge leap forward in web development. It’s been encouraging me to use pure vanilla Javascript everywhere. I haven’t needed jQuery, syntactic sugar provided by a framework, nontraditional markup for binding – it’s all pure JS. I have been using JS techniques like addEventListener, querySelector, cloneNode, et cetera. Those are all core JS, CSS, and HTML concepts. When you understand them, you understand what every JS framework and library is built on. They transcend Angular, React, jQuery, Polymer, everything. They will help you learn why your favorite tool is built the way it is and why it has the shortcomings it does.

Not only am I building pure JS here, but I’m organizing my code into reusable and modular components – what every JS framework tries to give you.

For these reasons, I think there is huge potential in Web Components and I think it most likely represents what we’ll be doing as a community years from now, especially when (hopefully not if) all features of Web Components and ES6 are implemented in browsers everywhere.

As I said in my first post, I do like Google's Polymer a lot. But again, I strive to do less application-like things and more creative-like things. Therefore, MY components are fairly custom and don't need a library of Google's Material-designed elements. I've started a Github Org called Creative Code Web Components that contains a video player and a camera component that draw to the canvas so effects can be applied to their pixels. I've created a speech-input component as well, along with a pure ES6 Web Component slide deck viewer.

Those components are all in early stages, but for fabricating various creative projects, I feel like this is the right way forward for me. Thus far, I have a really modular set of pieces for creating a neat prototype or project.

Perhaps if you are doing a real business application, Polymer is great for you. Or React. Or Angular. Regardless, I think what I’ve been learning is great info for anyone in web dev today to have. I wouldn’t have written 10,000 words about it otherwise!

This has been my big 5-part post about creating Web Components with ES6. To view the entire thing, check out my first article.

ES6 Web Components Part 4 – Project Setup and Opinions

This article continues my ES6 Web Components series. The last article was the third in the series: Making an ES6 Component Class.

So far, the basics have been pretty….basic. I hope I’ve given some ideas on how to create ES6 Web Components – but these basics only go so far. I do have some opinions on how to take this further, but they are only opinions that have made sense to me. The beauty of this is that you can hear me out and decide for yourself if these ideas are good for you.

Project and File Setup

Let's start with dependencies. I like Babel to compile the ES6 and Gulp to run the tasks. Source maps are also a must in my book for debugging the ES6 source after it's been compiled to Javascript! Also, given that WebComponents.js has been so instrumental in providing cross-platform functionality, let's use that too.

Here’s a sample package.json file:

{
  "name": "ccwc-slideshow",
  "version": "0.1.0",
  "dependencies": {
    "webcomponents.js": "^0.7.2"
  },
  "devDependencies": {
    "babel": "^5.8.21",
    "gulp": "^3.9.0",
    "gulp-babel": "^5.2.0",
    "gulp-sourcemaps": "^1.5.2"
  }
}

Next up is Gulp. I have nothing against Grunt…but I use Gulp. Frankly I stopped caring about the battle of the task runners and landed on the last one that I tried and liked. There probably won’t be too many tasks – I just need to compile the ES6 to Javascript. However, I may have multiple components in my repo. As a result, I’ll have a compile task per component. Here’s a sample Gulpfile:

var gulp = require('gulp');
var sourcemaps = require('gulp-sourcemaps');
var babel = require('gulp-babel');
var concat = require('gulp-concat');

gulp.task('slide', function () {
  return gulp.src('src/ccwc-slide.es6')
    .pipe(sourcemaps.init())
    .pipe(babel())
    .pipe(concat('ccwc-slide.js'))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('./src'));
});

gulp.task('slideshow', function () {
  return gulp.src('src/ccwc-slideshow.es6')
    .pipe(sourcemaps.init())
    .pipe(babel())
    .pipe(concat('ccwc-slideshow.js'))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('./src'));
});

gulp.task('default', ['slide', 'slideshow']);

One last Javascript note: I like my ES6 files to have an .es6 extension, and to live in a "src" folder. Some folks seem to be using .js and then compiling them to "mycomponent-compiled.js". I don't like this for a couple of reasons. First, it's not obvious that your source JS file is ES6, and secondly, I kinda think it's silly to force devs to use a non-obvious naming convention when including a script. When you make your script tag, you should link to "mycomponent.js". Not "mycomponent dot, ummm… what was my naming convention last week?".

Your Web Component HTML files should live in your project root. When you link to a Web Component, you shouldn’t need to remember what folder you put your stuff in…it should be a simple and easy to remember “mycomponent/mycomponent.html”.

Lastly, your demo is important! A Web Component should demonstrate use! When I started out, I was making a "demo" folder in my component root and putting an index.html or demo.html file in there. There's a problem with this, though: if you use images (or other assets), the relative path to your image from the demo folder will be different than it is during actual use of your component. Bummer. So I like to put a "demo.html" usage example in my component root. I still have a demo folder, but this folder contains any assets needed to run the demo that aren't really part of your component (like JSON data).

Actually – one more. This is the last one for real: documentation for your component. I didn't think about it here because I haven't even thought of doing it for my own components yet. My bad. My horrible, horrible bad. Google's Polymer actually has a very nice self-documenting structure, which is very sweet. Maybe someday I'll base whatever I plan to do about docs on that.

Here’s a sample project structure of a component I made for showing a slide deck. You’ll notice 2 components here. One is the slide deck viewer, and one is a component to show a single slide. The first uses the second inside it and it all works together. I have some sample slide deck contents in my demo folder:

[Image: sample component folder structure]
You'll notice that I have the compiled .js.map files and the .js files here, too. I check these in to source control. I always feel a little icky about checking in compiled files. For one, they don't NEED to be in source control, since they are generated and don't need to be diffed. Secondly, you don't want to allow people to edit these files instead of editing the ES6 files. Lastly, I occasionally forget to build before checking in! Sometimes only the ES6 files get checked in, and I'm left doing another commit of the compiled files when I remember that I didn't build.

All that said, I DO check these compiled files in. For my workflow, I want these files instantly usable after NPM installing my component. Forcing a compile step for an NPM module and requiring dev dependencies seems like an unnecessary burden on the end user. I'm always trying to think of ideas that would make me happy here on all counts, but I haven't yet.

Component Class and Method Conventions

I’ve already documented the bare minimum of methods in your Web Component class. These include: “attachedCallback”, “createdCallback”, “detachedCallback”, and “attributeChangedCallback”. These, however, are just HTMLElement lifecycle callbacks. I have some other methods of my own I like to consistently use (all invented by me, of course, and not part of any spec).

setProperties

In ES6, there is no way to declare properties directly on the class inline. Properties can only be set on the class from within a method. So I made my own convention that I consistently use: my "setProperties" method initializes all of the variables that the class will use. In other languages, public/private variables would be declared at the top of a class. In ES6, I use my "setProperties" method for this and give my variables a little extra documentation/commenting.

parseAttributes

Once the component is created or attached, you may want to look at the attributes on your component tag to read in some initial component parameters. You could potentially scatter these all over your code. I like to read them all in one spot: “parseAttributes”.
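As a rough sketch (the attribute names here are just examples I made up, not from any spec), mine tend to look like this:

/**
 * parse attributes on the component tag
 */
parseAttributes() {
    // read initial parameters off the tag, falling back to the defaults set in setProperties
    if (this.hasAttribute('deck')) {
        this.deck = this.getAttribute('deck');
    }
    if (this.hasAttribute('next-slide-key')) {
        this.nextSlideKey = parseInt(this.getAttribute('next-slide-key'), 10);
    }
}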

registerElements and the “dom” object

I really dug Polymer's "$" syntax. Anything in your component's HTML that had an ID, you could access with "this.$.myelement". Well, in my DIY Web Component world, I can't just magically expect to access this. I COULD querySelector('#myelement') every time, but it's more performant to save these references to variables if you're using them often. It also creates more readable code to save your important element references in well-named variables. At the same time, though, it might be confusing to mix elements on your root "this" scope with other variables that aren't elements.

So here’s what I do…

When I have a bunch of stuff that I want to reference in the imported HTML template at the very start, like buttons, text fields, whatever, I’ll run my custom method “registerElements” in the attachedCallback after appending the template to my Shadow Root.

In “registerElements”, I create a new object called “dom” on my root scope “this” (this.dom = {};). I then querySelector any elements I want and populate “this.dom.myelement” with the references. Elsewhere in my code, I can reference each one like a normal variable (but I know it’s a DOM element, since it lives in my “this.dom” object).
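
In sketch form, this is exactly what the example at the end of this post does:

class CCWCSlideShow extends HTMLElement {
  /**
   * register dom elements
   */
  registerElements() {
    // one querySelector per element, saved once, referenced everywhere after
    this.dom = {};
    this.dom.slideviewer = this.root.querySelector('#slideviewer');
    this.dom.slideinfo = this.root.querySelector('.infobar .slides');
    this.dom.runtime = this.root.querySelector('.infobar .runtime');
  }
}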

root

One last thing I do consistently…and this is not a method but a property…is using a custom variable “root” to represent the Shadow DOM. So when I want to use querySelector on an element, I use “this.root.querySelector('myelement')”. I COULD just call it “shadow”. However, there have been a couple of times I’ve been wishy-washy about using the Shadow DOM at all, and I can just set “this.root” to the host element, or even the document, if I wanted. This way I can swap what “root” points to whenever I choose and keep my code pretty much the same.
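
For instance, here’s how the attachedCallback in the example below wires it up (the commented-out line is just a sketch of the no-Shadow-DOM variant, not something the component below actually does):

class CCWCSlideShow extends HTMLElement {
  // Fires when an instance was inserted into the document.
  attachedCallback() {
    let template = this.owner.querySelector('template');
    let clone = document.importNode(template.content, true);
    this.root = this.createShadowRoot();
    // or, if I get wishy-washy about the Shadow DOM: this.root = this;
    this.root.appendChild(clone);
  }
}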

An Example

I’ll leave you with a complete example of my Web Component that functions as a slide deck viewer. Remember, the slide inside is a Web Component of its own! In my next post, I’ll wrap this whole thing up and link you all to my real components.


class CCWCSlideShow extends HTMLElement {
  setProperties() {
    /**
     * slides deck
     *
     * @property deck
     * @type string
     */
    this.deck = '';

    /**
     * next slide key mapping
     *
     * @property nextSlideKey
     * @type integer
     */
    this.nextSlideKey = 39; // right arrow key

    /**
     * previous slide key mapping
     *
     * @property previousSlideKey
     * @type integer
     */
    this.previousSlideKey = 37; // left arrow key

    /**
     * toggle timer key mapping
     *
     * @property toggleTimerKey
     * @type integer
     */
    this.toggleTimerKey = 84; // "t" key

    /**
     * timer start time
     *
     * @property timerStartTime
     * @type Number
     */
    this.timerStartTime = 0;

    /**
     * current slide/chapter
     *
     * @property current
     * @type object
     */
    this.current = { chapter: 0, slide: 0 };

    /**
     * running
     * is slide deck running (being timed)
     * @property running
     * @type boolean
     */
    this.running = false;

    /**
     * slides
     *
     * @property slides
     * @type array
     */
    this.slides = [];
  };

  /**
   * register dom elements
   */
  registerElements() {
    this.dom = {};
    this.dom.slideviewer = this.root.querySelector('#slideviewer');
    this.dom.slideinfo = this.root.querySelector('.infobar .slides');
    this.dom.runtime = this.root.querySelector('.infobar .runtime');
  };


  /**
   * init
   *
   * @method init
   */
  init() {
    this.loadDeck('./deck/manifest.json');
    document.addEventListener('keyup', event => this.onKeyPress(event) );

    setInterval( () => {
      if (this.running) {
        var duration = Math.floor((new Date().getTime() - this.timerStartTime) / 1000);
        var totalSeconds = duration;
        var hours = Math.floor(totalSeconds / 3600);
        totalSeconds %= 3600;
        var minutes = Math.floor(totalSeconds / 60);
        var seconds = totalSeconds % 60;
        if (seconds.toString().length == 1) {
          seconds = "0" + seconds;
        }
        if (minutes.toString().length == 1) {
          minutes = "0" + minutes;
        }
        this.dom.runtime.innerText = hours + ":" + minutes + ":" + seconds;
      }
    }, 1000);
  };

  /**
   * toggle timer
   *
   * @method toggleTimer
   */
  toggleTimer() {
    this.running = !this.running;
    if (this.timerStartTime === 0) {
      this.timerStartTime = new Date().getTime();
    }
  };

  /**
   * on keypress
   * @param event
   */
  onKeyPress(event) {
    switch(event.keyCode) {
      case this.nextSlideKey:
            this.nextSlide();
            break;

      case this.previousSlideKey:
            this.previousSlide();
            break;

      case this.toggleTimerKey:
            this.toggleTimer();
            break;
    }
  }

  /**
   * load chapter in slide deck
   * @param index
   * @param name
   * @param uri
   */
  loadChapter(index, name, uri) {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = () => {
      if (xmlhttp.readyState == 4) {
        if (xmlhttp.status == 200) {
          var chapter = JSON.parse(xmlhttp.responseText);
          chapter.index = index;
          chapter.name = name;
          this.chapters.push(chapter);
          this.chapters.sort(function(a, b) {
            if (a.index > b.index) { return 1; } else { return -1; }
          });
          this.manifest.slideCount += chapter.slides.length;
          this.goSlide(0, 0);
        }
      }
    };
    xmlhttp.open("GET", uri, true);
    xmlhttp.send();
  };

  /**
   * load deck
   * @param uri of manifest
   */
  loadDeck(uri) {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = () => {
      if (xmlhttp.readyState == 4) {
        if (xmlhttp.status == 200) {
          this.manifest = JSON.parse(xmlhttp.responseText);
          this.manifest.slideCount = 0;
          this.dom.slideviewer.imgpath = this.manifest.baseImagePath;
          this.dom.slideviewer.htmltemplatepath = this.manifest.baseHTMLTemplatePath;

          this.chapters = [];
          for (var c = 0; c < this.manifest.content.length; c++) {
            // assuming each manifest content entry provides a "name" for its chapter
            this.loadChapter(c, this.manifest.content[c].name, this.manifest.content[c].file);
          }
        }
      }
    };

    xmlhttp.open("GET", uri, true);
    xmlhttp.send();
  };

  /**
   * next slide
   */
  nextSlide() {
    this.current.slide ++;
    if (this.current.slide >= this.chapters[this.current.chapter].slides.length) {
      this.current.slide = 0;
      this.current.chapter ++;

      if (this.current.chapter >= this.chapters.length) {
        this.current.chapter = 0;
      }
    }
    this.goSlide(this.current.chapter, this.current.slide);
  };

  /**
   * previous slide
   */
  previousSlide() {
    this.current.slide --;
    if (this.current.slide < 0) {
      this.current.chapter --;

      if (this.current.chapter < 0) {
        this.current.chapter = this.chapters.length - 1;
      }
      this.current.slide = this.chapters[this.current.chapter].slides.length - 1;
    }
    this.goSlide(this.current.chapter, this.current.slide);
  };

  /**
   * go to slide
   * @param {int} index of chapter
   * @param {int} index of slide
   */
  goSlide(chapter, slide) {
    this.current.chapter = chapter;
    this.current.slide = slide;

    var slidecount = slide;
    for (var c = 0; c < chapter; c++) {
      slidecount += this.chapters[c].slides.length;
    }

    this.dom.slideinfo.innerText = 'Chapter ' + (chapter+1) + '.' + (slide+1) + '    ' + (slidecount + 1) + '/' + this.manifest.slideCount;
    this.dom.slideviewer.clear();
    var sld = this.chapters[chapter].slides[slide];

    if (sld.htmlinclude) {
      this.dom.slideviewer.setHTML(sld.htmlinclude);
    }

    if (sld.webpage) {
      this.dom.slideviewer.setIFrame(sld.webpage);
    }

    if (sld.text) {
      sld.text.forEach( item => {
        this.dom.slideviewer.setText(item.html, item.region);
      });
    }

    if (sld.images) {
      sld.images.forEach( item => {
        this.dom.slideviewer.setImage(item.image, item.region);
      });
    }

    if (sld.background) {
      this.dom.slideviewer.setBackgroundImage(sld.background, sld.backgroundProperties);
    }
  };

  /**
   * getter for the slide viewer element
   *
   * @return slide viewer element
   */
  getSlideComponent() {
    return this.dom.slideviewer;
  };

  /**
   * get HTML include elements by class name
   *
   * @param {string} clazz class name
   * @return {array} matching elements
   */
  getHTMLIncludeElementsByClass(clazz) {
    return this.getSlideComponent().getHTMLIncludeElementsByClass(clazz);
  };

  // Fires when an instance was removed from the document.
  detachedCallback() {};

  // Fires when an attribute was added, removed, or updated.
  attributeChangedCallback(attr, oldVal, newVal) {};

  /**
   * parse attributes on element
   */
  parseAttributes() {
    if (this.hasAttribute('deck')) {
      this.deck = this.getAttribute('deck');
    }

    if (this.hasAttribute('nextSlideKey')) {
      this.nextSlideKey = parseInt(this.getAttribute('nextSlideKey'));
    }

    if (this.hasAttribute('previousSlideKey')) {
      this.previousSlideKey = parseInt(this.getAttribute('previousSlideKey'));
    }

    if (this.hasAttribute('toggleTimerKey')) {
      this.toggleTimerKey = parseInt(this.getAttribute('toggleTimerKey'));
    }
  };


  // Fires when an instance of the element is created.
  createdCallback() {
    this.setProperties();
    this.parseAttributes();
  };

  // Fires when an instance was inserted into the document.
  attachedCallback() {
    let template = this.owner.querySelector('template');
    let clone = document.importNode(template.content, true);
    this.root = this.createShadowRoot();
    this.root.appendChild(clone);
    this.registerElements();
    this.init();
  };
}

if (document.createElement('ccwc-slideshow').constructor !== CCWCSlideShow) {
  CCWCSlideShow.prototype.owner = (document._currentScript || document.currentScript).ownerDocument;
  document.registerElement('ccwc-slideshow', CCWCSlideShow);
}
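
Finally, a small usage sketch. It assumes the component’s HTML import (the template plus this script) is already loaded on the consuming page:

// The simplest usage is just the tag in your markup:
//
//   <ccwc-slideshow deck="./deck/manifest.json"></ccwc-slideshow>
//
// Or from script. Keep in mind that parseAttributes runs in createdCallback,
// so attributes you want read at startup should be present at creation time;
// this particular component's init() loads './deck/manifest.json' regardless.
var show = document.createElement('ccwc-slideshow');
document.body.appendChild(show);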

Continue on to the conclusion of my ES6 Web Component Series