A-Bad attitude about A-Frame (I was wrong)

I haven’t written many posts lately, especially tech related. The reason is that I’ve been chasing the mixed reality train and learning. 3D development in general has had a number of false starts with me, and I never dove into the deep end until now. This past year or so, I’ve been using both Unity and WebVR.

My failed 3D career

Prior to this, I went through a Shockwave 3D phase in the early 2000s. People forgot about that REALLLLLLL quick. Likewise when I got all excited about Papervision3D, the CPU-based 3D engine for Flash (not made by Macromedia, but built on top of Flash): I remember learning it, and then the hype died down to zero within a few months. And then of course, there was Adobe Flash’s Stage3D. But as you might recall, that was about the time that Steve Jobs took the wind out of Flash’s sails and it was knocked down several pegs in the public eye.

Whatever your opinion on Director or Flash, it doesn’t matter. Approachable 3D development never felt like it had a fair shot (sorry, OpenGL/C++ just isn’t approachable to me or lots of people). In my opinion, there were two prime reasons for this. The first is that GPUs are really only now standard on everyone’s machine. I remember having to worry about how spectacularly crappy things would look with CPU rendering as a fallback. Secondly, and this is huge: visual tooling.

Don’t believe me? I think the huge success of Unity proves my point. Yes, C# is approachable, but also, being able to place objects in a 3D viewport and wire up behaviors via a visual property panel is huge. Maybe seasoned developers will eventually move away from this approach, but it offers a learning curve that isn’t a 50ft-high rock face in front of you.

Unity is fantastic, for sure. But after being a Flash developer in 2010 and watching my career seemingly crumble around me, I’m not going to be super quick to sign up for the same situation with a different company.

Hello, WebVR

So, enter WebVR. It’s Javascript and WebGL based, so I can continue being a web developer and use my existing skills. It’s also bleeding edge, so it’s not like I have to worry about IE (hint: VR will never work on IE). In fact, Edge’s Mixed Reality support is currently the only publicly released implementation of WebVR (Firefox and Chrome ship WebVR only in experimental builds)! Point being, all those new ES6 features I dig? I can use them freely without worrying about polyfills (I do polyfill for import, though… but that’s another article for another time).

Those of us who were excited about WebVR early on, probably used Three.js with some extensions. As excitement picked up steam, folks started packaging up Three.js with polyfills to support everything from regular desktop mouse interaction, to Google Cardboard, to the Oculus Rift and Vive, all with the same experience with little effort from the developer.

I found that object oriented development with ES6 classes driving Three.js made a lot of sense. If you take a peek at any of the examples in Three.js, they are short, but the code is kind of an unorganized mess. This is certainly forgivable for small examples, but not for big efforts that I might want to try.

So, I was pretty happy here for a while. Having a nice workflow that you ironed out doesn’t necessarily make you the most receptive to even newer ways, especially those that are super early and rough around the edges.

Enter A-Frame

I believe it was early last year (2016), while I was attending some meetups and conference sessions on WebVR, that Mozilla made a splash with A-Frame. Showing such an early release of A-Frame was a double edged sword. On the plus side, Mozilla was showing leadership in the WebVR space and getting web devs and designers interested in the promises of approachable, tag based 3D and VR. The down side was that people like me who were ALREADY interested in WebVR and already had a decent approach for prototyping were shown an alpha quality release with a barely functional inspection and visual editing tool that didn’t seem to offer anything better than the Three.js editor.

Not only was I unexcited about it, I also reasoned that the whole premise of A-Frame was silly. Why would a sane person find value in HTML tags for 3D elements?

Months passed, and I was happy doing my own thing without A-Frame. I even made a little prototyping library based on existing WebVR polyfills with an ES6 class based approach for 3D object and lifecycle management. It was fairly lightweight, but it worked for a couple prototypes I was working on.

A-Frame Round 2

Right around when A-Frame released 0.4 or 0.5, the San Francisco HTML5 meetup group invited the team back for another session at another WebVR event. The A-Frame community had grown. There were a crazy number of components their community had built, because… hey, A-Frame is extensible now (I didn’t know that!). The A-Frame visual inspector/editor is now really nice, and accessible as a debug tool from any A-Frame scene as you develop it. Based on the community momentum alone, I knew I had to take a second look.

To overcome my bad A-Frame attitude, I carved out a weekend to accomplish two goals:

  • Reason out an organized and scalable workflow that doesn’t look like something someone did in 2007 with jQuery
  • Have a workflow where tags are completely optional

I actually thought these might be unreasonable goals, and that I was simply going to prove they couldn’t be met.


As I mentioned briefly, I had my own library I was using for prototyping. Like I said, it was basically a package of some polyfills that had already been created for WebVR, with some nice ES6 class based organization around it.

I knew that A-Frame was built much the same way – on top of Three.js with the same polyfills (though slightly modified). What I didn’t count on was that our approach to everything was so similar that it took me just a few hours to completely swap my entire foundational scene out for their <a-scene> tag, and…. it…worked.

This blew my mind, because I had my own 3D objects and groups created with Three.js and the only tag I put on my HTML page was that <a-scene> tag.

Actually, there were a few hiccups along the way, but given that I was shoving what I thought was a square peg into a round hole, two minor code changes are nothing.

My approach is like so:

Have a “BaseApplication” ES6 class. This class would be extended for your application. It used to be that I’d create the underlying Three.js scene here in the class, but with A-Frame, I simply pass the <a-scene> element to the constructor and go from there. One important application or 3D object lifecycle event is to get a render call every frame so you can take action and do animation, interaction, etc. Prior to A-Frame, I just picked this event up from Three.js.

Like I said, two hiccups. First, my application wasn’t rendering its children, and I didn’t know how to pick up the render event every frame. Easy. First, pretend the class is an element by assigning an “el” property and setting it to playing:

this.el = { isPlaying: true };

Next, simply register this class with the A-Frame scene behaviors like this:
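The snippet for that registration is a one-liner. A sketch of it, assuming A-Frame’s addBehavior() method on the <a-scene> element (the hook its render loop uses to tick registered behaviors); "ascene" here stands in for the scene element passed to the constructor:

```javascript
// Sketch of the registration step (assumes A-Frame's scene addBehavior API).
class BaseApplication {
  constructor(ascene) {
    this._ascene = ascene;
    this.el = { isPlaying: true };   // pretend to be an element so the scene will tick us
    this._ascene.addBehavior(this);  // our tick(time) now fires every frame
  }
  tick(time) { /* per-frame work */ }
}
```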


Once this behavior is added, if your class has a “tick” method, it will be fired:

/**
 * a-frame tick
 * @param time
 */
tick(time) {
    this.onRender(time);
}

Likewise, for any objects you add to the scene that you want to have these tick methods, simply add them to the behavior system in the same way.

In the end, my hefty BaseApplication.js class that instantiated a 3D scene, plugins, and polyfills was chopped down to something 50 lines long (and I DO use block comments).

export default class BaseApplication {
    constructor(ascene, cfg) {
        if (!cfg) {
            cfg = {};
        }
        this._ascene = ascene;
        this._ascene.appConfig = cfg;

        // pretend to be an element so the scene's behavior system will tick us
        this.el = { isPlaying: true };
    }

    get config() {
        return this._ascene.appConfig;
    }

    /**
     * a-frame tick
     * @param time
     */
    tick(time) {
        this.onRender(time);
    }

    /**
     * add objects to scene
     * @param grouplist
     */
    add(grouplist) {
        if (grouplist.length === undefined) {
            grouplist = [grouplist];
        }
        for (var c in grouplist) {
            if (grouplist[c].group) {
                // my custom ES6 class based object group: add its Three.js group
                this._ascene.object3D.add(grouplist[c].group);
            } else {
                // a plain A-Frame element: just append it
                this._ascene.appendChild(grouplist[c]);
            }
        }
    }

    // meant to be overridden with your app
    onCreate(ascene) {}
    onRender(time) {}
}
As you might be able to tell, the only verbose part is the method to add children where I determine what kind of children they are: A-Frame elements, or my custom ES6 class based Object Groups.

How I Learned to Love Markup

So, at this point I said to myself… “Great! I still really think markup is silly, but A-Frame has a team of people that will keep up with WebVR and update the basics as browsers and the spec evolve, and I should just use their <a-scene> and ignore most everything else.”

Then, I hit ctrl-alt-i.

For those that don’t know, this loads the A-Frame visual inspector and editor. Though, of course, it won’t save your changes into your code. Let me say first, the inspector got reallllllly nice and is imperative for debugging your scene. The A-Frame team is forging ahead with more amazing features like recording your interactions in VR so you can replay them at your desk and do development without constantly running around.

So, when I loaded that inspector for the first time, I was disappointed that I didn’t see any of my objects. I can’t fault A-Frame for this, I completely bypassed their tags.

That alone roped me in. We have this perfectly nice visual inspector, and I’m not going to deny myself use of it just because I can’t be convinced to write some HTML.

Using Tags with Code

At this point, me and A-Frame are BFFs. But I still want to avoid a 2008-era jQuery mess. Well, it turns out 3D object instantiation is about as easy in A-Frame as it is with code. It’s easier, actually, because tags are concise, whereas instantiating materials, primitives, textures, etc., can get pretty verbose.

My whole perspective has been flipped now.

  • Step 1: Create the element, either by setting its innerHTML to whatever you need, or by using createElement and setting attributes individually
  • Step 2: appendChild to the scene (or another A-Frame element)

That’s it. I’m actually amazed how responsive the 3D scene is for appending and removing elements. There’s no “refresh” call, nothing. It just works.

I actually created a little utility method to parse JSON into an element that you could append to your scene:

sceneElement.appendChild(AFrameGroup.utils.createNode('a-entity', {
    'scale': '3 3 3',
    'obj_model': 'obj: ./assets/flag/model.obj; mtl: ./assets/flag/materials.mtl',
    'position': '0 -13 0'
}));

AFrameGroup.utils.createNode = function (tagname, attributes) {
    var el = document.createElement(tagname);
    for (var c in attributes) {
        var key = c.replace(/_/g, '-'); // hyphens not cool in JSON keys, use underscores and convert to hyphens here
        el.setAttribute(key, attributes[c]);
    }
    return el;
};

Yah, there’s some stuff I don’t like all that much. For example, if I want to change the position of an object, I have to go through element.setAttribute('position', '0 50 0'). Seems a bit verbose, but I’ll take it.
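For what it’s worth, a tiny helper takes some of the sting out of that string format. This is hypothetical (my own name, not part of A-Frame): it just builds the space-delimited “x y z” string A-Frame attributes expect.

```javascript
// Hypothetical helper (not an A-Frame API): format a vector as the
// space-delimited string that A-Frame component attributes expect.
function vec3(x, y, z) {
  return x + ' ' + y + ' ' + z;
}

// element.setAttribute('position', vec3(0, 50, 0));
```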

A-Happy Prototyper

Overall, the markup aspect, early versions, and lack of organization/cleanliness in the code examples made me sad. But examples are just examples. I can’t fault them for highlighting simple examples that don’t scale well as an application when they intend to showcase quick experiences. A-Frame wants to be approachable, and if I yammer on to people about my ES6 class based approach with extendable BaseApplication and BaseGroup/Object classes, yes, I might appeal to some folks, but the real draw of A-Frame right now is for newcomers to fall in love with markup that easily gets them running and experience their own VR creations.

All that said, I did want to share my experience for the more seasoned web dev crowd because if you peel back the layers in A-Frame you’ll find an extremely well made library that proves to be very flexible for however you might choose to use it.

I’m not sure whether I want to link you folks to my library that’s in progress yet, but I’ll do it anyway. It’s majorly in flux, and I keep tearing stuff out as I see better ways to do things in A-Frame. But it’s helping me prototype and keep helper code clearly separated from my prototype logic, so maybe it’ll help you (just don’t rely on it!)



ES6 Web Components Part 5 – Wrap-Up

In Part 4 of my 5-part write-up, Project Setup and Opinions, I talked about lessons I took away from experimenting with ES6 Web Components. Last up is my wrap-up post…

This was a monster write-up! In my four previous parts, I’ve shown you the basics on Web Components, what features make up a Web Component, how ES6 can help, and some coding conventions I’ve stumbled on through my experimentation.

That last sentence is my big caveat – it’s trial and error for me. I’m constantly experimenting and improving my workflow where it needs to be improved. Some pieces I’ve presented here, but I may come up with an even better way. Or worse, I may discover I showed you folks a really bad way to do something.

One particular thing to be cautious of: I’m not talking about cross-browser compatibility here. I have done a bit of research to show that, theoretically, things should work cross-browser, especially if you use the WebComponents.js polyfill. I have done a little testing in Firefox, but that’s it. I really haven’t tested in IE, Edge, Safari, et cetera. I’m lucky enough to be in a position right now at my job and in my personal experiments where I’m focusing on building in Chrome, Chromium, or Electron (built on Chromium). I’m trying to keep compatibility in mind; however, without a real effort to test in various browsers, you may run into issues I haven’t encountered.

It isn’t all doom and gloom, though. WebComponents.js is used as the Google Polymer polyfill. It’s why Polymer claims to have the cross-platform reach it has. See the support grid here for supported browsers.

Even better, as I complete this series, Webkit has just announced support for the Shadow DOM. This is fantastic, because the Shadow DOM is the hardest piece to polyfill. A while back, Polymer/WebComponents.js had removed polyfilled Shadow DOM support for CSS because it wasn’t very performant. Microsoft announced a while back that it’s working on the Shadow DOM, while Firefox has it hidden behind a flag.

All this is to say, if you take anything away from this series of blog posts on ES6 Web Components, take away ideas. Treat them as such. Don’t take this to your team and say “Ben Farrell has solved it all; we’re all in on Web Components.” I truly hope everything I’ve said is accurate and a fantastic idea for you to implement, but don’t risk your production project on it.

With all that said, aside from the implementation details, I do think Web Components are a huge leap forward in web development. It’s been encouraging me to use pure vanilla Javascript everywhere. I haven’t needed jQuery, syntactic sugar provided by a framework, nontraditional markup for binding – it’s all pure JS. I have been using JS techniques like addEventListener, querySelector, cloneNode, et cetera. Those are all core JS, CSS, and HTML concepts. When you understand them, you understand what every JS framework and library is built on. They transcend Angular, React, jQuery, Polymer, everything. They will help you learn why your favorite tool is built the way it is and why it has the shortcomings it does.

Not only am I building pure JS here, but I’m organizing my code into reusable and modular components – what every JS framework tries to give you.

For these reasons, I think there is huge potential in Web Components and I think it most likely represents what we’ll be doing as a community years from now, especially when (hopefully not if) all features of Web Components and ES6 are implemented in browsers everywhere.

As I said in my first post, I do like Google’s Polymer a lot. But again, I strive to do less application-like things and more creative-like things. Therefore, MY components are fairly custom and don’t need a library of Google’s Material-designed elements. I’ve started a Github Org called Creative Code Web Components that contains a video player and camera that draw to the canvas and effects can be created for them on the pixels. I’ve created a speech-input component as well, along with a pure ES6 Web Component slide deck viewer.

Those components are all in early stages, but for fabricating various creative projects, I feel like this the right way forward for me. Thus far, I have a real modular set of pieces for creating a neat prototype or project.

Perhaps if you are doing a real business application, Polymer is great for you. Or React. Or Angular. Regardless, I think what I’ve been learning is great info for anyone in web dev today to have. I wouldn’t have written 10,000 words about it otherwise!

This has been my big 5-part post about creating Web Components with ES6. To view the entire thing, check out my first article.

ES6 Web Components Part 4 – Project Setup and Opinions

This article continues my ES6 Web Components series. The last article was the third in the series: Making an ES6 Component Class.

So far, the basics have been pretty….basic. I hope I’ve given some ideas on how to create ES6 Web Components – but these basics only go so far. I do have some opinions on how to take this further, but they are only opinions that have made sense to me. The beauty of this is that you can hear me out and decide for yourself if these ideas are good for you.

Project and File Setup

Let’s start with dependencies. I like Babel to compile the ES6 and Gulp to do the tasks. Source maps are also a must in my book for debugging the ES6 once it’s compiled to Javascript! Also, given that WebComponents.js has been so instrumental in providing cross-platform functionality, let’s use that too.

Here’s a sample package.json file:

{
  "name": "ccwc-slideshow",
  "version": "0.1.0",
  "dependencies": {
    "webcomponents.js": "^0.7.2"
  },
  "devDependencies": {
    "babel": "^5.8.21",
    "gulp": "^3.9.0",
    "gulp-babel": "^5.2.0",
    "gulp-sourcemaps": "^1.5.2"
  }
}
Next up is Gulp. I have nothing against Grunt…but I use Gulp. Frankly I stopped caring about the battle of the task runners and landed on the last one that I tried and liked. There probably won’t be too many tasks – I just need to compile the ES6 to Javascript. However, I may have multiple components in my repo. As a result, I’ll have a compile task per component. Here’s a sample Gulpfile:

var gulp = require('gulp');
var sourcemaps = require('gulp-sourcemaps');
var babel = require('gulp-babel');
var concat = require('gulp-concat');

gulp.task('slide', function () {
  return gulp.src('src/ccwc-slide.es6')
    .pipe(sourcemaps.init())
    .pipe(babel())
    .pipe(concat('ccwc-slide.js'))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('.'));
});

gulp.task('slideshow', function () {
  return gulp.src('src/ccwc-slideshow.es6')
    .pipe(sourcemaps.init())
    .pipe(babel())
    .pipe(concat('ccwc-slideshow.js'))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('.'));
});

gulp.task('default', ['slide', 'slideshow']);

One last Javascript note: I like to have an .es6 extension on my ES6 files, and for those to live in a “src” folder. Some folks seem to be using .js, and then compiling them to “mycomponent-compiled.js”. I don’t like this for a couple of reasons. First, it’s not obvious that your source JS file is ES6, and secondly, I kinda think it’s silly to force devs to use a non-obvious naming convention when including a script. When you make your script tag, you should link to “mycomponent.js”. Not “mycomponent dot, ummm… what was my naming convention last week?”.

Your Web Component HTML files should live in your project root. When you link to a Web Component, you shouldn’t need to remember what folder you put your stuff in…it should be a simple and easy to remember “mycomponent/mycomponent.html”.
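That is, linking to a component via an HTML Import (the loading mechanism these 2016-era components used) should read like this, with the folder and name purely illustrative:

```html
<link rel="import" href="mycomponent/mycomponent.html">
```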

Lastly, your demo is important! A Web Component should demonstrate its use! When I started out, I was making a “demo” folder in my component root and putting an index.html or demo.html file in there. There’s a problem with this, though: if you use images (or other assets), the relative path to your image will be different from the demo folder than it is during actual use of your component. Bummer. So I like to put a “demo.html” usage example in my component root. I still have a demo folder, but this folder contains any assets that support running the demo and aren’t really part of your component (like JSON data).

Actually, one more. This is the last one for real. Documentation for your component. I didn’t cover it here because I haven’t even gotten to it for my own components yet. My bad. My horrible, horrible bad. Google’s Polymer actually has a very nice self-documenting structure, which is very sweet. Maybe someday I’ll base whatever I plan to do about docs on that.

Here’s a sample project structure for a component I made to show a slide deck. You’ll notice two components here. One is the slide deck viewer, and one is a component to show a single slide. The first uses the second inside it, and it all works together. I have some sample slide deck contents in my demo folder:
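Roughly sketched out, following the conventions above (file names illustrative):

```
ccwc-slideshow/
  ccwc-slideshow.html     component HTML (import target)
  ccwc-slideshow.js       compiled ES6 (checked in)
  ccwc-slideshow.js.map   source map
  ccwc-slide.html
  ccwc-slide.js
  ccwc-slide.js.map
  src/
    ccwc-slideshow.es6    ES6 source
    ccwc-slide.es6
  demo.html               usage example in the component root
  demo/
    deck/                 sample slide deck content (JSON, assets)
  gulpfile.js
  package.json
```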

You’ll notice that I have the compiled .js.map files and the .js files here, too. I check these in to source control. I always feel a little icky about checking in compiled files. For one, they don’t NEED to be in source control since they are generated and don’t need to be diffed. Secondly, you don’t want to allow people to edit these files instead of editing the ES6 files. Lastly, I am occasionally forgetful about building before checking in! Sometimes only the ES6 files get checked in, and I’m left doing another commit of the compiled files when I remember that I didn’t build.

All that said, I DO check these compiled files in. For my workflow, I want these files instantly usable after NPM installing my component. Forcing a compile step for an NPM module and requiring dev dependencies seems like an unnecessary burden on the end user. I’m always trying to think of ideas to make myself happy on all counts here, but I haven’t yet.

Component Class and Method Conventions

I’ve already documented the bare minimum of methods in your Web Component class. These include: “attachedCallback”, “createdCallback”, “detachedCallback”, and “attributeChangedCallback”. These, however, are just HTMLElement lifecycle callbacks. I have some other methods of my own I like to consistently use (all invented by me, of course, and not part of any spec).


setProperties

In ES6, there is no way to attach properties directly to the class inline; properties can only be set on the class from within a method. So I made my own convention that I consistently use. My “setProperties” method initializes all of the variables the class will use. In other languages, public/private variables would be declared at the top of a class. In ES6, I use my “setProperties” method for this and give my variables a little extra documentation/commenting.
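A minimal sketch of the convention on a plain class (names illustrative, not from any of my real components):

```javascript
// setProperties: one place to declare and document every property, since
// 2016-era ES6 classes had no inline property declarations.
class SlideDeckExample {
  constructor() {
    this.setProperties();
  }

  setProperties() {
    /**
     * current slide index
     * @type integer
     */
    this.current = 0;

    /**
     * key code that advances the deck (right arrow)
     * @type integer
     */
    this.nextSlideKey = 39;
  }
}
```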


parseAttributes

Once the component is created or attached, you may want to look at the attributes on your component tag to read in some initial component parameters. You could potentially scatter these all over your code. I like to read them all in one spot: “parseAttributes”.

registerElements and the “dom” object

I really dug Polymer’s “$” syntax. Anything in your component’s HTML that had an ID, you could access with “this.$.myelement”. Well, in my DIY Web Component world, I can’t just magically expect to access this. I COULD call querySelector('#myelement') every time, but it’s more performant to save these references to a variable if you’re using them often. It also creates more readable code to save your important element references in well-named variables. At the same time, though, it might be confusing to mix elements on your root “this” scope with other variables that aren’t elements.

So here’s what I do…

When I have a bunch of stuff that I want to reference in the imported HTML template at the very start, like buttons, text fields, whatever, I’ll run my custom method “registerElements” in the attachedCallback after appending the template to my Shadow Root.

In “registerElements”, I’ll create a new Object called “dom” on my root scope “this” (this.dom = {};). I’ll then querySelector any elements I want, grab the references, and populate “this.dom.myelement” with them. Then elsewhere in my code, I can just reference the property like a normal variable (but I know it’s a DOM element since it’s in my “this.dom” object).


One last thing I do consistently… and this is not a method, but a property: using a custom variable “root” to represent the Shadow DOM. So when I want to use querySelector on an element, I use “this.root.querySelector('myelement')”. I COULD just call it “shadow”. However, there have been a couple of times I’ve been a bit wishy-washy about using the Shadow DOM, and I can just set “this.root” to the host content, or even the document if I wanted. In this fashion, I can keep swapping what “root” is to whatever I choose and keep my code pretty much the same.

An Example

I’ll leave you with a complete example of my Web Component that functions as a Slide Deck viewer. Remember, the slide inside is a web component on its own! In my next post, I’ll wrap this whole thing up and link you all to my real components.

class CCWCSlideShow extends HTMLElement {
  /**
   * initialize class properties
   */
  setProperties() {
    /**
     * slides deck
     * @property deck
     * @type string
     */
    this.deck = '';

    /**
     * next slide key mapping
     * @property nextSlideKey
     * @type integer
     */
    this.nextSlideKey = 39; // right arrow key

    /**
     * previous slide key mapping
     * @property previousSlideKey
     * @type integer
     */
    this.previousSlideKey = 37; // left arrow key

    /**
     * toggle timer key mapping
     * @property toggleTimerKey
     * @type integer
     */
    this.toggleTimerKey = 84; // "t" key

    /**
     * timer start time
     * @property timerStartTime
     * @type Number
     */
    this.timerStartTime = 0;

    /**
     * current slide/chapter
     * @property current
     * @type object
     */
    this.current = { chapter: 0, slide: 0 };

    /**
     * running
     * is slide deck running (being timed)
     * @property running
     * @type boolean
     */
    this.running = false;

    /**
     * slides
     * @property slides
     * @type array
     */
    this.slides = [];
  }

  /**
   * register dom elements
   */
  registerElements() {
    this.dom = {};
    this.dom.slideviewer = this.root.querySelector('#slideviewer');
    this.dom.slideinfo = this.root.querySelector('.infobar .slides');
    this.dom.runtime = this.root.querySelector('.infobar .runtime');
  }

  /**
   * init
   * @method init
   */
  init() {
    document.addEventListener('keyup', event => this.onKeyPress(event) );

    setInterval( () => {
      if (this.running) {
        var duration = Math.floor((new Date().getTime() - this.timerStartTime) / 1000);
        var totalSeconds = duration;
        var hours = Math.floor(totalSeconds / 3600);
        totalSeconds %= 3600;
        var minutes = Math.floor(totalSeconds / 60);
        var seconds = totalSeconds % 60;
        if (seconds.toString().length == 1) {
          seconds = "0" + seconds;
        }
        if (minutes.toString().length == 1) {
          minutes = "0" + minutes;
        }
        this.dom.runtime.innerText = hours + ":" + minutes + ":" + seconds;
      }
    }, 1000);
  }

  /**
   * toggle timer
   * @method toggleTimer
   */
  toggleTimer() {
    this.running = !this.running;
    if (this.timerStartTime === 0) {
      this.timerStartTime = new Date().getTime();
    }
  }

  /**
   * on keypress
   * @param event
   */
  onKeyPress(event) {
    switch(event.keyCode) {
      case this.nextSlideKey:
        this.nextSlide();
        break;

      case this.previousSlideKey:
        this.previousSlide();
        break;

      case this.toggleTimerKey:
        this.toggleTimer();
        break;
    }
  }

  /**
   * load chapter in slide deck
   * @param index
   * @param name
   * @param uri
   */
  loadChapter(index, name, uri) {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = () => {
      if (xmlhttp.readyState == 4) {
        if (xmlhttp.status == 200) {
          var chapter = JSON.parse(xmlhttp.responseText);
          chapter.index = index;
          chapter.name = name;
          this.chapters.push(chapter);
          // chapters load async, so keep them in manifest order
          this.chapters.sort(function(a, b) {
            if (a.index > b.index) { return 1; } else { return -1; }
          });
          this.manifest.slideCount += chapter.slides.length;
          this.goSlide(0, 0);
        }
      }
    };
    xmlhttp.open("GET", uri, true);
    xmlhttp.send();
  }

  /**
   * load deck
   * @param uri of manifest
   */
  loadDeck(uri) {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = () => {
      if (xmlhttp.readyState == 4) {
        if (xmlhttp.status == 200) {
          this.manifest = JSON.parse(xmlhttp.responseText);
          this.manifest.slideCount = 0;
          this.dom.slideviewer.imgpath = this.manifest.baseImagePath;
          this.dom.slideviewer.htmltemplatepath = this.manifest.baseHTMLTemplatePath;

          this.chapters = [];
          for (var c = 0; c < this.manifest.content.length; c++) {
            this.loadChapter(c, this.manifest.content[c].name, this.manifest.content[c].file);
          }
        }
      }
    };
    xmlhttp.open("GET", uri, true);
    xmlhttp.send();
  }

  /**
   * next slide
   */
  nextSlide() {
    this.current.slide ++;
    if (this.current.slide >= this.chapters[this.current.chapter].slides.length) {
      this.current.slide = 0;
      this.current.chapter ++;

      if (this.current.chapter >= this.chapters.length) {
        this.current.chapter = 0;
      }
    }
    this.goSlide(this.current.chapter, this.current.slide);
  }

  /**
   * previous slide
   */
  previousSlide() {
    this.current.slide --;
    if (this.current.slide < 0) {
      this.current.chapter --;

      if (this.current.chapter < 0) {
        this.current.chapter = this.chapters.length - 1;
      }
      this.current.slide = this.chapters[this.current.chapter].slides.length - 1;
    }
    this.goSlide(this.current.chapter, this.current.slide);
  }

  /**
   * go to slide
   * @param {int} chapter index of chapter
   * @param {int} slide index of slide
   */
  goSlide(chapter, slide) {
    this.current.chapter = chapter;
    this.current.slide = slide;

    var slidecount = slide;
    for (var c = 0; c < chapter; c++) {
      slidecount += this.chapters[c].slides.length;
    }

    this.dom.slideinfo.innerText = 'Chapter ' + (chapter+1) + '.' + (slide+1) + '    ' + (slidecount + 1) + '/' + this.manifest.slideCount;
    var sld = this.chapters[chapter].slides[slide];

    if (sld.htmlinclude) {
      // html include handling (body not shown in this listing)
    }

    if (sld.webpage) {
      // embedded webpage handling (body not shown in this listing)
    }

    if (sld.text) {
      sld.text.forEach( item => {
        this.dom.slideviewer.setText(item.html, item.region);
      });
    }

    if (sld.images) {
      sld.images.forEach( item => {
        this.dom.slideviewer.setImage(item.image, item.region);
      });
    }

    if (sld.background) {
      this.dom.slideviewer.setBackgroundImage(sld.background, sld.backgroundProperties);
    }
  }

  /**
   * getter for slide element
   * @return slide element
   */
  getSlideComponent(id) {
    return this.dom.slideviewer;
  }

  /**
   * getter for HTML include elements by class
   * @param {string} clazz class name
   * @return {array}
   */
  getHTMLIncludeElementsByClass(clazz) {
    return this.getSlideComponent().getHTMLIncludeElementsByClass(clazz);
  }

  // Fires when an instance was removed from the document.
  detachedCallback() {};

  // Fires when an attribute was added, removed, or updated.
  attributeChangedCallback(attr, oldVal, newVal) {};

  /**
   * parse attributes on element
   */
  parseAttributes() {
    if (this.hasAttribute('deck')) {
      this.deck = this.getAttribute('deck');
    }

    if (this.hasAttribute('nextSlideKey')) {
      this.nextSlideKey = parseInt(this.getAttribute('nextSlideKey'));
    }

    if (this.hasAttribute('previousSlideKey')) {
      this.previousSlideKey = parseInt(this.getAttribute('previousSlideKey'));
    }

    if (this.hasAttribute('toggleTimerKey')) {
      this.toggleTimerKey = parseInt(this.getAttribute('toggleTimerKey'));
    }
  }

  // Fires when an instance of the element is created.
  createdCallback() {
    this.setProperties();
    this.parseAttributes();
  }

  // Fires when an instance was inserted into the document.
  attachedCallback() {
    let template = this.owner.querySelector('template');
    let clone = document.importNode(template.content, true);
    this.root = this.createShadowRoot();
    this.root.appendChild(clone);
    this.registerElements();
    this.init();
    this.loadDeck(this.deck);
  }
}

if (document.createElement('ccwc-slideshow').constructor !== CCWCSlideShow) {
  CCWCSlideShow.prototype.owner = (document._currentScript || document.currentScript).ownerDocument;
  document.registerElement('ccwc-slideshow', CCWCSlideShow);
}

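With the element registered, using it from a page is plain markup. Here's a hypothetical example — the import path and deck name are made up, and the key codes (39/37/84 for right arrow, left arrow, and "t") are just illustrative values for the attributes that parseAttributes reads:

```html
<!-- import the component's HTML, then use the tag like any built-in element -->
<link rel="import" href="components/ccwc-slideshow.html">

<ccwc-slideshow deck="decks/mytalk"
                nextSlideKey="39"
                previousSlideKey="37"
                toggleTimerKey="84"></ccwc-slideshow>
```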
Continue on to the conclusion of my ES6 Web Component Series

ES6 Web Components Part 1 – A Man Without a Framework

Before I launch into this 5-part series of posts, I just want to give a high-level overview of it: I wrote a lot. Partially because I’m long overdue on a blog post on all the stuff I’ve been experimenting with for the past several months, but mostly because much of the web tech I’ve been looking into is Web Components and I’m truly excited about it and feel like it represents a pretty big chunk of the future for web devs. Best of all, I think whether you use Polymer, React, Angular, or anything else, we can all be happy together in the common ecosystem that Web Components give us. So this post isn’t telling you what religion to pick; it’s telling you there’s something awesome happening we can all learn from.

You might not need all the information I’m writing here. So here are links and summaries to the individual segments of my 5-part ES6 Web Components Series.

Part 1: A Man Without a Framework (this page)
An opinion piece on why and how I decided to give ES6 Web Components a shot with no help from frameworks or libraries.

Part 2: The Building Blocks
A look at what I mean when I say Web Components and all the pieces that make that up.

Part 3: Making an ES6 Component Class
How to make a real ES6 Class by extending HTMLElement and making a proper component.

Part 4: Project Setup and Opinions
Opinions I’ve formed on project setup as I experimented with rolling my own components. This covers repo setup as well as common methods in my class.

Part 5: Conclusion
An important look forward if you read and took the other 4 parts of this series to heart and are as excited as I am about them. The short of it: While Web Components is super promising, take it with a grain of salt and do your own due diligence. In other words, I’m very excited about ES6 Web Components, but I haven’t been releasing production cross-platform code. Think about it before you do.

And so we begin:

Part 1: A Man Without a Framework

The web is a steaming mess of Javascript frameworks. It can feel impossible to keep up. You can learn something, love it, but then find a bunch of online hate for it. For me, as best as I can recall, my first legitimate love for a JS framework was Dojo. It was several years ago, so my memory is fuzzy, but it had a runtime, modular bits of code I could asynchronously load when I needed it (with AMD), and what seemed like an extensive UI widget set…well, it got me excited.

Dojo might have been a little ahead of its time, though. In an age during which folks copied and pasted snippets of jQuery to make things go, Dojo was a hefty setup. It had a learning curve. Unless you were dedicated enough to get around the infuriating bits of that learning curve….wellllll….back to copy/pasting jQuery code.

I’m not against folks copying and pasting jQuery. It’s so fantastic that people learn new things, and there’s nowhere better to start. Throwing a bunch of code into a file, tweaking little bits here and there – that’s how I learn a brand-new technique to this day.

At this point in my JS career, though, I want a good place to call home. A comfy cottage that has decent conveniences. Fun to live in, but with enough maintenance already done for me to carve time out for doing more fun things. I don’t like doing dishes or handwashing laundry just like I don’t care for managing script dependencies, doing DOM manipulation with cases for every browser, or a whole other host of things. I want to work on my app!

My last comfy cottage was Angular. Angular had an easier learning curve than Dojo…but it still had one. What Angular did better was allow me to create modular bits (directives) that could all work together to make some very cool apps. Lots of times, I could focus on my actual application, rather than on what made Angular work. But occasionally, I’d still have to dig into Angular’s weird nuances to work something out. $scope.$digest() anyone?

When I get that invested in a framework like Angular, I stretch my feet out toward the toasty fire and do some cool things with it I don’t think others are doing. It starts to feel like home, even if it’s a little messy. I invite some people over.

At its heart, though, it’s still an Angular cottage. If I’d rented it out to someone for a week, they’d have felt a bit awkward and uncomfortable in it. They might have known Javascript pretty well, but Angular’s a platform unto itself, so they wouldn’t have been sure what was going on at first. It would’ve been a great little cottage that made them feel at home, too, but that’s because I lived in it before them.

Then, of course, Angular 2 was announced. Lots of things I knew went out the window. What the hell!? Also, React was happening. Also, Polymer was happening.

So I thought to myself, well, if Angular 2 is no longer recognizable, why not reconsider this whole cottage thing. Leave all the options on the table and check out the new real estate.

React seems pretty awesome; I even played with it some. The whole virtual DOM thing is a cool paradigm. You update this DOM that’s off to the side in memory, and it watches the changes, and if there are things that change, the real DOM is updated. Clever. Lots of folks seem to love it and are building some pretty brilliant stuff with and for it.
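That paradigm is easier to see in a toy sketch. This isn’t React’s actual reconciliation algorithm — just a few lines of plain Javascript illustrating the idea: diff two in-memory trees and collect the minimal set of changes you’d then apply to the real DOM.

```javascript
// Toy virtual DOM diff: nodes are plain objects ({ tag, text, children }).
// Walking the old and new trees together yields a list of patches.
function diff(oldNode, newNode, path, patches) {
  path = path || 'root';
  patches = patches || [];
  if (!oldNode) {
    patches.push({ type: 'add', path: path, node: newNode });
  } else if (!newNode) {
    patches.push({ type: 'remove', path: path });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ type: 'replace', path: path, node: newNode });
  } else if (oldNode.text !== newNode.text) {
    patches.push({ type: 'text', path: path, text: newNode.text });
  } else {
    // same tag and text: recurse into children
    var count = Math.max((oldNode.children || []).length, (newNode.children || []).length);
    for (var i = 0; i < count; i++) {
      diff((oldNode.children || [])[i], (newNode.children || [])[i], path + '/' + i, patches);
    }
  }
  return patches;
}

var before = { tag: 'ul', children: [ { tag: 'li', text: 'one' }, { tag: 'li', text: 'two' } ] };
var after  = { tag: 'ul', children: [ { tag: 'li', text: 'one' }, { tag: 'li', text: 'TWO' } ] };
console.log(diff(before, after));
```

Running the diff on those two trees yields a single text patch for the second list item, so only that one real DOM node would need touching.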

Angular 2 also seems great. I was skeptical, but I got good vibes when my friend, Adrian Pomilio, presented on it at NCDevCon. Despite changing some stuff around, creating a slight learning curve, and making lots of Angular 1 developers angry, its early stage progress looks promising.

And then there’s Polymer. I started learning Google’s fancy Web Component framework at a very early stage (at 0.5). I took Polymer pretty seriously. Web Components just made sense. Yes, it’s a set of emerging standards, all rolled up into one buzzword. But the modularity of it….well, it gives me real hope for a freaking awesome way to work and even share our work.

Now, the problem with Polymer started for me when the 0.5 release iterated to the 0.8 release. And then the 0.8 release iterated to the 1.0 release. I knew to expect breaking changes between these sub 1.0 releases, but it was too much change to call working with Polymer fun. During the slow times, I could get cool stuff done for my application, but then the next wave came and I had to refactor everything and re-learn stuff that was gelling in my mind. What’s worse is I didn’t know how to keep track of the Polymer component seed generators or the different components that had different version dependencies. They never seemed to work out in a major upgrade; I’d have to tweak bower files to pull in new versions or old versions, and the errors weren’t clear on what was happening when something went wrong during this confusing time.

Everything became pretty stable with an awesome 1.0 release, but it was too late. I was already jaded enough to evaluate whether I really loved Polymer…or whether it was simply Web Components that could build the cozy cottage of my dreams. When I thought about it, I liked some of Polymer’s syntax, but it didn’t really follow Javascript convention. Polymer also added a whole slew of methods for dealing with variable/method scope – but that was already custom Polymer. And if I thought about it, if I wasn’t creating a Material Design application and wanted to customize how my application looked and acted anyway, why import a paper-button that has a whole host of downstream dependencies? Especially when a custom, CSS-styled button tag would suffice?

So I got the brilliant idea to take on Web Components on my own. No help from a framework. I’d use HTML Imports, the Shadow DOM, and more with plain old Javascript. Except I wanted to learn ES6, too, so maybe I’d go a little fancier than plain old Javascript. And oh yeah, I’d need to polyfill for browsers that don’t support Web Components, so maybe some help from webcomponents.js (the polyfill that spun off from Polymer).

The further I’ve built it up, the more I’ve settled into my current comfy cottage. Sure, I have to plug some holes other frameworks already fixed, but I can look to them for inspiration and guidance. What seems common to all of these frameworks is the convergence on some sort of Web Components. Whether it’s true Web Components or not doesn’t matter; I can pull ideas from a host of frameworks that all seem to agree on modularity and custom element creation. And of course, an added bonus is learning how everything works under the hood. There were some design decisions, especially in Polymer, that had me scratching my head until I learned how the browser actually dealt with components.

What’s more, with modern browsers, lots of housekeeping problems have gone away. Cross-browser needs that jQuery answered have disappeared if the target browser is recent enough. The application lifecycle has become that of a component, and modularity can provide the foundation of an application structure. If that isn’t good enough, I can sprinkle in libraries to help that aren’t entire platforms of their own like Angular, React, and Polymer are.

Even better, since we’re talking Web Components, we can hypothetically share components between React, Angular 2, and Polymer. That’s a pretty great place to be.

In my next article, I’ll talk coding specifics. But I wanted to explain how I landed here with this one, and impress on you that Web Components aren’t just the next passing fad. You might say Angular, React, or Polymer aren’t either. I certainly won’t argue; they seem legitimately cool. But I find it much safer to rely on web standards and plain Javascript. You might say Web Components aren’t a standard yet. Neither is ES6. You might say Chrome is the only browser supporting Web Components as the rest rely on polyfills. Yes, Web Components may sound riskier than the other so-called passing fads, but with the Google-supported polyfill that Polymer uses (webcomponents.js), I feel it’s worth the risk right now to learn and play with something that has serious potential down the road. Go ahead, put the tea kettle on over that fire.

(And an exciting note: Webkit just announced Shadow DOM support!)

If this pans out, we’ll know the underpinnings of how most future web technology works. Polymer works on Web Components. Both React and Angular have hinted at basing future tech on them, but they are smart enough to not bet the farm on it yet.

If it doesn’t pan out, well, if you follow along with this series, you’ll have custom built this technology with your own Javascript. We can both simply take the bits and pieces that do pan out and adapt.

Wouldn’t it be cool to be a man or woman without a framework?

Check out Part 2 in this monster 5-part series (cause I’m really excited about this stuff) on the building blocks of Web Components.

Tron (for Github’s Electron)

I’m a big fan of Github’s Electron lately. Electron is the underlying tech behind Github’s Atom Editor, which they have kindly made open source….YAY!

Electron marries Google’s open source browser Chromium with io.js (the fork of Node.js). What you end up with is a desktop wrapper that lets you do all sorts of HTML5 goodness with the power of Node.js: you can access your local system, run native C++ modules, and do everything else Node/io.js does.

To get started in Electron, you’d need to do several things:

  • Grab the Electron binaries
  • Create some code to run your application and HTML window
  • Find some way to run the app (through the terminal targeting the binaries, or by dragging your files into the released package)

None of these are that bad, but like any other ecosystem, you need to know how things work and figure out what pieces of the puzzle need to be in place.
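Those steps amount to surprisingly little code. As a rough sketch of the second one, here’s a minimal main process script — note that the module names follow the pre-1.0, io.js-era Electron API (require('app'), require('browser-window'), loadUrl), which later versions consolidated into a single require('electron'), so check the docs for your release:

```javascript
// main.js – the entry point Electron runs with its bundled io.js runtime
var app = require('app');                      // application lifecycle
var BrowserWindow = require('browser-window'); // native browser windows

var mainWindow = null; // keep a reference so the window isn't garbage collected

app.on('ready', function () {
  mainWindow = new BrowserWindow({ width: 800, height: 600 });
  mainWindow.loadUrl('file://' + __dirname + '/index.html'); // your HTML5 app
  mainWindow.on('closed', function () {
    mainWindow = null;
  });
});

app.on('window-all-closed', function () {
  app.quit();
});
```

Point the Electron executable at the folder containing this file (with a package.json whose "main" field names it) and the window comes up.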

Even though I’m pretty comfortable scaffolding an Electron app, I prefer not to do the same things over and over again – so my needs are similar to a beginner’s, in that I just want a quick way to create and run an Electron project.

I’m also a big fan of Polymer. The main thing I love about Polymer is how it enforces everything to be a Web Component. You can read all about its encapsulation model with the Shadow or Shady DOM elsewhere, but I like the fact that Polymer strongly suggests that each component you have is self-runnable. Of course, this isn’t exclusive to Polymer, but it’s the first time I’ve really seen self-running components en masse.

So, I’ve taken some inspiration from Polymer…

What if you could not only run your main project in Electron, but also run any of its components as standalone Electron applications?

With that, let me tell you about my new CLI tool called “Tron-CLI”. Tron is a tool that you’d install globally under node like so:

npm install -g tron-cli

Once installed, go ahead into a new project directory and type the following into the terminal:

tron create

Tron will download the Electron binaries and create an application folder for you with a fully working dummy application.

To run, you’d typically need to target the Electron executable hidden in the binaries folder and pass in the app folder. I’ve played around with popping these shell commands into a grunt or gulp file (which is a fine way to go). However, with Tron, you’d simply type:

tron run

The Application Javascript I provide also accepts arguments via the Tron CLI. So if you wanted to pop open the Chromium developer tools when you run your app, you’d do this:

tron run -d

And like I said, I’m a fan of self-running components. So if you have those, and especially if you’ve created them with the Yeoman Polymer Generator where the demo files are placed in <yourcomponent>/demo/index.html, you can demo your component in Electron by doing this:

tron comp <yourcomponent>

Of course, if your component structure doesn’t adhere to this scheme, go ahead and tweak the tron.json file.

Yes, your application may quickly grow into something pretty big and you might outgrow Tron. I aim to add more features to the application JS as I need them, but for now this is a good quick way to start an app that you can throw away or mold as you need.

Tron does a few more good things, but the above is mainly what I’m using it for now in combo with my Polymer projects. To deep dive, go ahead and check out my readme, but in the meantime…it’s super easy to get an Electron project up and running with “tron create” and “tron run”!


I did just recently become aware of Electron-Prebuilt. Great project – it looks like it installs Electron as a global dependency and allows you to use the CLI tool “electron” to run your app. It assumes nothing about your app and lets you author it however you want. My Tron-CLI is more opinionated about how things are set up and scaffolds an app and dev environment for you based on these opinions. Because of these opinions and application code, it does a fair bit more.

Also, Tron lets you have an Electron install per project, whereas Electron-Prebuilt uses a common one. I wouldn’t say that either is right or wrong, just a matter of preference.

Please, by all means, ignore Tron if it’s not right for you!