Progressive Web Apps

Last Friday, the SFHTML5 Meetup group met to discuss something called “Progressive Web Apps.” I went in expecting the topic to be pretty cool, but it actually got me more excited about the state of mobile/web/desktop in 2016 than I could have imagined.

This might sound a bit dramatic, especially given the negative tone that Alex Russell (@slightlylate), the speaker, started off with on the mobile web. Negative or not, he was spot on, and the talk was a real eye-opener for those of us who have been working on the mobile web for so long that we forget how much it sucks.

And yes, it does suck. A good point was made that every other mobile platform started out mobile. No vendor has ever really proposed, “OK, let’s take this UI platform, along with everything that’s ever been built with it that works on the desktop with mouse and keyboard, and dump it on mobile.” Nobody did that until it came to web browsers. It’s amazing that things work as well as they do right now.

Alex then took us through an exercise, asking for a show of hands on who uses a web-based email client on the desktop. Around 95% of our hands went up. When the question was reversed, “Who uses a web-based email client on their mobile device?”, the result was exactly the opposite.

Why does the mobile web suck so much? The folks that have given “Responsive Web Design” (RWD) a shot can’t be blamed for this problem. The rest of the web community…if you want your stuff to work on mobile, it’s time for a redesign.

Even with RWD, some mobile redesign love, and the MOBILE FIRST! mantras we shout, the fundamental user experience of the mobile web, as it is now, will never compete with that of native mobile apps. It’s probably not because HTML/JS/CSS is slow. Yeah, native can be faster, but if you think about it, most apps you use really don’t need speed. If you don’t agree with me, tell that to all the app developers using Phonegap, Cordova, or even just a plain WebView for their entire products.

So speed isn’t the issue for most apps. Touch, screen orientation, and size don’t need to be an issue if the web team cares enough to address them. No, to compete with your typical mobile app, it comes down to how the browser itself runs and loads the page.

Real, installed apps have two pretty big advantages over mobile web pages:

  • Once installed, the user can jump to the app from the home screen.
  • Even with no network connectivity, the app can still work or at least pretend to work.

There’s a third advantage, and that’s push notifications: messages from the app that appear in the notification area even when the app isn’t running. I think that functionality is big-ish, but unless you have a significant base of users addicted to your app (think Facebook), it isn’t as big a deal. Smaller guys and gals are just trying to develop a neat app.

Progressive Web Apps attempt to solve all of that missing functionality, and they do so in a way that doesn’t necessarily interfere with the current way we develop for the web.

Step #1: Invade the Home Screen and look like an app

Tackling the first issue of putting your page on the mobile home screen is pretty important. How the application is displayed, both on the home screen and when it loads, is part of that experience. To solve it, use the “Web App Manifest”! It’s a JSON file linked from your HTML head that allows you to define things like app icons, fullscreen display, splash screen, and more.
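For a sense of the shape of the thing, the manifest is just a small JSON file (every name, color, and icon path below is a made-up placeholder, not from any real app):

```json
{
  "name": "My Neat App",
  "short_name": "NeatApp",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#3f51b5",
  "icons": [
    { "src": "/images/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/images/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

You point to it from your HTML head with something like <link rel="manifest" href="/manifest.json">, and the "display": "standalone" bit is what hides the browser chrome so the page feels like an app.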

This is the point when I should confess that I haven’t worked with Progressive Web Apps yet. Luckily for me, this isn’t a “how-to” article. So for details on how to do this stuff, run an easy search, or, for your convenience, read this nice technical article via MobiForge.

Either way, the idea is that if a user visits your page often enough within a certain time frame, the browser will ask the user if he or she would like to place the page on their home screen. Or, the user can simply add it to the home screen from the options menu in the mobile browser. That’s light-years better than having to open the browser, remember the URL, and load the page. I’m sure it’s a huge reason apps are winning on mobile right now.

Step #2: Be an app even when offline

Secondly, we have “Service Workers.” They sound nerdy and boring, and maybe they are, but the potential they open up for appifying a webpage is huge. Basically, you’d use a Service Worker to intercept a specific set of resources as the webpage fetches them. Yes, if the user is offline the first time they want to access the page, they’re outta luck. However, once the Service Worker does intercept those resources over a working connection, they’ll be cached. You, the developer, control which files get cached via a Javascript array in the code. On subsequent loads, even if the user is offline, the page can load with your cached assets, whether they are images, Javascript, JSON, styles, or whatever. Here’s a better technical description of how that works.
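Again, I haven’t built one of these myself yet, but from the docs, a minimal cache-first Service Worker might look like this (the cache name and asset list are made-up placeholders; swap in your own):

```javascript
// sw.js – a minimal cache-first Service Worker sketch.
// CACHE_NAME and ASSETS are placeholders; list whatever your page needs offline.
const CACHE_NAME = 'my-app-cache-v1';
const ASSETS = ['/', '/styles/main.css', '/scripts/app.js', '/images/logo.png'];

// Pure helper: is this path in the list we pre-cache?
function isPrecached(path) {
  return ASSETS.indexOf(path) !== -1;
}

// Only wire up the handlers when running inside a real Service Worker scope.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', function (event) {
    // Download and cache everything in ASSETS up front.
    event.waitUntil(
      caches.open(CACHE_NAME).then(function (cache) {
        return cache.addAll(ASSETS);
      })
    );
  });

  self.addEventListener('fetch', function (event) {
    // Serve from the cache if we have it; otherwise fall through to the network.
    event.respondWith(
      caches.match(event.request).then(function (cached) {
        return cached || fetch(event.request);
      })
    );
  });
}
```

That “cached || fetch” line is the whole offline trick: once the assets are cached, the network stops being a requirement for loading them.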

In fact, Google has published documentation and some tools around the similar notion of an “Application Shell Architecture,” wherein the persistent assets that don’t change (the shell) are cached, while the dynamic content is fetched fresh and stays up to date.

What does this mean and will it all work?

Probably the most exciting thing about Progressive Web Apps is that both the Manifest and the Service Workers will not affect the web page negatively if the browser doesn’t support the features. This means that the worst you can do is waste time and JS code on something that doesn’t pan out as you hoped.
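That graceful degradation is really just feature detection; a sketch of what registration looks like (the /sw.js path is an assumption):

```javascript
// Pure helper: does this browser expose the Service Worker API?
function supportsServiceWorker(nav) {
  return !!nav && 'serviceWorker' in nav;
}

// In a real page, register only when the feature exists; older browsers
// skip this block entirely and the site works exactly as before.
if (typeof navigator !== 'undefined' && supportsServiceWorker(navigator)) {
  navigator.serviceWorker.register('/sw.js').catch(function (err) {
    console.log('Service Worker registration failed:', err);
  });
}
```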

And there is some danger that it won’t work. You may have noticed that Facebook already uses push notifications via Service Workers, and they do increase engagement on the site. So that’s a win! Unfortunately, Service Workers and the Web App Manifest aren’t supported everywhere. Unsurprisingly, that means they’re supported pretty much everywhere but iOS/Safari. Even worse, browser vendors on iOS can’t use their own web engines to support Progressive Web Apps; under the hood, both Chrome and Firefox on iOS have to use Safari tech.

Apple seems tight-lipped about whether they intend to adopt Progressive Web Apps at all. I’m going to say that for now, it doesn’t matter. If you’ve hung around the SF Bay Area enough, you may have noticed that many companies operate on an “iOS first, Android distant second” agenda. That doesn’t make sense given that Android devices far outsell iOS devices. But it does make sense in that iOS app revenue is greater, and it can be daunting to develop apps for the large ecosystem of Android devices on the market.

However you slice it, Android is second for developers, which is bad for consumers. Right now, many companies will adopt a Web + iOS + maybe Android strategy. If they can combine the Android + Web strategy with Progressive Web Apps AND not force folks through the Google Play Store, it’ll be a huge win for everyone. I’m guessing Google probably doesn’t even care much about having an app store, save for the fact that one was necessary to maintain a mobile ecosystem.

Meanwhile, the point was made at this Meetup that with every additional step a user must go through to download an app, there’s around a 20% dropoff rate. Think about how many steps there are in clicking an app link, going to the store, starting the download, waiting for the install, and finally opening the app — many apps are losing out on users. And let’s face it, the app gold rush is over. There are some lottery winners still, but most apps are too costly to make and market to justify what they bring in return.

Progressive Web Apps short circuit that whole process by eliminating app discovery and install. While Android users will enjoy a huge user experience win, Apple will most likely try to maintain their stranglehold on their app store and come kicking and screaming only once web devs demand these new features.

What’s more, and what I’m really excited for, is our return to disposable digital experiences. Love Adobe Flash or hate it, it really created a heyday for disposable experiences: Flash games to play a couple of times and get bored with, nifty digital playgrounds, etc. It’s way harder to convince someone to download an app than it is to get them to visit a webpage and pop it on their home screen until they get bored of it in a week.

To extend, I think Progressive Web Apps will also be a huge boon for web-based Virtual Reality. Immersive experiences will come from many different places and frankly, will not be wanted as a permanent app install. Already, we’re seeing the rise of VR portals like MilkVR because smaller, one-off VR experiences need some kind of entry onto a device. When Progressive Web Apps make WebVR easier to get before eyes than an app portal, VR will win big.

To reiterate, I think Progressive Web Apps are the next big thing for mobile: they could replace lots of simple apps and mark the return of fun, disposable experiences. I don’t have the technical experience with these new tools to back me up yet, but I will soon. Don’t take my word for it, though! Read up on it and try it yourself.

Here’s another post from the aforementioned Alex Russell:

Did HTML5 and Mobile Kill the Volume Slider?

Seems so simple and obvious doesn’t it? If you make a media player, whether audio or video, you should include a volume slider…..right?

Now that I’m developing web apps that extend onto the mobile screen, I got the feeling that a volume slider wasn’t needed.  I couldn’t quite articulate why – but it was along the lines of the fact that you already have a physical volume switch in your hands as you hold the device, so you get tactile control.  I also had in the back of my mind that I wasn’t seeing as many volume sliders anymore, especially on mobile – but couldn’t back that up with actual facts.

But, as I’m working on my daily music playlist web app, I do like to get user feedback.  I took an informal Twitter poll asking people whether they think volume sliders in their media players are necessary, or whether they clutter things up too much.

Two people responded, both saying that volume sliders are better than no volume sliders.  So OK, I know when I’m beat!  It wasn’t my opinion, but I would certainly give it a shot and see how it worked.  I could always throw them away if it’s horrible!

So, I set off to work.  Interestingly enough, I wasn’t quite sure how to begin.  I had a few options that I knew of.  The first was to use a plugin like jQuery UI.  Problem is, I was only using jQuery – not the bloat that is jQuery UI.  To be fair, it’s only bloat because the proposition is to include a ton of extra crap in my code JUST to get a slider working.  Luckily, I can pare down what I need into a custom jQuery UI build through the builder.  That’s just what I did – for a time.  I got a slider working, and even custom-styled it, chopping away block after block of unnecessary CSS.  Eventually I got something a little bloated, but not horribly so.

Then I thought to myself – “Damn, you’re basically doubling the size of your project to include a UI element that you aren’t even sure you need!”   OK, let’s think about the second option – writing a slider from scratch… ummmmm… no.  While I do enjoy UI challenges, I don’t enjoy recreating commonplace UI components from scratch.  If I really needed to invest the time and effort, that would be one thing; I’d suck it up and dive in.

Last option – the range slider!  OK, I actually forgot about this one.  As far as I’m concerned, the only new HTML5 elements worth talking about are the canvas, video, and audio tags.  So, it’s easy to forget something this unsexy.  Fine though, let’s look into this and see how it works.

It turned out quite easy to pop in – though I did want it to be vertical.  After Googling for some time, I could see no option other than doing a CSS rotation transform on the slider.  Using CSS3 for this seems pretty overkill and weird.  I had some trouble positioning it with the rotation rules applied, but eventually got it where I wanted it.  Of course, after I did that, I decided a horizontal slider in a different spot on my page would work better visually after all.

Last stop was styling the slider.  Eventually I got something with a round thumb in a trough that looked something like this:

The HTML was pretty simple:

<input id="volume" type="range" value="80">

And the CSS used a webkit vendor prefix:

input::-webkit-slider-thumb {
       border-style: solid;
       border-color: #1a1a1a;
       border-width: 1px;
       background-image: url("../images/vol.png");
}
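Styling aside, the wiring behind a slider like this is just mapping the input’s 0–100 value onto the media element’s 0.0–1.0 volume property.  A sketch (the #volume id matches the input above; the audio element selector is an assumption about the page):

```javascript
// Clamp and scale a 0–100 slider value to the 0.0–1.0 volume range.
function sliderToVolume(value) {
  const v = Number(value) / 100;
  return Math.min(Math.max(v, 0), 1);
}

// In the page, hook the slider up to the audio element.
if (typeof document !== 'undefined') {
  const slider = document.getElementById('volume');
  const audio = document.querySelector('audio');
  if (slider && audio) {
    slider.addEventListener('input', function () {
      audio.volume = sliderToVolume(slider.value);
    });
  }
}
```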

Fine, now how about Firefox? Does that work?  Let’s test in that….
BOOM!  Nope, doesn’t work.  Well, that’s PROBABLY because I have a Webkit vendor prefix on the thumb and it’s just breaking down entirely.  OK, to Google in search of customizing our range slider for Firefox!

Well, after some searching, not only does Firefox not support range slider customization, but it doesn’t support the stupid HTML5 range slider at ALL.  Great, well, I did all this work…let’s just add in the volume slider, hide it in Firefox, and see how I live with it on the page.  Let’s see if I like it at least in Chrome.

I sat with it for a few days.  I never really found myself reaching for it.  Usually, I’d just reach for my computer volume (hardware or software) to turn my entire system down.  But eventually, I was on a mobile device, and wanted to test it out and see if it worked.

So….the slider worked, but it had NO EFFECT ON THE VOLUME!!!  I swear, every time I turn around, something annoying pops up with this.  I went full on “TAKE COMMAND MODE”, and went and installed the Android development tools on my system for the first time.  I rigged up Chrome debugging over USB so I could view my console log, DOM tree, and all the Chrome goodies I could get right on my desktop.  I expected some red text in my console indicating a Javascript error.  NOPE, completely silent.

Fine…let’s Google “HTML5” and “set volume mobile” (which I should have done in the first place).

Turns out….well this.

Neither iOS’s nor Android’s version of HTML5 supports setting the volume on video or audio elements.  The iOS documentation specifies that you may use the hardware switch to decrease and increase the system audio.

…..And we’re back to square one.  It would seem as if I had the right inclination all along on mobile.  It’s expected that users will adjust audio physically on web pages.  Of course there is the unfortunate side-effect that you are controlling all applications on your device at once.

Now, if I am making a similar experience on the desktop, do I include audio adjustment?  Well, it might be a good idea to match the user experience from one to the other.  An easily recognized feature like a volume control, usually placed right next to the main playback controls, is hard to miss when it’s there in some experiences but not others.  It would be one thing if we just NEEDED TO HAVE IT.  But we don’t.  For a feature that’s already on the chopping block, keeping a consistent user interface is a good reason to leave it out on the desktop too.

And then there’s the ease of implementation.  Holy crap, it’s like you’re either doubling your project size with jQuery UI or taking on a second job to do it from scratch.  Or you know, just make it work on webkit only.

After all this, you start to wonder if it’s really that important – and I’m beginning to think it’s just not worth it… to anyone.  Maybe I should just kill it from my project.  I imagine folks are doing similar things and finding it’s better to leave it out.  So is it dead yet?  Did workflow and capabilities kill the volume slider on the web?

As a last note…the slider seems alive and well in native apps.  I’m watching Netflix right now on my Galaxy Tab 10.1.  I typically leave it charging when I’m watching Netflix.  So, while it’s in landscape mode and charging (with the charger sticking out the top), the device itself actually rests on the side where the physical power button and volume are.  As such, when I watch Netflix, the physical volume is completely inaccessible.  I’m not sure about the iPad, as I don’t have a fancy case for it yet.

And wouldn’t you know it – Netflix has a nice little horizontal volume slider :)

Showing an Image for your Audio Only HLS Stream on iOS

Now that I have a handle on this, it seems like a sick joke how many false paths this task led me down.

So what am I trying to do here, what’s the use case?

Well – one of the unique things about the Apple marketplace is their restrictions on streaming network video. If I have an Apple HTTP Live Stream that I host on my site and I’d like my iOS device to stream/play it, Apple has some restrictions. If I don’t follow these restrictions, they will reject my app!

One of these restrictions is to have an audio only component to my stream that is 64kbps or less. In the world of streaming, you’d typically switch back and forth between different quality streams depending on your bandwidth. And in this case, Apple forces you to have a low quality, no video stream. The reasoning is that if the user has a bad connection, they’ll sit through this instead of a video that keeps pausing to buffer.
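For illustration, the variant playlists live in an HLS master playlist something like this (the URIs and bitrates here are made up; the only part Apple mandates is having a low-bitrate, audio-only entry):

```
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000,CODECS="mp4a.40.5"
audio_only/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000,CODECS="avc1.42e01e,mp4a.40.2"
mid/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1800000,CODECS="avc1.4d401f,mp4a.40.2"
high/prog_index.m3u8
```

The player measures its bandwidth and hops between these variants on its own; when things get bad enough, it lands on that first, video-free entry.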

This presents an interesting user experience problem. Does my app feel broken if my stream keeps stuttering and buffering, or does it feel broken if I see no video, just a black screen, and only hear audio? Personally, as someone who is tech savvy, I know a bad connection when I see it, so I’d rather have it pause to buffer, but ANYWAY….

One of the ways to soften this unfortunate user experience is to show some sort of text indicating to the user, “No, this app isn’t broken; we’re just not showing you video because your connection sucks.” How to do that, though? Well, it was suggested to me on the Apple forums to embed an image into the audio-only stream, so that when the player switches over, the user is presented with the messaging.

How to embed it in the stream isn’t so hard and is outlined here.  Basically, you set a couple of flags to indicate that your metadata is a “picture” and point to the file you’d like to embed.  Just be careful, though, because the image is embedded into every segment and counts against your 64kbps allotment.

Showing the image is the thing that I had a hard time with.  The first thing that led me astray is this line from the HTTP Live Streaming Overview:

If an audio-only stream includes an image as metadata, the Apple client software automatically displays it. Currently, the only metadata that is automatically displayed by the Apple-supplied client software is a still image accompanying an audio-only stream.

Well FANTASTIC right?  So if I included an image in my stream as metadata, it will display automatically!  Um no.  And let me tell you, it was pure hell trying to find this out.  I was trying to test my newly minted stream outside of my app.  I used Safari, Quicktime, and VLC to try to see my awesome image – no luck.  I even opened the AAC file in Adobe Audition – it wasn’t there either in the metadata section.

Then I started my hunt online for example streams with this image.  No luck – either my Google-Fu isn’t strong enough, or nobody does this (or advertises doing it).  A co-worker pointed out that he DID see the image data in the stream, and yes, my AAC audio segments were the right size to have the image in them, so what gives?

Turns out that this magical image only works using the video tag in HTML on iOS Safari and in your own iOS app.  So there’s no hope of verifying it on the desktop.  But wait, it gets worse!

In your own iOS app, it will only display automatically if you are using the older MPMoviePlayer framework.  I checked it out, and yup it worked (finally)!  The problem is that I was using the newer AVPlayer framework.  And from here on in, it’s COMPLETELY manual.  What’s manual mean?  Well, you’re in charge of all your metadata.  This means that if you’d like to show that image you embedded, you need to grab the raw binary data, convert it to an image, and display it your DAMN self.

Fine, then – we have a “timedMetadata” property, let’s use it:

for (AVMetadataItem* metadata in [self._avplayer currentItem].timedMetadata) {
    if ([[metadata commonKey] isEqualToString:@"artwork"]) {
        self._overlayImage = [UIImage imageWithData:metadata.dataValue];
        self._overlayImageView = [[UIImageView alloc] initWithFrame:CGRectMake(self.frame.origin.x, self.frame.origin.y, self._overlayImage.size.width, self._overlayImage.size.height)];
        [self._overlayImageView setImage:self._overlayImage];
        [self addSubview:self._overlayImageView];
    }
}

That’s actually not too bad, right? That timedMetadata property is pretty handy. There’s one mind-boggling catch, though. You must add an AVPlayerItem observer for timedMetadata, like so (where item is an AVPlayerItem):

[item addObserver:self forKeyPath:@"timedMetadata" options:0 context:nil];

If you don’t do this, your timedMetadata will be null. So it’s like that old riddle – if a tree falls in the woods and nobody is around to observe it, did it really fall? Apple says no. Actually they didn’t say no – they just assume that you’ll arrive to that conclusion.

When you do add that observer, you’d think that you would have an event to trigger showing this image. That would be true…..if you don’t care that it won’t go away when your stream switches back to video+audio. It’s kind of maddening.

So, when you get the timedMetadata event, all seems well. You have the image data available, and you can go ahead and show the image. After around 10 seconds pass and you get to your next segment, you’ll get another timedMetadata event. If the stream switched to video+audio, this will be the last one you get. It’s kind of late to let us know that “for this past segment, we should not have been showing the poster image.”

“But don’t worry”, you might say – “we’ll just check the timedMetadata property of the AVPlayerItem.” And I would say you’re smart to try that, but no – the metadata persists for the AVPlayerItem whether or not it applies to the active segment. This means that with either the timedMetadata property or the timedMetadata events, there seems to be absolutely no way to tell whether the segment currently playing has metadata on it, i.e. whether it is an audio-only segment.

Ick. Well, what the hell is the point of image metadata on an audio-only stream if it’s all manual and this hard to control? But I needed to persist to get this task done. How can we know when to show this image and when not to?

I tried with AVPlayerItem.tracks. This will expose a track listing for the asset. Seemed pretty good at first – I was noticing that it was showing me I had video, audio, and metadata. Occasionally video would be dropped, and this seemed to coincide with the audio only stream – however this wasn’t always the case. It seemed very flaky – so in the end I couldn’t base things off of the tracks listing.

FINALLY, I found AVPlayerItem.presentationSize. When the stream is audio only, presentationSize.width and presentationSize.height are both 0. So I can ping my video every second and use this to figure out whether the stream at this very moment is audio only and I should be showing my image to the user.

What an experience. We’ve gone from the documentation indicating that the feature was automatic to having to wrangle our own bytes, manage everything ourselves, and deal with several weird quirks of iOS. I’m glad I’m drinking tonight.

The worst part of it is, I got no help from Google searches and some limited* help from the Apple dev forums. So I hope this helps YOU!

*My limited help from the Apple dev forums consisted of a very nice developer hailing from Cupertino and another from Maryland saying he was having a hard time too. The Cupertino dev helped out immensely, but not enough because I don’t think I was asking the right questions to get to my inevitable conclusion of suckiness.

Front Enders vs Creative Developers

Today, Rebecca Murphey wrote a post called “A Baseline for Front-End Developers”.  The post had some good advice, it was well written, and it was well received, judging from the comments.

Something about it really didn’t sit right with me though.  I was very uneasy about it, and I didn’t know why until I thought about it on my bike ride home from work.

Here’s the essence of the problem: THEY STOLE OUR WORD!!!!


What is a “Front-End Developer”?

Frankly, I thought I already knew.  After all, the terms front end and back end have been bandied about for at least a couple decades now.  To make sure I wasn’t deluding myself, I turned to Wikipedia:

“In software architecture there may be many layers between the hardware and end user. Each can be spoken of as having a front end and a back end. The front end is an abstraction, simplifying the underlying component by providing a user-friendly interface.”

“In Web content management systems the terms front end and back end may refer to the end-user facing views of the CMS and the administrative views respectively.”

“For major computer subsystems, a graphical file manager is a front end to the computer’s file system, and a shell interfaces the operating system — the front end faces the user and the back end launches the programs of the operating system in response.”

When I read the various definitions, one thing seemed very clear to me: a front end is any user-facing view.  It is the layer of software that a human must go through to use a larger software or hardware system.

So, by definition, a front-end developer must be someone who develops, scripts, programs, or codes a user interface, which is a pretty broad job description.  By contrast, Rebecca gives a pretty narrow list of technologies and skills that a front-end developer must possess.

Guess what, though?  There is nothing on her list but web technologies.  More specifically, it’s all about HTML, CSS, and Javascript.  Maybe I’m right, maybe I’m wrong, but more to the point, we’re talking about a very specific, though popular, subset of HTML usage.  Rebecca isn’t alone; she just seems to have encapsulated very well what I’m seeing in blogs, job postings, and more about the role of Front-End Developers.  Honestly, I’d call the developers she defines, for lack of a better term, boutique web developers.

The boutique web is a subset of smallish-scale sites that are quite attractively designed and use the latest HTML5 fantasticness.  Its developers are striving for pixel perfect designs and may have some sexy animated jQuery widgets.  Lots of money is being poured into this area right now given that people are running away from Flash . . . you know, the technology that we used to do boutique websites with.

I might be pigeonholing front enders by calling them boutiquers, but think about the following types of sites:

  • Wikipedia – Awesome site, but not very attractive.  If you’re using your pixel-perfect CSS skills or a CSS preprocessor and you work on this site, you’re probably just extremely bored with your job.  The interesting and fun work here is probably on the back end, dealing with big data.
  • Any newspaper site – A horrible mess.  I’m not so sure these people don’t just copy and paste code in there and hope it works.
  • Games – There’s a wealth of specialized knowledge in this area.  Does a game developer need to worry about stylesheets at all?  No. Collision detection, physics, game theory? Yes.
  • Video – We worry about the video tag, encoding specs, ad networks, reporting networks, etc.  Javascript is king here.  Who cares so much about CSS and HTML?
  • Enterprise applications – There’s a whole other level of abstraction here.  A number of large-scale applications are built in Java with GWT and compiled down to Javascript.

Do many of these skills overlap with Front-End Development?  Sure.  But we’re talking definitions here.  Developers are awesome and adaptable, so I can’t pigeonhole them, but I can try to pigeonhole the term front-end developer. I think the baseline for front-end developers is geared towards boutique sites and only those that use HTML, CSS, or Javascript . . . and probably Ruby on Rails.

Again, the people with these skills and passions are awesome, but it’s a little confusing for people like me who have been developing old-school front ends since the late 90s.  I really thought I could include myself in the front-end cool kids club, but I don’t care about semantic markup or CSS preprocessors.

The Other Front Ends

There are TONS of other things I consider to be front end.  There’s Flash, Corona, Android, iOS, OSX, Windows, 3D games . . . the list goes on and on.  Even inside the HTML/CSS/Javascript stack, there are other front ends I mentioned like games, video, and enterprise applications.

What do you call those developers?  Well, lots of folks say, “I’m an Android developer”, “I’m a C# developer”, or “I’m a Corona developer.”  I find those to be a little narrow in scope, though.  I definitely got burned calling myself a Flash developer when Flashageddon went down last November.  I don’t want to make that mistake again, tying my professional identity to a specific technology. Technology changes, evolves, and falls out of favor; unless you feel like rebranding yourself every few years, it might be wise to come up with a broader term for what you do.

Before reading Rebecca’s post, I thought front-end developer was a pretty decent broad term!  I felt I could associate myself with it. After all, I really enjoy pushing graphics around and making users happy. But it turns out I can’t, or rather, I don’t really want to.  If I’m reading Rebecca’s post right, a front-end developer is more of the same.  You’re tying yourself to a specific technology, or rather, three of them.  Yes, it’s totally the hotness now, and it’s been around for a while. You’ll probably have a place in the world for the next 15 to 20 years if it’s all you do and you keep up with the trends.  But new hotnesses will inevitably come along, and HTML, CSS, and Javascript might get a little stale even if they’re still relevant.

If I don’t identify as a front-end developer, what can I identify as?


The Creative Developer

So creative developer is a term that I came up with on my own, only to find out it wasn’t original in the least.

I occasionally listen to the Creative Coding Podcast.  Both of the hosts, Seb Lee-Delisle and Iain Lobb, identify as creative coders.  I have a feeling they dumb it down a little to be more approachable, but the perception I get from them is that creative coders sort of copy and paste code and tweak it by trial and error until it meets their needs.  There are also folks calling themselves creative technologists.  The impression I get from these folks is that they might be all sorts of brilliant but don’t necessarily code at all.

I work with a bunch of people that call themselves software engineers.  BORING.  It evokes the right tone of capability, but I just don’t prefer huge projects where I take data, manipulate it, and move it around.  I like a little more razzle dazzle.  I like getting in front of users, creating nifty things that people can see in front of their faces.  I like doing things that make noncoders go “WOW!”

So I took the creative part, since I like making creative applications, and I put something after it that evokes a little more confidence in building a software solution.  Hence creative developer.  But again, after I coined the term, I saw blogs and job postings for creative developers.  Cool!  But what IS a creative developer?  What is our baseline, as Rebecca outlined for front-end developers?


A Baseline for Creative Developers

I think that a baseline for our type of developer isn’t so easy.  You should probably know a lot, or at least not be afraid to learn a lot.  Get to know a lot of different techs. Here are my suggestions:


Front-End Developers Are in Demand

What Rebecca said?  Yeah, learn most of that.  You should probably be a competent front-end developer because that is what’s going on right now. Personally, I don’t know in what universe I’d think CSS preprocessors are useful for my daily work.  But whatever tech you don’t learn on Rebecca’s list, you should be able to talk intelligently about.


Get Linux

The biggest realization I had last year was that if you use Windows, you’re at a disadvantage.  It’s not something you can’t overcome, but make things easier for yourself: dual-boot Linux.  If you want to make Jesus cry, fine, get an Apple.  The only thing making me wish I had OS X right now is that I have to switch OSs just to boot up Photoshop or Illustrator.


Do the Mobile Thing

You should also do some mobile, of course.  Phonegap is taking the easy way out.  Useful, yes, but learn some native development too.  Objective-C and Java have their quirks, but they don’t seem too bad.


Don’t Forget Flash

Flash, on a technical level, is pretty awesome.  AIR + Flash, like Phonegap, is taking the easy way out, but it’s still awesome.  I feel very fortunate that I already know Flash, because I don’t want anything more to do with it voluntarily. Taking development advice from Adobe is like taking religious advice from Heaven’s Gate.  But there will be work in Flash for years to come.  Who knows if the doomsayers are right and it’s dying — but it’s still useful today.


The Server Can Be Creative

I was so resistant to server-side tech for the longest time.  After all, I’m a creative developer; I need some cool stuff on the screen to keep me occupied.  I can do heavy lifting in the client, which saves me from scaling my back-end architecture when people do something computationally expensive.  Yes, client-side tech is slower, but it forces me to do some problem solving.

I was wrong and resistant for no reason.  I dabbled a little with PHP just to get the job done, but it really wasn’t fun.  What’s fun?  Node.js is a freaking blast.  People are so excited about it, and if you’re not a server-side guru, everything in Node.js is new, so there are no experts right now, just people having fun and getting awesome things done.  It also has some very good conceptual parallels to other technologies like Python and Ruby.  You know how the Node Package Manager works?  Good, then you know conceptually how Ruby Gems and the Python Cheese Shop work.

Speaking of Python, learn it!  Or learn Ruby . . . but I’m seriously turned off by Ruby developers claiming they are god’s gift.  I hope those that do are the loud minority, but it’s really off-putting.  If I were a real creative developer, though, I would probably just get over it.

Why is the server creative?  Collaboration!  Multiplayer games!  Live video encoding!  It’s another tool in your arsenal.  The server-side environment is also a lot more mature than the client side, and people use languages there that we really don’t use on the client today.  It will expand your horizons. BELIEVE IT!


Learn C++

What is C++ good for?  Everything.  Remember what I said about learning the server because the client side is slower?  You’ll run into that less with C++.  Chrome is leading the charge with Native Client so you can run C++ in your browser.  If you use Flash, you can use Alchemy (don’t get me started on the Flash Premium feature debacle. Again, Adobe = Heaven’s Gate).  C++ will work on your Android phone, in your iOS application, everywhere.  Bonus: you can learn leading creative coding frameworks like Cinder or openFrameworks.  A good basis in C++ will let you do some very cool, but very processor-intensive things.


Users Aren’t Everything

With Flash and Javascript, I was stuck in such a rut.  It felt like if I couldn’t put it on the Web and let my legions of fans adore my work, it wasn’t worth anything.  Guess what? I don’t have legions of adoring fans, and if I did, I should just freaking Vimeo or YouTube a shaky cam of my masterpiece.

This mind-set prevented me from doing some cool things with some very cool technologies.  Why use OpenFrameworks if I can’t run it on the Web?  Tacocopters are cool, but I can’t give tacos to everyone who visits my site!

Play around with stuff; you’ll have fun and get some ideas. If you do anything cool, your fans will adore you with a YouTube video instead of a working demo in your browser.


The Physical Side

So there’s the client side, the server side, and the PHYSICAL side!  This is probably another term I’m “coining” that’s already in widespread use.  Not everything you do has to be on a screen.  If it does have to be on a screen, why not use a few projectors and a Microsoft Kinect?  The Kinect is awesome, but explore Arduino as well.  You can make some damn cool things if you don’t constrain yourself to the screen and the Web.  Remember, users aren’t everything: just put your masterpiece on YouTube.


Talk Big. Do Half of It.

There are so many cool things to learn.  A few of the above items are a to-do list for myself.  I’m getting into native Android development, iOS development, Node.js, Python, and lots of Javascript.  I’ve been meaning to do C++, Open Frameworks, Arduino, and creative installations for some time.  I hope to have time and opportunity to explore these techs, but it might not happen.  That’s OK; there are just too many cool things to learn.  Try to get to at least half the ones on your list.

Sure I Can Geek Out, But Will I Get Hired?

Short answer: Nobody knows!  By all means, be a front-end developer and reach Rebecca’s baseline.  You’ll look good to HR people and be better able to answer the standard front-end interview questions.  You’ll be good for several years, I’m sure.  But for me, those skills are only part of my passion.  Frankly, I’d rather creatively develop.

(Note: the link above is a replacement for another link that died.  Someone representing the site contacted me to offer a guide on their site.  Normally I leave five-year-old posts alone, but they were polite, insistent, and their link seems to offer decent advice.)

The fact is, we’re seeing more small companies and startups bypass HR and recruiters.  There are too many technologies to count; if you don’t know technology A, that’s OK, you’ll learn.  The important thing is that you’re passionate, driven, play well with others, and are proficient in something to prove you can become proficient in something else.

Also, there’s a developer shortage. You’ll probably be juuuuuuust fine.

iOS Video is Cranky!

I’m close to wrapping up a decent-sized iOS project, and my part was video playback.  Prior to this, I’d done tons of work with Flash video.

I hate to be labeled biased, but I never appreciated the level of detail put into the Flash video API until I’d worked with both HTML5 and iOS.  Flash has tons of quality-of-service queries you can make, explicit control of bitrate on multi-bitrate streams, and complete seek accuracy in streams.  Virtually anything you could ever want, you’ve got it!

Compare that to AVFoundation’s AVPlayer API on iOS.  You can listen for status changes and put an observer on the current playhead time.  That’s pretty much it.  And those statuses come in the form of “ready to play”, “failed”, and “unknown”, with notifications for “played to the end” and “failed to play to the end”.

The lack of features actually isn’t too much of a problem for streaming playback.  The framework handles streaming content like a champ.  A bad network connection might cause the stream to stutter, freeze, and stall, but as the connection improves, playback recovers quite well automatically.

One of the complications presented by Apple’s guidelines is that an application that streams must provide a low-bitrate, audio-only stream to complement the higher-bitrate video/audio streams.  The AVPlayer framework does a great job of switching the user to the best available stream.  Unfortunately, if bandwidth gets really bad, the audio-only stream is a valid option, and iOS WILL switch to it.
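For context, an HLS master playlist satisfying that guideline might look roughly like this (bandwidth numbers, codec strings, and playlist names here are made up for illustration).  The audio-only variant is just another entry in the list, which is exactly why the player can legitimately fall back to it:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
audio_only.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,CODECS="avc1.42e01e,mp4a.40.2",RESOLUTION=640x360
mid_video.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1500000,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=1280x720
high_video.m3u8
```

From the player’s point of view, that 64 kbps entry is a perfectly valid variant to switch to when bandwidth craters.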

This presents an interesting conundrum.  The video looks broken to the user, and especially to your clients, who are footing the bill.  I’d almost rather have the video stammer and stutter than switch to the audio-only bitrate; at least then, most tech-savvy users would know they’re having network connectivity issues.

The other problem, in anything less than iOS 5, is the ability to seek to exact positions in a stream using the AVPlayer.  A stream is broken up into chunks: transport segment (.ts) files, typically around 7-10 seconds each.  It turns out that if you want to seek to a specific second of your stream, it had better lie at the beginning of a segment; otherwise your seek accuracy is off by as much as a full segment, 7-10 seconds.
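To put numbers on that, here is a rough model of the behavior, my own back-of-the-envelope sketch rather than anything from AVFoundation: the seek effectively snaps down to the start of the containing segment.

```swift
// Model of pre-iOS 5 seek behavior on HLS: the player can effectively
// only land on transport-segment boundaries, so a requested seek time
// gets snapped down to the start of its containing .ts segment.
func snappedSeekTime(requested: Double, segmentDuration: Double) -> Double {
    (requested / segmentDuration).rounded(.down) * segmentDuration
}

let landed = snappedSeekTime(requested: 47.0, segmentDuration: 10.0)
print(landed)         // 40.0
print(47.0 - landed)  // 7.0 seconds short of where we asked to be
```

So with 10-second segments, asking for second 47 lands you at second 40, and the user eats the difference.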

Past that, streams are pretty decent, like I said.  What got frustrating, though, was simple progressive video!

Progressive is fairly easy to play.  You load, play, and go.  After loading starts, you wait for the item’s ready status and then tell the content to play.  By nature, progressive loads welllll….progressively.  You can start playing back the video before the entire thing finishes loading.

The limited API of AVPlayer presents some problems here.  Your video is only “ready” once.  What happens if you’re playing progressively, your connection suddenly drops, and you can’t quite finish playing the content because you’ve reached the end of what’s been loaded?  I’ll tell you what happens: your video pauses.  While paused, the rest of the video finishes loading.  Unfortunately, there is no status message to inform you that the video is ready to play again.  So there it sits.  Paused.

Drastic shortcomings call for stupid hacks, such as setting up a timer to fire every few seconds and tell the content to play.  If the content is currently playing, the call to play does nothing.  However, if it’s stalled and paused, the timer keeps the video moving along automatically.
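The hack can be sketched like this.  `PlayerLike` and `FakePlayer` are stand-ins I made up so the idea is self-contained; in the real app, a repeating NSTimer would just call the nudge on the actual AVPlayer every few seconds.

```swift
// Stand-in for AVPlayer: rate is 0.0 when paused or stalled,
// and calling play() on already-playing content is harmless.
protocol PlayerLike: AnyObject {
    var rate: Float { get }
    func play()
}

// Fired from a repeating timer.  Because play() is a no-op on
// playing content, this is safe to call unconditionally; the rate
// check just makes the intent explicit.
func nudge(_ player: PlayerLike) {
    if player.rate == 0 {
        player.play()
    }
}

// A fake player, just to show the behavior outside AVFoundation.
final class FakePlayer: PlayerLike {
    private(set) var rate: Float = 0
    private(set) var playCalls = 0
    func play() { playCalls += 1; rate = 1 }
}
```

Once the rest of the file finishes loading, the next timer tick nudges the stalled player back into playback; while the content is playing, the nudges do nothing.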

Another shortcoming I ran across is an unfortunate reality of the AVQueuePlayer.  The AVQueuePlayer is like the AVPlayer, but you can playlist things.  I chose this type of player due to other requirements in my project.  I wasn’t really using it as a queue; instead, I’d simply replace the current item in the queue to play a new asset.  Unfortunately, this didn’t seem very reliable.  If an item failed in my queue, I could add further items and make them play, but I wouldn’t get any status events about them.  That means I’d never be told when an item was ready to play, when it completed, or when it failed.  It was a real showstopper.  Sometimes, even if an item didn’t fail to play, subsequent items wouldn’t produce status changes.

I ended up having to destroy the player and start fresh each time.  Oh well.  Even though I wasn’t using it as a queue before, now I REALLY wasn’t using it as a queue.

So there it is.  I got through it, but video was a little cranky!