Tuesday, January 27th, 2015

A question of timing

I’ve been updating my collection of design principles lately, adding in some more examples from Android and Windows. Coincidentally, Vasilis unveiled a neat little page that grabs one list of principles at random—just keep refreshing to see more.

I also added this list of seven principles of rich web applications to the collection, although they feel a bit more like engineering principles than design principles per se. That said, they’re really, really good. Every single one is rooted in performance and the user’s experience, not developer convenience.

Don’t get me wrong: developer convenience is very, very important. Nobody wants to feel like they’re doing unnecessary work. But I feel very strongly that the needs of the end user should trump the needs of the developer in almost all instances (you may feel differently and that’s absolutely fine; we’ll agree to differ).

That push and pull between developer convenience and user experience is, I think, most evident in the first principle: server-rendered pages are not optional. Now before you jump to conclusions, the author is not saying that you should never do client-side rendering, but instead points out the very important performance benefits of having the server render the initial page. After that—if the user’s browser cuts the mustard—you can use client-side rendering exclusively.
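A quick aside for anyone who hasn’t come across the phrase: “cutting the mustard” boils down to a simple feature test that runs before any enhancement scripts load. A minimal sketch of the idea, with a made-up script URL:

// Only browsers that support a baseline of modern APIs get the
// client-side rendering code; everyone else keeps the perfectly
// serviceable server-rendered page.
if ('querySelector' in document &&
    'addEventListener' in window &&
    'pushState' in history) {
  var script = document.createElement('script');
  script.src = '/js/enhancements.js'; // hypothetical bundle
  document.head.appendChild(script);
}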

The issue with that hybrid approach—as I’ve discussed before—is that it’s hard. Isomorphic JavaScript (terrible name) can theoretically help here, but I haven’t seen too many examples of it in action. I suspect that’s because this approach doesn’t yet offer enough developer convenience.

Anyway, I found myself nodding along enthusiastically with that first of seven design principles. Then I got to the second one: act immediately on user input. That sounds eminently sensible, and it’s backed up with sound reasoning. But it finishes with:

Techniques like PJAX or TurboLinks unfortunately largely miss out on the opportunities described in this section.

Ah. See, I’m a big fan of PJAX. It’s essentially the same thing as the Hijax technique I talked about many years ago in Bulletproof Ajax, but with the new addition of HTML5’s History API. It’s a quick’n’dirty way of giving the illusion of a fat client: all the work is actually being done on the server, which sends back chunks of HTML that update the interface. But it’s true that, because of that round-trip to the server, there’s a bit of a delay and so you often end up briefly displaying a loading indicator.
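If you’ve never seen PJAX up close, the whole technique fits in a few lines: intercept the click, ask the server for a chunk of HTML, swap it into the page, and update the address bar with pushState. Here’s a minimal sketch (not the actual code on The Session; the class name and container ID are made up):

// Hijax/PJAX in miniature: intercept clicks on pagination links,
// fetch the new chunk of HTML, and swap it into the page.
document.addEventListener('click', function (event) {
  var link = event.target;
  if (link.tagName !== 'A' || !link.classList.contains('pagination')) {
    return; // not one of ours: let the browser handle the click
  }
  event.preventDefault();
  var xhr = new XMLHttpRequest();
  xhr.open('GET', link.href);
  xhr.onload = function () {
    // the server responds with a fragment of HTML, not a full page
    document.getElementById('results').innerHTML = xhr.responseText;
    history.pushState(null, '', link.href); // keep the URL shareable
  };
  xhr.send();
});

Notice the gap between firing off the request and the onload handler running: until the response arrives, there’s nothing new to show, which is where that loading indicator comes in.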

I contend that spinners or “loading indicators” should become a rarity

I agree …but I also like using PJAX/Hijax. Now how do I reconcile what’s best for the user experience with what’s best for my own developer convenience?

I’ve come up with a compromise, and you can see it in action on The Session. There are multiple examples of PJAX in action on that site, like pretty much any page that returns paginated results: new tune settings, the latest events, and so on. The steps for initiating an Ajax request used to be:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Display a loading indicator,
  4. Request the new data from the server, and
  5. Update the page with the new data.

In one sense, I am acting immediately on user input, because I always display the loading indicator straight away. But because the loading indicator always appears, no matter how fast or slow the server responds, it sometimes only appears very briefly—just for a flash. In that situation, I wonder if it’s serving any purpose. It might even be doing the opposite of its intended purpose—it draws attention to the fact that there’s a round-trip to the server.

“What if”, I asked myself, “I only showed the loading indicator if the server is taking too long to send a response back?”

The updated flow now looks like this:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Start a timer, and
  4. Request the new data from the server.
  5. If the timer reaches an upper limit, show a loading indicator.
  6. When the server sends a response, cancel the timer and
  7. Update the page with the new data.

Even though there are more steps, there’s actually less happening from the user’s perspective. Where previously you would experience this:

  1. I click on a button,
  2. I briefly see a loading indicator,
  3. I see the new data.

Now your experience is:

  1. I click on a button,
  2. I see the new data.

…unless the server or the network is taking too long, in which case the loading indicator appears as an interim step.

The question is: how long is too long? How long do I wait before showing the loading indicator?

The Nielsen Norman Group offers this bit of research:

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

So I should set my timer to 100 milliseconds. In practice, I found that I can set it to as high as 200 to 250 milliseconds and keep it feeling very close to instantaneous. Anything over that, though, and it’s probably best to display a loading indicator: otherwise the interface starts to feel a little sluggish, and slightly uncanny. (“Did that click do any—? Oh, it did.”)
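Here’s roughly how that pattern looks in code. Again, this is a sketch rather than the actual implementation on The Session; the class name and the exact threshold are placeholders:

// Only show the loading indicator if the server is taking too long.
function loadPage(url, container) {
  var TOO_LONG = 250; // milliseconds: anything under this feels instant
  var timer = setTimeout(function () {
    container.classList.add('loading'); // reveal the spinner via CSS
  }, TOO_LONG);
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    clearTimeout(timer); // a fast response means no spinner at all
    container.classList.remove('loading');
    container.innerHTML = xhr.responseText;
  };
  xhr.send();
}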

You can test the response time by looking at some of the simpler pagination examples on The Session: new recordings or new discussions, for example. To see examples of when the server takes a bit longer to send a response, you can try paginating through search results. These take longer because, frankly, I’m not very good at optimising some of those search queries.

There you have it: an interface that—under optimal conditions—reacts to user input instantaneously, but falls back to displaying a loading indicator when conditions are less than ideal. The result is something that feels like a client-side web thang, even though the actual complexity is on the server.

Now to see what else I can learn from the rest of those design principles.

Friday, January 23rd, 2015

Angular momentum

I was chatting with some people recently about “enterprise software”, trying to figure out exactly what that phrase means (assuming it isn’t referring to the LCARS operating system favoured by the United Federation of Planets). I always thought of enterprise software as “big, bloated and buggy,” but those are properties of the software rather than a definition.

The more we discussed it, the clearer it became that the defining attribute of enterprise software is that it’s software you never chose to use: someone else in your organisation chose it for you. So the people choosing the software and the people using the software could be entirely different groups.

That old adage “No one ever got fired for buying IBM” is the epitome of the world of enterprise software: it’s about risk-aversion, and it doesn’t necessarily prioritise the interests of the end user (although it doesn’t have to be that way).

In his critique of AngularJS, PPK points to an article discussing the framework’s suitability for enterprise software and says:

Angular is aimed at large enterprise IT back-enders and managers who are confused by JavaScript’s insane proliferation of tools.

My own anecdotal experience suggests that Angular is not only suitable for enterprise software, but—assuming the definition provided above—Angular is enterprise software. In other words, the people deciding that something should be built in Angular are not necessarily the same people who will be doing the actual building.

Like I said, this is just anecdotal, but it’s happened more than once that a potential client has approached Clearleft about a project, and made it clear that they’re going to be building it in Angular. Now, to me, that seems weird: making a technical decision about what front-end technologies you’ll be using before even figuring out what your website needs to do.

Ah, but there’s the rub! It’s only weird if you think of Angular as a front-end technology. The idea of choosing a back-end technology (PHP, Ruby, Python, whatever) before knowing what your website needs to do doesn’t seem nearly as weird to me—it shouldn’t matter in the least what programming language is running on the server. But Angular is a front-end technology, right? I mean, it’s written in JavaScript and it’s executed inside web browsers. (By the way, when I say “Angular”, I’m using it as shorthand for “Angular and its ilk”—this applies to pretty much all the monolithic JavaScript MVC frameworks out there.)

Well, yes, technically Angular is a front-end framework, but conceptually and philosophically it’s much more like a back-end framework (actually, I think it’s conceptually closest to a native SDK; something more akin to writing iOS or Android apps, while others compare it to ASP.NET). That’s what PPK is getting at in his follow-up post, Front end and back end. In fact, one of the rebuttals to PPK’s original post basically makes exactly the same point that PPK was making: Angular is for making (possibly enterprise) applications that happen to be on the web, but are not of the web.

On the web, but not of the web. I’m well aware of how vague and hand-wavey that sounds, so I’d better explain what I mean by that.

The way I see it, the web is more than just a set of protocols and agreements—HTTP, URLs, HTML. It’s also built with a set of principles that—much like the principles underlying the internet itself—are founded on ideas of universality and accessibility. “Universal access” is a pretty good rallying cry for the web. Now, the great thing about the technologies we use to build websites—HTML, CSS, and JavaScript—is that universal access doesn’t have to mean that everyone gets the same experience.

Yes, like a broken record, I am once again talking about progressive enhancement. But honestly, that’s because it maps so closely to the strengths of the web: you start off by providing a service, using the simplest of technologies, that’s available to anyone capable of accessing the internet. Then you layer on all the latest and greatest browser technologies to make the best possible experience for the greatest number of people. But crucially, if any of those enhancements aren’t available to someone, that’s okay; they can still accomplish the core tasks.

So that’s one view of the web. It’s a view of the web that I share with other front-end developers with a background in web standards.

There’s another way of viewing the web. You can treat the web as a delivery mechanism. It is a very, very powerful delivery mechanism, especially if you compare it to alternatives like CD-ROMs, USB sticks, and app stores. As long as someone has the URL of your product, and they have a browser that matches the minimum requirements, they can have instant access to the latest version of your software.

That’s pretty amazing, but the snag for me is that bit about having a browser that matches the minimum requirements. For me, that clashes with the universality that lies at the heart of the World Wide Web. Sites built in this way are on the web, but are not of the web.

This isn’t anything new. If you think about it, sites that used the Flash plug-in to deliver their experience were on the web, but not of the web. They were using the web as a delivery mechanism, but they weren’t making use of the capabilities of the web for universal access. As long as you have the Flash plug-in, you get 100% of the intended experience. If you don’t have the plug-in, you get 0% of the intended experience. The modern equivalent is using a monolithic JavaScript library like Angular. As long as your browser (and network) fulfils the minimum requirements, you should get 100% of the experience. But if your browser falls short, you get nothing. In other words, Angular and its ilk treat the web as a platform, not a continuum.

If you’re coming from a programming environment where you have a very good idea of what the runtime environment will be (e.g. a native app, a server-side script) then this idea of having minimum requirements for the runtime environment makes total sense. But, for me, it doesn’t match up well with the web, because the web is accessed by web browsers. Plural.

It’s telling that we’ve fallen into the trap of talking about what “the browser” is capable of, as though it were indeed a single runtime environment. There is no single “browser”; there are multiple, varied, hostile browsers, with differing degrees of support for front-end technologies …and that’s okay. The web was ever thus, and despite the wishes of some people that we only code for a single rendering engine, the web will—I hope—always have this level of diversity and competition when it comes to web browsers (call it fragmentation if you like). I not only accept that the web is this messy, chaotic place that will be accessed by a multitude of devices, I positively welcome it!

The alternative is to play a game of “let’s pretend”: Let’s pretend that web browsers can be treated like a single runtime environment; Let’s pretend that everyone is using a capable browser on a powerful device.

The problem with playing this game of “let’s pretend” is that we’ve played it before and it never works out well: Let’s pretend that everyone has a broadband connection; Let’s pretend that everyone has a screen that’s at least 960 pixels wide.

I refused to play that game in the past and I still refuse to play it today. I’d much rather live with the uncomfortable truth of a fragmented, diverse landscape of web browsers than live with a comfortable delusion.

The alternative—to treat “the browser” as though it were a known quantity—reminds me of the punchline to all those physics jokes that go “Assume a perfectly spherical cow…”

Monolithic JavaScript frameworks like Angular assume a perfectly spherical browser.

If you’re willing to accept that assumption—and say to hell with the 250,000,000 people using Opera Mini (to pick just one example)—then Angular is a very powerful tool for helping you build something that is on the web, but not of the web.

Now I’m not saying that this way of building is wrong, just that it is at odds with my own principles. That’s why Angular isn’t necessarily a bad tool, but it’s a bad tool for me.

We often talk about opinionated software, but the truth is that all software is opinionated, because all software is built by humans, and humans can’t help but imbue their beliefs and biases into what they build (Tim Berners-Lee’s World Wide Web being a good example of that).

Software, like all technologies, is inherently political. … Code inevitably reflects the choices, biases and desires of its creators.

—Jamais Cascio

When it comes to choosing software that’s supposed to help you work faster—a JavaScript framework, for example—there are many questions you can ask: Is the code well-written? How big is the file size? What’s the browser support? Is there an active community maintaining it? But all of those questions are secondary to the most important question of all, which is “Do the beliefs and assumptions of this software match my own beliefs and assumptions?”

If the answer to that question is “yes”, then the software will help you. But if the answer is “no”, then you will be constantly butting heads with the software. At that point it’s no longer a useful tool for you. That doesn’t mean it’s a bad tool, just that it’s not a good fit for your needs.

That’s the reason why you can have one group of developers loudly proclaiming that a particular framework “rocks!” and another group proclaiming equally loudly that it “sucks!”. Neither group is right …and neither group is wrong. It comes down to how well the assumptions of that framework match your own worldview.

Now when it comes to a big MVC JavaScript framework like Angular, this issue is hugely magnified because the software is based on such a huge assumption: a perfectly spherical browser. This is exemplified by the architectural decision to do client-side rendering with client-side templates (as opposed to doing server-side rendering with server-side templates, also known as serving websites). You could try to debate the finer points of which is faster or more efficient, but it’s kind of like trying to have a debate between an atheist and a creationist about the finer points of biology—the fundamental assumptions of both parties are so far apart that it makes a rational discussion nigh-on impossible.

(Incidentally, Brett Slatkin ran the numbers to compare the speed of client-side vs. server-side rendering. His methodology is very telling: he tested in Chrome and …another Chrome. “The browser” indeed.)

So …depending on the way you view the web—“universal access” or “delivery mechanism”—Angular is either of no use to you, or is an immensely powerful tool. It’s entirely subjective.

But the problem is that if Angular is indeed enterprise software—i.e. somebody else is making the decision about whether or not you will be using it—then you could end up in a situation where you are forced to use a tool that not only doesn’t align with your principles, but is completely opposed to them. That’s a nightmare scenario.

Tuesday, January 20th, 2015

Lining up Responsive Day Out 3

I’ve been scheming away for a little while now on the third and final Responsive Day Out, and things have been working out better than I could have hoped—my dream line-up is becoming a reality.

Two thirds of the line-up is assembled and ready to go, and it’s looking pretty darn good, if you ask me.

You can expect plenty of meaty front-end development topics around the latest in CSS and browser APIs, but also plenty of talk on process, accessibility, performance, and the design challenges of responsive design.

My plan is to go out with a bang for this last Responsive Day Out and, the way things are looking, that’s on the cards.

I’ll let you know when tickets will be available. It’ll probably be sometime in early March. They will, as with previous years, be ludicrously good value.

Oh, and to get you in the mood, this might be a good time to revisit the audio recordings from the first two years.

Friday, January 9th, 2015

Pointless

I’ve spoken at quite a few events over the last few years (2014 was a particularly busy year). Many—in fact, most—of those events were overseas. Quite a few were across the Atlantic Ocean, so I’ve partaken of my fair share of transatlantic flights.

Most of the time, I’d fly British Airways. They generally have direct flights to most of the US destinations where those speaking engagements were happening. This means that I racked up quite a lot of frequent-flyer miles, or as British Airways labels them, “avios.”

Frequent-flyer miles were doing gamification before gamification was even a thing. You’re lured into racking up your count, even though it’s basically a meaningless number. With BA, for example, after I’d accumulated a hefty balance of avios points, I figured I’d try to use them to pay for an upcoming flight. No dice. You can increase your avios score all you like; when it actually comes to spending them, computer says “no.”

So my frequent-flyer miles were basically like bitcoins—in one sense, I had accumulated all this wealth, but in another sense, it was utterly worthless.

(I’m well aware of just how first-world-problemy this sounds: “Oh, it’s simply frightful how inconvenient it is for one to spend one’s air miles these days!”)

Early in 2014, I decided to flip it on its head. Instead of waiting until I needed to fly somewhere and then trying to spend my miles to get there (which never worked), I instead looked at where I could possibly get to, given my stash of avios points. The BA website was able to tell me, “hey, you can fly to Japan and back …if you travel in the off-season …in about eight months’ time.”

Alrighty, then. Let’s do that.

Now, even if you can book a flight using avios points, you still have to pay all the taxes and surcharges for the flight (death and taxes remain the only certainties). The taxes for two people to fly from London to Tokyo and back are not inconsiderable.

But here’s the interesting bit: the taxes are a fixed charge; they don’t vary according to what class you’re travelling in. So when I was booking the flight, I was presented with the option to spend X amount of unspendable imaginary currency to fly economy, more of it to fly business class, or even more of it to fly—get this—first class!

Hmmm …well, let me think about that decision for almost no discernible length of time. Of course I’m going to use as many of those avios points as I can! After all, what’s the point of holding on to them—it’s not like they’re of any use.

The end result is that tomorrow, Jessica and I are going to fly from Heathrow to Narita …and we’re going to travel in the first class cabin! Squee!

Not only that, but it turns out that there are other things you can spend your avios points on after all. One of those things is hotel rooms. So we’ve managed to spend even more of the remaining meaningless balance of imaginary currency on some really nice hotels in Tokyo.

We’ll be in Japan for just over a week. We’ll start in Tokyo, head down to Kyoto, do a day trip to Mount Kōya, and then end up back in Tokyo.

We are both ridiculously excited about this trip. I’m actually going somewhere overseas that doesn’t involve speaking at a conference—imagine that!

There’s so much to look forward to—Sushi! Ramen! Yakitori!

And all it cost us was a depletion of an arbitrary number of points in a made-up scoring mechanism.

Wednesday, January 7th, 2015

Events in 2015

Quite a significant chunk of my time last year was spent organising dConstruct 2014. The final result was worth it, but it really took it out of me. It got kind of stressful there for a while: ticket sales weren’t going as well as in previous years, so I had to dip my toes into the world of… (shudder) marketing.

That was my third year organising dConstruct, and I’m immensely proud of all three events. dConstruct 2012—also known as “the one with James Burke”—remains a highlight of my life. But—especially after the particularly draining 2014 event—I’m going to pass on organising it this year.

To be honest, I think that dConstruct 2014, the tenth one, could stand as a perfectly fine final event. It’s not like it needs to run forever, right?

Andy has been pondering this very question, but he’s up for giving dConstruct at least one more go in 2015:

As we prepare for our tenth anniversary, we’ve also been asking whether it should be our last—at least for a while. The jury is still out, and we probably won’t make any decisions till after the event.

Y’know, it could turn out that dConstruct in 2015 might reinvigorate my energy, but for now, I’m just too burned out to contemplate taking it on myself. Anyway, I know that the other Clearlefties are more than capable of putting together a fantastic event.

But dConstruct wasn’t the only event I organised last year. 2014’s Responsive Day Out was a wonderful event, and much less stressful to organise. That’s mostly because it’s a very different beast to dConstruct; much looser, smaller, and easy-going, with fewer expectations. That makes for a fun day out all ‘round.

I wasn’t even sure if there was going to be a second Responsive Day Out, but I’m really glad we did it. In fact, I think there’s room for one last go.

I’ve already started putting a line-up together (and I’m squeeing with excitement about it already!), and this will definitely be the last Responsive Day Out, but keep your calendar clear on Friday, June 19th for…

Responsive Day Out 3: The Final Breakpoint.

Tuesday, January 6th, 2015

HTTPS

Tim Berners-Lee is quite rightly worried about linkrot:

The disappearance of web material and the rotting of links is itself a major problem.

He brings up an interesting point that I hadn’t fully considered: as more and more sites migrate from HTTP to HTTPS (A Good Thing), and the W3C encourages this move, isn’t there a danger of creating even more linkrot?

…perhaps doing more damage to the web than any other change in its history.

I think that may be a bit overstated. As many others point out, almost all sites making the switch are conscientious about maintaining redirects with a 301 status code.
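The redirect itself is a small thing to implement: usually a line or two of server configuration. For illustration, here’s the shape of the exchange as a tiny Node.js sketch (the hostname is a placeholder):

// Answer every plain-HTTP request with a permanent redirect
// to the HTTPS version of the same URL, path included.
var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(301, {
    'Location': 'https://example.com' + request.url
  });
  response.end();
}).listen(80);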

(There’s also a similar 308 status code that I hadn’t come across, but after a bit of investigating, that looks to be a bit of a mess.)

Anyway, the discussion does bring up some interesting points. Transport Layer Security is something that’s handled between the browser and the server—does it really need to be visible in the protocol portion of the URL? Or is that visibility a positive attribute that makes it clear that the URL is “good”?

And as more sites move to HTTPS, should browsers change their default behaviour? Right now, typing “example.com” into a browser’s address bar will cause it to automatically expand to http://example.com …shouldn’t browsers look for https://example.com first?

All good food for thought.

There’s a Google Doc out there with some advice for migrating to HTTPS. Unfortunately, the trickiest part—getting and installing certificates—is currently an owl-drawing tutorial, but hopefully it will get expanded.

If you’re looking for even more reasons why enabling TLS for your site is a good idea, look no further than the latest shenanigans from ISPs in the UK (we lost the battle for net neutrality in this country some time ago).

They can’t do that to pages served over HTTPS.

Sunday, January 4th, 2015

Soundcloudbusting

Matt wrote a great article called Ten Years of Podcasting: Fighting Human Nature (although I’m not entirely sure why he put it on Ev’s site instead of—or in addition to—his own). It’s a look back at the history of podcasting, and how it has grown out of its nerdy origins to become more of a mainstream activity. In it, he kindly gives a shout-out to Huffduffer:

…a way to make piecemeal meta-podcasts on the fly built up from random shows (here’s my feed).

Matt has written about how he uses Huffduffer before: a quick introduction to adding your Huffduffer feed to Instacast. It’s equally straightforward with Overcast, and most other iOS podcast apps.

If you use the iOS app Workflow, there’s a nifty tutorial for extracting the audio from YouTube videos, posting the audio to Dropbox, and subscribing in Huffduffer. I’m letting the side down somewhat though: Huffduffer’s API is currently read-only, but it would be so much more powerful if you could post from other apps. I need to wrap my head around OAuth to do this. I was hoping to do an OwnYourGram-style API with IndieAuth and micropub (once your Huffduffer profile has your website URL, and that URL has rel="me" links to OAuth providers like Twitter, Flickr, or GitHub, all the pieces should be in place), but alas, IndieAuth only works on a domain or subdomain basis, so /username URLs are out.
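For the curious, the rel="me" part is nothing more exotic than an attribute on links you probably already have on your site, something like this (the usernames are placeholders):

<a rel="me" href="https://twitter.com/username">My Twitter account</a>
<a rel="me" href="https://github.com/username">My GitHub account</a>

As long as those profiles link back to your domain, the two-way connection can be verified.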

Anyway, back to Matt’s article about podcasting. He writes:

Personally, I like it when new podcasts use Soundcloud for their hosting, because on a desktop computer it means I can easily dip into their archives and play random episodes, scrub to certain segments and get a feel for the show before I subscribe.

It’s true that if you’re sitting in front of a desktop computer, Soundcloud is a great way to listen to an audio file there and then. But it’s a lousy way to host a podcast.

The whole point of podcasting is that it’s time-shifted. You get to listen to the audio you want, when you want. The whole point of Soundcloud is that you listen to audio then and there. That’s great if you’re a musician looking to make sure that people can’t make copies of your music, but it’s terrible if you’re a podcaster.

To be fair, Soundcloud’s primary audience is still musicians, rather than podcasters, so it makes sense for them to prioritise that use-case. But still, they really go out of their way to obfuscate the actual audio file. Even if the publisher has checked the right box to allow users to download the audio file, the result is a very literal interpretation of that: you can download the file, but you can’t copy the URL and paste it into, say, an app for listening later (and you certainly can’t huffduff it).

Case in point: Matt finishes his article with:

If you don’t have time to read the above, it’s available as a 14min audio file…

That audio file is hosted on Soundcloud. You can listen to it there, or you can listen to it through the embedded player on the article itself. But that’s it. You can’t take it with you. You can’t listen to it later. You can’t, for example, listen to it in your car, even though as Matt says:

…for most Americans, killing time listening to podcasts in a car is a great place.

If you can figure out a way to get at Matt’s audio file (and maybe even huffduff it), I’d be much obliged.

Like Merlin says:

Wednesday, December 31st, 2014

It’s the end of the year as we know it.

It’s the last day of the year. I won’t be going out tonight. I’m going to stay in with Jessica in our cosy home.

The general consensus is that 2014 was a crappy year for human beings on planet Earth. In actuality, and contrary to popular belief, the human race continued its upward trend of improvement in almost all areas. Less violence, less disease, fewer wars, a record low in air crashes, and while the disparity between the richest and the poorest has increased, the baseline level of what constitutes poverty continues to rise throughout the world.

This trend is often met with surprise, or even disbelief. Just ask Matt Ridley and Steven Pinker. We tend to over-inflate the negative and undervalue the positive. And we seem to do it more and more with each passing year (which, in itself, can be seen as part of the overall positive trend: the fact that violence and inequality outrages us now more than ever is, on balance, a good thing). It seems to be part of our modern human nature to allow the bad to overwhelm the good in its importance.

Take my past year, for example. There was so much that was good. It was a good year for Clearleft and I travelled to marvellous places (Tel Aviv, Munich, Seattle, Austin, San Diego, Riga, Freiburg, Bologna, Florida, and more). I ate wonderful food. I read. I wrote. I listened. I spoke. I attended some workshops. I ran some workshops. I learned. I taught. I went to some great events. I organised Responsive Day Out 2 and dConstruct. I even wrote the occasional bit of code.

But despite all of that, 2014 is a year that feels dominated by death.

It started at the beginning of the year with the death of Jessica’s beloved Oma. The only positive spin I can put on it is that she had a long life, and she died surrounded by her family (Jessica included). But it was still a horrible event.

For the first half of the year, the web community was united behind Eric as he went through the unimaginable. Then, in June, Rebecca died. And the web community was united in sorrow. It was such an outrage against all that is good in this world.

I visited Eric that day. I tried to convey how much the people of the web were feeling for him. I couldn’t possibly convey it, but I had to try. I offered what comfort I could, but some situations are so far beyond normalcy that literally nothing can be done.

That death, the death of a child …there’s something so wrong, so obscene about it.

One month later, Chloe killed herself.

I miss her. I miss her so much.

So I understand why, despite the upward trends in human achievement, despite all the positive events of the last twelve months, 2014 feels like a year of dread and grief. I understand why so many people are happy to see the back of 2014. Good riddance, right?

But I still don’t want to let the bad—and boy, was it ever bad—crush the good. I’m seeing out the year as I mean to go on: eating good food, drinking good wine, reading, writing, and being alive.

It’s the last day of the year. I won’t be going out tonight. I’m going to stay in with Jessica in our cosy home.

Tuesday, December 16th, 2014

The Session trad tune machine

Most pundits call it “the Internet of Things” but there’s another phrase from Andy Huntington that I first heard from Russell Davies: “the Geocities of Things.” I like that.

I’ve never had much exposure to this world of hacking electronics. I remember getting excited about the possibilities at a Brighton BarCamp back in 2008:

I now have my own little arduino kit, a bread board and a lucky bag of LEDs. Alas, I know next to nothing about basic electronics so I’m really going to have to brush up on this stuff.

I never did do any brushing up. But that all changed last week.

Seb is doing a new two-day workshop. He doesn’t call it Internet Of Things. He doesn’t call it Geocities Of Things. He calls it Stuff That Talks To The Interwebs, or STTTTI, or ST4I. He needed some guinea pigs to test his workshop material on, so Clearleft volunteered as tribute.

In short, it was great! And this time, I didn’t stop hacking when I got home.

First off, every workshop attendee gets a hand-picked box of goodies to play with and keep: an Arduino Mega, a wifi shield, sensors, screens, motors, lights, you name it. That’s the hardware side of things. There are also code samples and libraries that Seb has prepared in advance.

Getting ready to workshop with @Seb_ly. Unwrapping some Christmas goodies from Santa @Seb_ly.

Now, remember, I lack even the most basic knowledge of electronics, but after two days of fiddling with this stuff, it started to click.

Blinkenlights. Hello, little fella.

On the first workshop day, we all did the same exercises: connecting things up, getting them to talk to the internet, that kind of thing. For the second workshop day, Seb encouraged us to think about what we might each like to build.

I was quite taken with the ability of the piezo buzzer to play rudimentary music. I started to wonder if there was a way to hook it up to The Session and have it play the latest jigs, reels, and hornpipes that have been submitted to the site in ABC notation. A little bit of googling revealed that someone had already taken a stab at writing an ABC parser for Arduino. I didn’t end up using that code, but it convinced me that what I was trying to do wasn’t crazy.

So I built a machine that plays Irish traditional music from the internet.

Playing with hardware and software, making things that go beep in the night.

The hardware has a piezo buzzer, an “on” button, an “off” button, a knob for controlling the speed of the tune, and an obligatory LED.

The software has a countdown timer that polls a URL every minute or so. The URL is http://tune.adactio.com/. That in turn uses The Session’s read-only API to grab the latest tune activity and then get the ABC notation for whichever tune is at the top of that list. Then it does some cleaning up—removing some of the more advanced ABC stuff—and outputs a single line of notes to be played. I’m fudging things a bit: the device has the range of a tin whistle, and expects tunes to be in the key of D or G, but seeing as that’s at least 90% of Irish traditional music, it’s good enough.
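To give a flavour of that cleaning-up step, here’s a sketch of the kind of function involved; the regular expressions are illustrative, not the actual code running on tune.adactio.com:

// Reduce a tune's ABC notation to a single line of playable notes,
// stripping the more advanced ABC features the buzzer can't handle.
function simplifyABC(abc) {
  return abc
    .replace(/^[A-Za-z]:.*$/gm, '') // drop header fields (X:, T:, M:, K: and so on)
    .replace(/%.*$/gm, '')          // drop comments
    .replace(/"[^"]*"/g, '')        // drop chord symbols and annotations
    .replace(/[|:\[\]]/g, ' ')      // drop bar lines and repeat marks
    .replace(/\s+/g, ' ')           // flatten everything on to one line
    .trim();
}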

Whenever there’s a new tune, it plays it. Or you can hit the satisfying “on” button to manually play back the latest tune (and yes, you can hit the equally satisfying “off” button to stop it). Being able to adjust the playback speed with a twiddly knob turns out to be particularly handy if you decide to learn the tune.

I added one more lo-fi modification. I rolled up a piece of paper and placed it over the piezo buzzer to amplify the sound. It works surprisingly well. It’s loud!

Rolling my own speaker cone, quite literally.

I’ll keep tinkering with it. It’s fun. I realise I’m coming to this whole hardware-hacking thing very late, but I get it now: it really does feel similar to that feeling you would get when you first figured out how to make a web page back in the days of Geocities. I’ve built something that’s completely pointless for most people, but has special meaning for me. It’s ugly, and it’s inefficient, but it works. And that’s a great feeling.

(P.S. Seb will be running his workshop again on the 3rd and 4th of February, and there will be a limited number of early-bird tickets available for one hour, between 11am and midday this Thursday. I highly recommend you grab one.)

Monday, December 8th, 2014

Responsible Web Components

Bruce has written a great article called On the accessibility of web components. Again. In it, he takes issue with the tone of a recent presentation on web components, wherein Dimitri Glazkov declares:

Custom elements is really neat. It basically says, “HTML it’s been a pleasure”.

Bruce paraphrases this as:

Bye-bye HTML; you weren’t useful enough. Hello, brave new world of custom elements.

Like Bruce, I’m worried about this year-zero thinking. First of all, I think it’s self-defeating. In my experience, the web technologies that succeed are the ones that build upon what already exists, rather than sweeping everything aside. Evolution, not revolution.

Secondly, web components—or more specifically, custom elements—already allow us to extend existing HTML elements. That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.

But, as Bruce asks:

Snarking aside, why do so few people talk about extending existing HTML elements with web components? Why’s all the talk about brand new custom elements? I don’t know.

Patrick leaves a comment with his answer:

The issue of not extending existing HTML elements is exactly the same that we’ve seen all this time, even before web components: developers who are tip-top JavaScripters, who already plan on doing all the visual feedback/interactions (for mouse users like themselves) in script anyway themselves, so they just opt for the most neutral starting point…a div or a span. Web components now simply gives the option of then sweeping all that non-semantic junk under a nice, self-contained rug.

That’s a depressing thought. But it might very well be true.

Stuart also comments:

Why aren’t web components required to be created with is="some-component" on an existing HTML element? This seems like an obvious approach; sure, someone who wants to make something meaningless will just do <div is=my-thing> or (worse) <body is=my-thing> but it would provide a pretty heavy hint that you’re supposed to be doing things The Right Way, and you’d get basic accessibility stuff thrown in for free.

That’s a good question. After all, writing <new-shiny></new-shiny> is basically the same as <span is="new-shiny"></span>. It might not make much of a difference in the case of a span or div, but it could make an enormous difference in the case of, say, form elements.

Take a look at IBM’s library of web components. They’re well-written and they look good, but time and time again, they create new custom elements instead of extending existing HTML.

Although, as Bruce points out:

Of course, not every new element you’ll want to make can extend an existing HTML element.

But I still think that the majority of web components could, and should, extend existing elements. Addy Osmani has put together some design principles for web components and Steve Faulkner has created a handy punch-list for web components, but I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”

Rather than just complain about this kind of thing, I figured I’d try my hand at putting it into practice…

Dave recently made a really nice web component for playing back podcast audio files. I could imagine using something like this on Huffduffer. It’s called podcast-player and you use it thusly:

<podcast-player src="my.mp3"></podcast-player>

One option for providing fallback content would be to include it within the custom element tags:

<podcast-player src="my.mp3">
    <a href="my.mp3">Listen</a>
</podcast-player>

That would require minimal change to Dave’s code. I’d just need to make sure that the fallback content within podcast-player elements is removed in supporting browsers.
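In a supporting browser, that removal could happen in the element’s created callback, using the registerElement API of the day. This is a guess at the shape of it, not Dave’s actual code:

// v0 custom elements API (circa 2014): when the browser upgrades
// a podcast-player element, discard the fallback content.
var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function () {
  this.innerHTML = ''; // remove the fallback link
  // ...then build the player interface in its place
};
document.registerElement('podcast-player', { prototype: proto });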

I forked Dave’s code to try out another idea. I figured that if the starting point was a regular link to the audio file, that would also be a way of providing fallback for browsers that don’t cut the web component mustard:

<a href="my.mp3" is="podcast-player">Listen</a>

It required surprisingly few changes to the code. I needed to remove the fallback content (that “Listen” text), and I needed to prevent the default behaviour (following the href), but it was fairly straightforward.
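The type-extension version looks much the same, except you inherit from the anchor’s prototype and tell registerElement which element you’re extending. Again, a sketch of the approach rather than my actual fork:

// Extend the built-in a element instead of starting from scratch.
var proto = Object.create(HTMLAnchorElement.prototype);
proto.createdCallback = function () {
  var player = this;
  player.textContent = ''; // remove the fallback "Listen" text
  player.addEventListener('click', function (event) {
    event.preventDefault(); // don't follow the href
    // ...play the audio at player.href instead
  });
};
document.registerElement('podcast-player', {
  prototype: proto,
  extends: 'a'
});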

However, I’m sure it could be improved in one of two ways:

  1. I should probably supply an ARIA role to the extended link. I’m not sure what would be the right one, though …menu or menubar perhaps?
  2. Perhaps a link isn’t the right element to extend. Really I should be extending an audio element (which itself allows for fallback content). When I tried that, I found it too hard to overcome the default browser rules for hiding anything between the opening and closing tags. But you’re smarter than me, so I bet you could create <audio is="podcast-player">.

Fork the code and have at it.