Monday, March 2nd, 2015

Responsive Day Out tickets tomorrow

Tickets for the third and final Responsive Day Out go on sale at 11am tomorrow, Tuesday, March 3rd. Here’s the direct link to the ticket page.

I recommend getting in there pretty sharpish. Tickets are less than a hundred quid, which is a steal considering the amazing line-up of speakers who will be bursting your brain with their knowledge of design, process, CSS, JavaScript, user experience, performance, accessibility, and everything else associated with responsive web design (which, let’s face it, is pretty much everything).

Oh, and that line-up just got even better. The one and only Jason Grigsby will be speaking! If you’ve seen Jason speak before, then you know how fantastic his talks are. If you haven’t seen Jason speak before, you’re in for a real treat. I’m guessing he’ll be dropping knowledge bombs on responsive images. He’s the Jedi master when it comes to that stuff. He’s got a real knack for taking a complex subject and making it understandable …something that could be said of all the other fantastic speakers too.

So set your calendar alarm now. Get your ticket tomorrow morning. And I’ll see you here in Brighton on Friday, June 19th for Responsive Day Out 3: The Final Breakpoint!

Tuesday, February 17th, 2015

Cerf rocks

After I wrote about digital preservation and the need to save everything, not just the so-called “important” stuff, Jason wrote a lovely piece with his own thoughts on the matter:

In order to write a history, you need evidence of what happened. When we talk about preserving the stuff we make on the web, it isn’t because we think a Facebook status update, or those GeoCities sites have such significance now. It’s because we can’t know.

In a timely coincidence, Vint Cerf also spoke about the importance of digital preservation:

When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history.

He warns of the dangers of rapidly-obsoleting file formats:

We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised.

It’s a little weird that the Guardian headline refers to Vint Cerf as “Google boss”. On the BBC he’s labelled as “Google’s Vint Cerf”. Considering he’s one of the creators of the internet itself, it’s a bit like referring to Neil Armstrong as a NASA employee.

I have to say, I just love listening to him talk. He’s so smooth. I’m sure that the character of The Architect from The Matrix Reloaded is modelled on him.

Vint Cerf knows a thing or two about long-term thinking when it comes to data formats. He has written many RFCs for the IETF (my favourite being RFC 2468). Back in 1969, he wrote RFC 20, proposing the ASCII format for network interchange. If you’ve ever used the keypress event in JavaScript and wondered why, for example, the number 13 corresponds to a carriage return, this is where all those numbers come from.
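As a quick illustration (mine, not from RFC 20 or the original post): the number you get back from a keypress event maps straight onto that 1969 ASCII chart.

// The character codes reported by JavaScript's keypress event are the
// same ones Vint Cerf tabulated in RFC 20 back in 1969.
document.addEventListener('keypress', function (event) {
    if (event.keyCode === 13) {
        // 13 is the ASCII carriage return (CR) in the RFC 20 chart
        console.log('Carriage return, as standardised in RFC 20');
    }
});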

Last month, over 45 years after the RFC’s original publication, it became an official standard.

So when Vint Cerf warns about the dangers of digitising into file formats that could become unreadable, I think we should pay attention to him.

Monday, February 16th, 2015

Tickets for the last Responsive Day Out

When he was writing up the Clearleft weeknotes for last week, Jon described my activity thusly:

Jeremy—besides working alongside myself and Charlotte this week—has been scheming on Responsive Day Out, and he seems quite pleased with himself. Pretty sure I heard a sinister ‘my plans are coming together almost too well’-type laugh today.

Well, my dastardly schemes are working out perfectly. I’m ridiculously pleased to announce that Rosie Campbell and Aaron Gustafson have been added to the line-up for Responsive Day Out 3: The Final Breakpoint.

That means that as well as Rosie and Aaron, you’ll also hear from Zoe, Jake, Alice, Peter, Rachel, Ruth, Heydon, and Alla …and that’s not even the final line-up! There are still more speaker announcements to come, and if my scheming pays off, they’re going to be quite special.

I hope that you’ve already added June 19th (the date of the conference) to your calendar, but I’ve got another date for your diary: March 3rd. That’s when tickets will go on sale.

As with last year’s event—Responsive Day Out 2: The Squishening—tickets will be a measly £80 plus VAT (a total of £96). All those fantastic talks for less than a hundred squid.

So make sure you’re at the ready at 11am on Tuesday, the 3rd of March.

And then I’ll see you for a packed day of knowledge bomb dropping on Friday, the 19th of June.

Wednesday, February 11th, 2015

Ordinary plenty

Aaron asked a while back “What do we own?”

I love the idea of owning your content and then syndicating it out to social networks, photo sites, and the like. It makes complete sense… Web-based services have a habit of disappearing, so we shouldn’t rely on them. The only Web that is permanent is the one we control.

But he quite rightly points out that we never truly own our own domains: we rent them. And when it comes to our servers, most of us are renting those too.

It looks like print is a safer bet for long-term storage. Although when someone pointed out that print isn’t any guarantee of perpetuity either, Aaron responded:

Sure, print pieces can be destroyed, but important works can be preserved in places like the Beinecke

Ah, but there’s the crux—that adjective, “important”. Print’s asset—the fact that it is made of atoms, not bits—is also its weak point: there are only so many atoms to go around. And so we pick and choose what we save. Inevitably, we choose to save the works that we deem to be important.

The problem is that we can’t know today what the future value of a work will be. A future president of the United States is probably updating their Facebook page right now. The first person to set foot on Mars might be posting a picture to her Instagram feed at this very moment.

One of the reasons that I love the Internet Archive is that they don’t try to prioritise what to save—they save it all. That’s in stark contrast to many national archival schemes that only attempt to save websites from their own specific country. And because the Internet Archive isn’t a profit-driven enterprise, it doesn’t face the business realities that caused Google to back-pedal from its original mission. Or, as Andy Baio put it, never trust a corporation to do a library’s job.

But even the Internet Archive, wonderful as it is, suffers from the same issue that Aaron brought up with the domain name system—it’s centralised. As long as there is just one Internet Archive organisation, all of our preservation eggs are in one magnificent basket:

Should we be concerned that the technical expertise and infrastructure for doing this work is becoming consolidated in a single organization?

Which brings us back to Aaron’s original question. Perhaps it’s less about “What do we own?” and more about “What are we responsible for?” If we each take responsibility for our own words, our own photos, our own hopes, our own dreams, we might not be able to guarantee that they’ll survive forever, but we can still try everything in our power to keep them online. Maybe by acknowledging that responsibility to preserve our own works, instead of looking for some third party to do it for us, we’re taking the most important first step.

My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved …along with all the millions of other words by millions of other historical nobodies like me out there on the web.

There was a beautiful moment in Cory Doctorow’s closing keynote at last year’s dConstruct. It was an aside to his main argument but it struck like a hammer. Listen in at the 20 minute mark:

They’re the raw stuff of communication. Same for tweets, and Facebook posts, and the whole bit. And this is where some cynic usually says, “Pah! This is about preserving all that rubbish on Facebook? All that garbage on Twitter? All those pictures of cats?” This is the emblem of people who want to dismiss all the stuff that happens on the internet.

And I’m supposed to turn around and say “No, no, there’s noble things on the internet too. There’s people talking about surviving abuse, and people reporting police violence, and so on.” And all that stuff is important but I’m going to speak for the banal and the trivial here for a moment.

Because when my wife comes down in the morning—and I get up first; I get up at 5am; I’m an early riser—when my wife comes down in the morning and I ask her how she slept, it’s not because I want to know how she slept. I sleep next to my wife. I know how my wife slept. The reason I ask how my wife slept is because it is a social signal that says:

I see you. I care about you. I love you. I’m here.

And when someone says something big and meaningful like “I’ve got cancer” or “I won” or “I lost my job”, the reason those momentous moments have meaning is because they’ve been built up out of this humus of a million seemingly-insignificant transactions. And if someone else’s insignificant transactions seem banal to you, it’s because you’re not the audience for that transaction.

The medieval scribes of Ireland, out on the furthermost edges of Europe, worked to preserve the “important” works. But occasionally they would also note down their own marginalia like:

Pleasant is the glint of the sun today upon these margins, because it flickers so.

Short observations of life in fewer than 140 characters. Like this lovely example written in ogham, a Morse-like system of encoding the western alphabet in lines and scratches. It reads simply “latheirt”, which translates to something along the lines of “massive hangover.”

I’m glad that those “unimportant” words have also been preserved.

Centuries later, the Irish poet Patrick Kavanagh would write about the desire to “wallow in the habitual, the banal”:

Wherever life pours ordinary plenty.

Isn’t that a beautiful description of the web?

Saturday, February 7th, 2015

Hackfarming Blood Buddies

Every year at Clearleft, there’s a week where we step away from client work, go off the grid, and disappear into the countryside to work on something fun. We call it Hack Farm.

Hack Farm usually takes place around November, but due to various complexities, Hack Farm 2014 wound up getting pushed back to the start of 2015. Last week we formed a convoy, stocked up on the bare essentials (food, post-it notes, and booze), and drove west for four hours until we were in Herefordshire at a place called The Colloquy—a return to the site of the first ever Hack Farm.

Arrival at The Colloquy.

I kept notes on each day.

Day Zero

We arrive in the late afternoon, settle into our respective rooms, and eat some wonderful home-cooked food. After dinner, even though everyone’s pretty knackered, we agree that it’s best to figure out what everyone will be working on for the next few days.

Everyone gets a chance to pitch their ideas, and then we all do some dot-voting to whittle down the options. In short order, we arrive at four different projects for four different teams.

One of my ideas is chosen. This is something I’ve been pitching every single year at Hack Farm, and every single year it ends up narrowly missing out. This year, it’s finally going to happen!

On my team I’ve got Rich, Batesy, Andy P, and Tessa.

Day One

We choose a room to use as our home base and begin.

We start by agreeing on a hypothesis—more of an assumption, really—that we’ll be basing everything upon:

People are more likely to give blood if they are not alone.

Hypothesis

We start writing down questions that people might ask related to giving blood. Some of these questions might well turn out to be out of scope for this project, or are already better answered by an existing service like blood.co.uk, e.g.:

  • Can I give blood?
  • How often can I give blood?
  • Will it hurt?
  • How long will it take?

Other questions are potentially open to us providing answers:

  • Where can I give blood?
  • When can I give blood?
  • Who else is giving blood?

That last one is a question that doesn’t seem to be answered anywhere else.

We brain-dump potential data sources that could answer the “who”, “when”, and “where” questions. The data from blood.co.uk could potentially answer the “when” and “where” questions, e.g. when and where is the next donation? Data from Twitter, Facebook, or your address book could answer the “who” questions, e.g. who are you, and who are your friends?

We brainstorm potential outputs of the project. The obvious choices are a website or a native app, but there could also potentially be email, SMS, or even posters and postcards.

We think about potential incentives for the users of this service: peer pressure, gamification, bragging rights, reassurance, etc.

So there’s a lot of divergent thinking going on: at this stage, there are no bad ideas (no, really!).

We also establish the goals of the project—what we would like to see happen as a result of this service existing. The very minimum success criterion is:

Someone gives blood who hasn’t given blood before.

There’s a follow-on criterion for measuring longer-term success:

A group gives blood regularly.

We split into two groups to work on a propositional statement, then come together to merge what we came up with. Here it is:

For people who want to give blood, who need encouragement and motivation, Blood Buddies brings together people you know to make it a shared experience. That way, you’re more likely to give blood.

Unlike blood.co.uk, it frames giving blood as a shared, rewarding activity.

Proposition; James and Tessa

Blood Buddies is a codename for now. The final service might have a different name, like Bluddies maybe.

After lunch, we start to work on user stories and personas. After a while, we think we’ve got a pretty clear idea for the minimal viable user journey.

Now we take a little break and stretch our legs.

A stroll through the fields.

When we regroup, we start researching technical possibilities (like Twitter authentication, GMail address book, Facebook contacts, etc.), while also throwing ideas around to do with branding, tone of voice, etc. James Box comes in and helps us out with a handy branding exercise.

In an effort to name the thing, we create a page filled with relevant words that might be combined into a name. Eventually we reach the “just fucking end it!” moment. The service is called “Blood Buddies” after all. The tagline is …drumroll… “Get plastered together!”

Meanwhile, having investigated the technical possibilities, it looks like Twitter’s API will be the easiest (relatively) to start with.

Vocabulary; kanban

We write out our epics and create a little kanban board. We have our tasks figured out:

  • implement sign-in with Twitter,
  • create a style guide,
  • mock up the homepage,
  • mock up a sign-up form,
  • and more.

Tomorrow everyone can assign a task to themselves and get cracking (some people have started already).

Day Two

After a late Super Bowl night, we arise and begin tackling the day’s tasks.

I managed to get a very rudimentary Twitter sign-in working (eventually!) so now my task is to do something with the data that Twitter is returning …namely, storing it in a database. And because this relies on signing in with Twitter to get any results, this needs to get on to an actual web server as soon as possible.

Cue a day of wrangling with PHP, MySQL, OAuth, Git, Apache, SSH keys, and DNS settings …with an intermittent internet connection that drops out at the most inconvenient of times.
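The post stops short of showing any of that code, but for the curious, the general shape of Twitter’s three-legged OAuth sign-in looks something like this. This is only a sketch: signedRequest, redirect, params, and saveUser are hypothetical stand-ins for the OAuth-signing, HTTP, and MySQL plumbing, and the callback path is made up.

// Rough shape of "sign in with Twitter" (OAuth 1.0a).
// Step 1: ask Twitter for a temporary request token.
var requestToken = signedRequest('POST',
    'https://api.twitter.com/oauth/request_token',
    { oauth_callback: 'https://bloodbuddies.co.uk/callback' });

// Step 2: send the user off to Twitter to approve the app.
redirect('https://api.twitter.com/oauth/authenticate?oauth_token=' +
    requestToken.oauth_token);

// Step 3: back at the callback URL, swap the verifier for an access token.
var accessToken = signedRequest('POST',
    'https://api.twitter.com/oauth/access_token',
    { oauth_token: params.oauth_token,
      oauth_verifier: params.oauth_verifier });

// Step 4: store the returned user details in the database.
saveUser(accessToken.user_id, accessToken.screen_name);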

Andy is storyboarding the promo video that will help sell the story of Blood Buddies.

Storyboard

Meanwhile James and Tessa are hammering out a visual language for Blood Buddies. So the work is being approached from two different ends: the server side (how it works behind the scenes) and the interface (how it looks to the end user). In the middle is the user flow, and that’s what Richard is working on, also looking ahead past the minimal viable product to include features that can be added later.

By late afternoon the most basic server-side functionality is done, and the site is live at bloodbuddies.co.uk. Of course, there’s very, very, very little to see there, but at least our team can start adding themselves to the database.

So now the task is to join up the back-end functionality with the visual design and copy. As these strands come together, it feels like we’re getting back to a more collaborative phase: whereas yesterday involved lots of group activity, today was more splintered. But that’s going to change now that we’re going to join up the individual pieces into a unified interface.

Today felt quite productive considering that three out of the five people on our team are on cooking duty.

Spaghetti and meatballs; dinnertime

Day Three

Today is a day of rest. It’s a beautiful day. We go for a drive through the countryside, pop into a pub for some grub, and go walking on the hills.

Walking west to Wales.

Day Four

We’re down to just three team members today. Tessa is working on a different project and Andy is spending the day sleeping, puking, and generally recovering from a heavy night. N00b.

We get cracking on with integrating the visual design with the back-end functionality. That means bashing out some CSS. After an hour or two, we’ve got something basic in place.

While James works on refining the visuals—including a kick-ass logo—Richard is writing lots and lots of copy, and figuring out user flows.

Meanwhile I’m trying to get server-side stuff in place, fiddling with DNS and email; not my favourite activity.

Once the DNS is pointed to the Digital Ocean server, and with the Twitter sign-in working okay, we realise that we’ve actually launched! Admittedly it’s very basic and it needs plenty of refinement, but it’s a start.

We head out for the evening meal together. Just one more day to go.

The Stagg Inn

Day Five

James starts the day by finishing up his kick-ass Blood Buddies logo.

Richard is writing and editing lots of witty copy.

Andy is storyboarding a promotional video.

Rich, me, James, and Andy

I’m trying to get emails working, so that when someone you know signs up to Blood Buddies, we can email you to let you know. By lunchtime, we’ve got it all working.

Lots of the details are in place now: the logo, web fonts, an error page, a favicon …it feels good to be iterating on a live site.

Kanban progress; final day tasks

Device testing

After lunch, James, Richard, and I work on expanding out the home page. Once everything is in pretty good shape, we all come together (with Andy and Tessa) to talk about what the next steps could be after this minimum viable product.

There’s consensus that the most important step would be adding more ways of signing into the site, instead of just Twitter. Also, there’s a lot of functionality we could add if we can scrape the data from blood.co.uk.

But that’s for another day. Right now we’ve got a barebones site, but it’s working.

We shipped.

Friday, January 30th, 2015

Extensibility

I’ve said it before, but I’m going to reiterate my conflicted feelings about Web Components:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

There are broadly two ways that they could potentially be used:

  1. Web Components are used by developers to incrementally add more powerful elements to their websites. This evolutionary approach feels very much in line with the thinking behind the extensible web manifesto. Or:
  2. Web Components are used by developers as a monolithic platform, much like Angular or Ember is used today. The end user either gets everything or they get nothing.

The second scenario is a much more revolutionary approach—sweep aside the web that has come before, and usher in a new golden age of Web Components. Personally, I’m not comfortable with that kind of year-zero thinking. I prefer evolution over revolution:

Revolutions sometimes change the world to the better. Most often, however, it is better to evolve an existing design rather than throwing it away. This way, authors don’t have to learn new models and content will live longer. Specifically, this means that one should prefer to design features so that old content can take advantage of new features without having to make unrelated changes. And implementations should be able to add new features to existing code, rather than having to develop whole separate modes.

The evolutionary model is exemplified by the design of HTML 5.

The revolutionary model is exemplified by the design of XHTML 2.

I really hope that the Web Components model goes down the first route.

Up until recently, my inner Web Components pendulum was swinging towards the hopeful end of my spectrum of anticipation. That was mainly driven by the ability of custom elements to extend existing HTML elements.

So, for example, instead of creating a new element like this:

<taco-button>...</taco-button>

…you can piggyback off the existing semantics of the button element like this:

<button is="taco-button">...</button>

For a real-world example, see GitHub’s use of <time is="time-ago">.
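To make that concrete, here’s a minimal sketch using the document.registerElement syntax from the custom elements spec as it stood at the time (v0). The taco behaviour is made up; the point is the extends option.

// Registering a custom element that extends <button> (custom elements v0).
var TacoButtonProto = Object.create(HTMLButtonElement.prototype);

TacoButtonProto.createdCallback = function () {
    // Layer extra behaviour on top of an already-functional button
    this.addEventListener('click', function () {
        console.log('Taco time!');
    });
};

document.registerElement('taco-button', {
    prototype: TacoButtonProto,
    extends: 'button'
});

// Browsers without custom element support (or without JavaScript)
// still get a perfectly usable <button>.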

I wrote about creating responsible Web Components:

That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.

I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”

Peter Gasston also has a great post on best practice for creating custom elements:

It’s my opinion that, for as long as there is a dependence on JS for custom elements, we should extend existing elements when writing custom elements. It makes sense for developers, because new elements have access to properties and methods that have been defined and tested for many years; and it makes sense for users, as they have fallback in case of JS failure, and baked-in accessibility fundamentals.

But now it looks like this superpower of custom elements is being nipped in the bud:

It also does not address subclassing normal elements. Again, while that seems desirable the current ideas are not attractive long term solutions. Punting on it in order to ship a v1 available everywhere seems preferable.

Now, I’m not particularly wedded to the syntax of using the is="" attribute to extend existing elements …but I do think that the ability to extend existing elements declaratively is vital. I’m not alone, although I may very well be in the minority.

Bruce has outlined some use cases and Steve Faulkner has enumerated the benefits of declarative extensibility:

I think being able to extend existing elements has potential value to developers far beyond accessibility (it just so happens that accessibility is helped a lot by re-use of existing HTML features.)

Bruce concurs:

Like Steve, I’ve no particular affection (or enmity) towards the <input type="radio" is="luscious-radio"> syntax. But I’d like to know, if it’s dropped, how progressive enhancement can be achieved so we don’t lock out users of browsers that don’t have web components capabilities, JavaScript disabled or proxy browsers. If there is a concrete plan, please point me to it. If there isn’t, it’s irresponsible to drop a method that we can see working in the example above with nothing else to replace it.

He adds:

I also have a niggling worry that this may affect the uptake of web components.

I think he’s absolutely right. I think there are many developers out there in a similar position to me, uncertain exactly what to make of this new technology. I was looking forward to getting really stuck into Web Components and figuring out ways of creating powerful little extensions that I could start using now. But if Web Components turn out to be an all-or-nothing technology—a “platform”, if you will—then I will not only not be using them, I’ll be actively arguing against their use.

I really hope that doesn’t happen, but I must admit I’m not hopeful—my inner pendulum has swung firmly back towards the nervous end of my anticipation spectrum. That’s because I’m getting the distinct impression that the priorities being considered for Web Components are those of JavaScript framework creators, rather than web developers looking to add incremental improvements while maintaining backward compatibility.

If that’s the case, then Web Components will be made in the image of existing monolithic MVC frameworks that require JavaScript to do anything, even rendering content. To me, that’s a dystopian vision, one I can’t get behind.

Tuesday, January 27th, 2015

A question of timing

I’ve been updating my collection of design principles lately, adding in some more examples from Android and Windows. Coincidentally, Vasilis unveiled a neat little page that grabs one list of principles at random—just keep refreshing to see more.

I also added this list of seven principles of rich web applications to the collection, although they feel a bit more like engineering principles than design principles per se. That said, they’re really, really good. Every single one is rooted in performance and the user’s experience, not developer convenience.

Don’t get me wrong: developer convenience is very, very important. Nobody wants to feel like they’re doing unnecessary work. But I feel very strongly that the needs of the end user should trump the needs of the developer in almost all instances (you may feel differently and that’s absolutely fine; we’ll agree to differ).

That push and pull between developer convenience and user experience is, I think, most evident in the first principle: server-rendered pages are not optional. Now before you jump to conclusions, the author is not saying that you should never do client-side rendering, but instead points out the very important performance benefits of having the server render the initial page. After that—if the user’s browser cuts the mustard—you can use client-side rendering exclusively.
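In case you haven’t come across it, “cutting the mustard” is the BBC’s name for a simple feature-detection gate: one if statement that decides whether a browser gets the client-side code at all. A minimal sketch along those lines (the bundle filename is made up):

// Only capable browsers get the client-side rendering code;
// everyone else sticks with the server-rendered page.
if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
    var script = document.createElement('script');
    script.src = '/js/client-side-app.js'; // hypothetical bundle
    document.head.appendChild(script);
}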

The issue with that hybrid approach—as I’ve discussed before—is that it’s hard. Isomorphic JavaScript (terrible name) can theoretically help here, but I haven’t seen too many examples of it in action. I suspect that’s because this approach doesn’t yet offer enough developer convenience.

Anyway, I found myself nodding along enthusiastically with that first of seven design principles. Then I got to the second one: act immediately on user input. That sounds eminently sensible, and it’s backed up with sound reasoning. But it finishes with:

Techniques like PJAX or TurboLinks unfortunately largely miss out on the opportunities described in this section.

Ah. See, I’m a big fan of PJAX. It’s essentially the same thing as the Hijax technique I talked about many years ago in Bulletproof Ajax, but with the new addition of HTML5’s History API. It’s a quick’n’dirty way of giving the illusion of a fat client: all the work is actually being done in the server, which sends back chunks of HTML that update the interface. But it’s true that, because of that round-trip to the server, there’s a bit of a delay and so you often end up briefly displaying a loading indicator.

I contend that spinners or “loading indicators” should become a rarity

I agree …but I also like using PJAX/Hijax. Now how do I reconcile what’s best for the user experience with what’s best for my own developer convenience?

I’ve come up with a compromise, and you can see it in action on The Session. There are multiple examples of PJAX in action on that site, like pretty much any page that returns paginated results: new tune settings, the latest events, and so on. The steps for initiating an Ajax request used to be:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Display a loading indicator,
  4. Request the new data from the server, and
  5. Update the page with the new data.

In one sense, I am acting immediately on user input, because I always display the loading indicator straight away. But because the loading indicator always appears, no matter how fast or slow the server responds, it sometimes only appears very briefly—just for a flash. In that situation, I wonder if it’s serving any purpose. It might even be doing the opposite to its intended purpose—it draws attention to the fact that there’s a round-trip to the server.

“What if”, I asked myself, “I only showed the loading indicator if the server is taking too long to send a response back?”

The updated flow now looks like this (there’s a rough code sketch after the list):

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Start a timer, and
  4. Request the new data from the server.
  5. If the timer reaches an upper limit, show a loading indicator.
  6. When the server sends a response, cancel the timer and
  7. Update the page with the new data.
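Here’s a stripped-down sketch of that pattern. The 250 millisecond figure is discussed below; showLoading, hideLoading, and updatePage are hypothetical helpers, not the actual code from The Session.

// Only show the loading indicator if the response is slow to arrive.
var LIMIT = 250; // milliseconds

function fetchPage(url) {
    var timer = setTimeout(showLoading, LIMIT);
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function () {
        clearTimeout(timer); // a fast response means the indicator never shows
        hideLoading();
        updatePage(xhr.responseText);
        history.pushState(null, '', url); // keep the URL in sync (PJAX)
    };
    xhr.send();
}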

Even though there are more steps, there’s actually less happening from the user’s perspective. Where previously you would experience this:

  1. I click on a button,
  2. I briefly see a loading indicator,
  3. I see the new data.

Now your experience is:

  1. I click on a button,
  2. I see the new data.

…unless the server or the network is taking too long, in which case the loading indicator appears as an interim step.

The question is: how long is too long? How long do I wait before showing the loading indicator?

The Nielsen Norman Group offers this bit of research:

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

So I should set my timer to 100 milliseconds. In practice, I found that I can set it to as high as 200 to 250 milliseconds and keep it feeling very close to instantaneous. Anything over that, though, and it’s probably best to display a loading indicator: otherwise the interface starts to feel a little sluggish, and slightly uncanny. (“Did that click do any—? Oh, it did.”)

You can test the response time by looking at some of the simpler pagination examples on The Session: new recordings or new discussions, for example. To see examples of when the server takes a bit longer to send a response, you can try paginating through search results. These take longer because, frankly, I’m not very good at optimising some of those search queries.

There you have it: an interface that—under optimal conditions—reacts to user input instantaneously, but falls back to displaying a loading indicator when conditions are less than ideal. The result is something that feels like a client-side web thang, even though the actual complexity is on the server.

Now to see what else I can learn from the rest of those design principles.

Friday, January 23rd, 2015

Angular momentum

I was chatting with some people recently about “enterprise software”, trying to figure out exactly what that phrase means (assuming it isn’t referring to the LCARS operating system favoured by the United Federation of Planets). I always thought of enterprise software as “big, bloated and buggy,” but those are properties of the software rather than a definition.

The more we discussed it, the clearer it became that the defining attribute of enterprise software is that it’s software you never chose to use: someone else in your organisation chose it for you. So the people choosing the software and the people using the software could be entirely different groups.

That old adage “No one ever got fired for buying IBM” is the epitome of the world of enterprise software: it’s about risk-aversion, and it doesn’t necessarily prioritise the interests of the end user (although it doesn’t have to be that way).

In his critique of AngularJS, PPK points to an article discussing the framework’s suitability for enterprise software and says:

Angular is aimed at large enterprise IT back-enders and managers who are confused by JavaScript’s insane proliferation of tools.

My own anecdotal experience suggests that Angular is not only suitable for enterprise software, but—assuming the definition provided above—Angular is enterprise software. In other words, the people deciding that something should be built in Angular are not necessarily the same people who will be doing the actual building.

Like I said, this is just anecdotal, but it’s happened more than once that a potential client has approached Clearleft about a project, and made it clear that they’re going to be building it in Angular. Now, to me, that seems weird: making a technical decision about what front-end technologies you’ll be using before even figuring out what your website needs to do.

Ah, but there’s the rub! It’s only weird if you think of Angular as a front-end technology. The idea of choosing a back-end technology (PHP, Ruby, Python, whatever) before knowing what your website needs to do doesn’t seem nearly as weird to me—it shouldn’t matter in the least what programming language is running on the server. But Angular is a front-end technology, right? I mean, it’s written in JavaScript and it’s executed inside web browsers. (By the way, when I say “Angular”, I’m using it as shorthand for “Angular and its ilk”—this applies to pretty much all the monolithic JavaScript MVC frameworks out there.)

Well, yes, technically Angular is a front-end framework, but conceptually and philosophically it’s much more like a back-end framework (actually, I think it’s conceptually closest to a native SDK; something more akin to writing iOS or Android apps, while others compare it to ASP.NET). That’s what PPK is getting at in his follow-up post, Front end and back end. In fact, one of the rebuttals to PPK’s original post basically makes exactly the same point as PPK was making: Angular is for making (possibly enterprise) applications that happen to be on the web, but are not of the web.

On the web, but not of the web. I’m well aware of how vague and hand-wavey that sounds so I’d better explain what I mean by that.

The way I see it, the web is more than just a set of protocols and agreements—HTTP, URLs, HTML. It’s also built with a set of principles that—much like the principles underlying the internet itself—are founded on ideas of universality and accessibility. “Universal access” is a pretty good rallying cry for the web. Now, the great thing about the technologies we use to build websites—HTML, CSS, and JavaScript—is that universal access doesn’t have to mean that everyone gets the same experience.

Yes, like a broken record, I am once again talking about progressive enhancement. But honestly, that’s because it maps so closely to the strengths of the web: you start off by providing a service, using the simplest of technologies, that’s available to anyone capable of accessing the internet. Then you layer on all the latest and greatest browser technologies to make the best possible experience for the greatest number of people. But crucially, if any of those enhancements aren’t available to someone, that’s okay; they can still accomplish the core tasks.

So that’s one view of the web. It’s a view of the web that I share with other front-end developers with a background in web standards.

There’s another way of viewing the web. You can treat the web as a delivery mechanism. It is a very, very powerful delivery mechanism, especially if you compare it to alternatives like CD-ROMs, USB sticks, and app stores. As long as someone has the URL of your product, and they have a browser that matches the minimum requirements, they can have instant access to the latest version of your software.

That’s pretty amazing, but the snag for me is that bit about having a browser that matches the minimum requirements. For me, that clashes with the universality that lies at the heart of the World Wide Web. Sites built in this way are on the web, but are not of the web.

This isn’t anything new. If you think about it, sites that used the Flash plug-in to deliver their experience were on the web, but not of the web. They were using the web as a delivery mechanism, but they weren’t making use of the capabilities of the web for universal access. As long as you have the Flash plug-in, you get 100% of the intended experience. If you don’t have the plug-in, you get 0% of the intended experience. The modern equivalent is using a monolithic JavaScript library like Angular. As long as your browser (and network) fulfils the minimum requirements, you should get 100% of the experience. But if your browser falls short, you get nothing. In other words, Angular and its ilk treat the web as a platform, not a continuum.

If you’re coming from a programming environment where you have a very good idea of what the runtime environment will be (e.g. a native app, a server-side script) then this idea of having minimum requirements for the runtime environment makes total sense. But, for me, it doesn’t match up well with the web, because the web is accessed by web browsers. Plural.

It’s telling that we’ve fallen into the trap of talking about what “the browser” is capable of, as though it were indeed a single runtime environment. There is no single “browser”, there are multiple, varied, hostile browsers, with differing degrees of support for front-end technologies …and that’s okay. The web was ever thus, and despite the wishes of some people that we only code for a single rendering engine, the web will—I hope—always have this level of diversity and competition when it comes to web browsers (call it fragmentation if you like). I not only accept that the web is this messy, chaotic place that will be accessed by a multitude of devices, I positively welcome it!

The alternative is to play a game of “let’s pretend”: Let’s pretend that web browsers can be treated like a single runtime environment; Let’s pretend that everyone is using a capable browser on a powerful device.

The problem with playing this game of “let’s pretend” is that we’ve played it before and it never works out well: Let’s pretend that everyone has a broadband connection; Let’s pretend that everyone has a screen that’s at least 960 pixels wide.

I refused to play that game in the past and I still refuse to play it today. I’d much rather live with the uncomfortable truth of a fragmented, diverse landscape of web browsers than live with a comfortable delusion.

The alternative—to treat “the browser” as though it were a known quantity—reminds me of the punchline to all those physics jokes that go “Assume a perfectly spherical cow…”

Monolithic JavaScript frameworks like Angular assume a perfectly spherical browser.

If you’re willing to accept that assumption—and say to hell with the 250,000,000 people using Opera Mini (to pick just one example)—then Angular is a very powerful tool for helping you build something that is on the web, but not of the web.

Now I’m not saying that this way of building is wrong, just that it is at odds with my own principles. That’s why Angular isn’t necessarily a bad tool, but it’s a bad tool for me.

We often talk about opinionated software, but the truth is that all software is opinionated, because all software is built by humans, and humans can’t help but imbue their beliefs and biases into what they build (Tim Berners-Lee’s World Wide Web being a good example of that).

Software, like all technologies, is inherently political. … Code inevitably reflects the choices, biases and desires of its creators.

—Jamais Cascio

When it comes to choosing software that’s supposed to help you work faster—a JavaScript framework, for example—there are many questions you can ask: Is the code well-written? How big is the file size? What’s the browser support? Is there an active community maintaining it? But all of those questions are secondary to the most important question of all, which is “Do the beliefs and assumptions of this software match my own beliefs and assumptions?”

If the answer to that question is “yes”, then the software will help you. But if the answer is “no”, then you will be constantly butting heads with the software. At that point it’s no longer a useful tool for you. That doesn’t mean it’s a bad tool, just that it’s not a good fit for your needs.

That’s the reason why you can have one group of developers loudly proclaiming that a particular framework “rocks!” and another group proclaiming equally loudly that it “sucks!”. Neither group is right …and neither group is wrong. It comes down to how well the assumptions of that framework match your own worldview.

Now when it comes to a big MVC JavaScript framework like Angular, this issue is hugely magnified because the software is based on such a huge assumption: a perfectly spherical browser. This is exemplified by the architectural decision to do client-side rendering with client-side templates (as opposed to doing server-side rendering with server-side templates, also known as serving websites). You could try to debate the finer points of which is faster or more efficient, but it’s kind of like trying to have a debate between an atheist and a creationist about the finer points of biology—the fundamental assumptions of both parties are so far apart that it makes a rational discussion nigh-on impossible.

(Incidentally, Brett Slatkin ran the numbers to compare the speed of client-side vs. server-side rendering. His methodology is very telling: he tested in Chrome and …another Chrome. “The browser” indeed.)

So …depending on the way you view the web—“universal access” or “delivery mechanism”—Angular is either of no use to you, or is an immensely powerful tool. It’s entirely subjective.

But the problem is that if Angular is indeed enterprise software—i.e. somebody else is making the decision about whether or not you will be using it—then you could end up in a situation where you are forced to use a tool that not only doesn’t align with your principles, but is completely opposed to them. That’s a nightmare scenario.

Tuesday, January 20th, 2015

Lining up Responsive Day Out 3

I’ve been scheming away for a little while now on the third and final Responsive Day Out, and things have been working out better than I could have hoped—my dream line-up is becoming a reality.

Two thirds of the line-up is assembled and ready to go, and it’s looking pretty darn good, if you ask me.

You can expect plenty of meaty front-end development topics around the latest in CSS and browser APIs, but also plenty of talk on process, accessibility, performance, and the design challenges of responsive design.

My plan is to go out with a bang for this last Responsive Day Out and, the way things are looking, that’s on the cards.

I’ll let you know when tickets will be available. It’ll probably be sometime in early March. They will, as with previous years, be ludicrously good value.

Oh, and to get you in the mood, this might be a good time to revisit the audio recordings from the first two years.

Friday, January 9th, 2015

Pointless

I’ve spoken at quite a few events over the last few years (2014 was a particularly busy year). Many—in fact, most—of those events were overseas. Quite a few were across the Atlantic Ocean, so I’ve partaken of quite a few transatlantic flights.

Most of the time, I’d fly British Airways. They generally have direct flights to most of the US destinations where those speaking engagements were happening. This means that I racked up quite a lot of frequent-flyer miles, or as British Airways labels them, “avios.”

Frequent-flyer miles were doing gamification before gamification was even a thing. You’re lured into racking up your count, even though it’s basically a meaningless number. With BA, for example, after I’d accumulated a hefty balance of avios points, I figured I’d try to use them to pay for an upcoming flight. No dice. You can increase your avios score all you like; when it actually comes to spending them, computer says “no.”

So my frequent-flyer miles were basically like bitcoins—in one sense, I had accumulated all this wealth, but in another sense, it was utterly worthless.

(I’m well aware of just how first-world-problemy this sounds: “Oh, it’s simply frightful how inconvenient it is for one to spend one’s air miles these days!”)

Early in 2014, I decided to flip it on its head. Instead of waiting until I needed to fly somewhere and then trying to spend my miles to get there (which never worked), I instead looked at where I could possibly get to, given my stash of avios points. The BA website was able to tell me, “hey, you can fly to Japan and back …if you travel in the off-season …in about eight months’ time.”

Alrighty, then. Let’s do that.

Now, even if you can book a flight using avios points, you still have to pay all the taxes and surcharges for the flight (death and taxes remain the only certainties). The taxes for two people to fly from London to Tokyo and back are not inconsiderable.

But here’s the interesting bit: the taxes are a fixed charge; they don’t vary according to what class you’re travelling in. So when I was booking the flight, I was basically presented with the option to spend X amount of unspendable imaginary currency to fly economy, or more of that unspendable imaginary currency to fly business class, or even more of the same unspendable imaginary currency to fly—get this—first class!

Hmmm …well, let me think about that decision for almost no discernible length of time. Of course I’m going to use as many of those avios points as I can! After all, what’s the point of holding on to them—it’s not like they’re of any use.

The end result is that tomorrow, myself and Jessica are going to fly from Heathrow to Narita …and we’re going to travel in the first class cabin! Squee!

Not only that, but it turns out that there are other things you can spend your avios points on after all. One of those things is hotel rooms. So we’ve managed to spend even more of the remaining meaningless balance of imaginary currency on some really nice hotels in Tokyo.

We’ll be in Japan for just over a week. We’ll start in Tokyo, head down to Kyoto, do a day trip to Mount Kōya, and then end up back in Tokyo.

We are both ridiculously excited about this trip. I’m actually going somewhere overseas that doesn’t involve speaking at a conference—imagine that!

There’s so much to look forward to—Sushi! Ramen! Yakitori!

And all it cost us was a depletion of an arbitrary number of points in a made-up scoring mechanism.