An associative trail

Every now and then, I like to revisit Vannevar Bush’s classic article from the July 1945 edition of the Atlantic Monthly called As We May Think in which he describes a theoretical machine called the memex.

A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

1945! Apart from its analogue rather than digital nature, it’s a remarkably prescient vision. In particular, there’s the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Many decades later, Anne Washington ponders what a legal memex might look like:

My legal Memex builds a network of the people and laws available in the public records of politicians and organizations. The infrastructure for this vision relies on open data, free access to law, and instantaneous availability.

As John Sheridan from the UK’s National Archives points out, hypertext is the perfect medium for laws:

Despite the drafter’s best efforts to create a narrative structure that tells a story through the flow of provisions, legislation is intrinsically non-linear content. It positively lends itself to a hypertext based approach. The need for legislation to escape the confines of the printed form predates all the major innovators and innovations in hypertext, from Vannevar Bush’s vision in “As We May Think”, to Ted Nelson’s coining of the term “hypertext”, through to Tim Berners-Lee’s breakthrough world wide web. I like to think that Nelson’s concept of transclusion was foreshadowed several decades earlier by the textual amendment (where one Act explicitly alters – inserts, omits or amends – the text of another Act, an approach introduced to UK legislation at the beginning of the 20th century).

That’s from a piece called Deeply Intertwingled Laws. The verb “to intertwingle” was another one of Ted Nelson’s neologisms.

There’s an associative trail from Vannevar Bush to Ted Nelson that takes some other interesting turns…

Picture a new American naval recruit in 1945, getting ready to ship out to the Pacific to fight against the Japanese. Just as the ship is leaving the harbour, word comes through that the war is over. And so instead of fighting across the islands of the Pacific, this young man finds himself in a hut in the Philippines, reading whatever is to hand. There’s a copy of The Atlantic Monthly, the one with an article called As We May Think. The sailor was Douglas Engelbart, and a few years later, when he was deciding how he wanted to spend the rest of his life, that article led him to pursue the goal of augmenting human intellect. He gave the mother of all demos, featuring NLS, a working hypermedia system.

Later, thanks to Bill Atkinson, we’d get another system called HyperCard. It was advertised with the motto Freedom to Associate, in an advertising campaign that directly referenced Vannevar Bush.

And now I’m using the World Wide Web, a hypermedia system that takes in the whole planet, to create an associative trail. In this post, I’m linking (without asking anyone for permission) to six different sources, and in doing so, I’m creating a unique associative trail. And because this post has a URL (that won’t change), you are free to take it and make it part of your own associative trail on your digital memex.

Teaching in Porto, day four

Day one covered HTML (amongst other things), day two covered CSS, and day three covered JavaScript. Each one of those days involved a certain amount of hands-on coding, with the students getting their hands dirty with angle brackets, curly braces, and semi-colons.

Day four was a deliberate step away from all that. No more laptops, just paper. Whereas the previous days had focused on collaboratively working on a single document, today I wanted everyone to work on a separate site.

The sites were generated randomly. I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

For a bit of fun, the first brainstorming exercise (run as a 6-up) was to come up with potential names for this service—4 minutes for 6 ideas. Then we went around the table, shared the ideas, got feedback, and settled on the names.

Now I asked everyone to come up with a one-sentence mission statement for their newly-named service. This was a good way of teasing out the most important verbs and nouns, which led nicely into the next task: answering the question “what is the core functionality?”

If that sounds familiar, it’s because it’s the first part of the three-step process I outlined in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

We did some URL design, figuring out what structures would make sense for straightforward GET requests, like:

  • /things
  • /things/ID
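
To make those patterns concrete, here’s a rough sketch of how they might map to route handlers on a server. This is my illustration rather than anything we wrote in the workshop—it assumes Node with Express, and picks “books” as the thing:

const express = require('express');
const app = express();

// The collection: a straightforward GET request for all the things.
app.get('/books', (request, response) => {
  response.send('a list of all the books');
});

// A single thing, addressed by its ID.
app.get('/books/:id', (request, response) => {
  response.send('the book with the ID ' + request.params.id);
});

app.listen(8080);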

Then, once it was clear what the primary “thing” was (a car, a book, etc.), I asked them to write down all the pieces that might appear on such a page; one post-it note per item, e.g. “title”, “description”, “img”, “rating”, etc.

The next step involved prioritisation. They took those post-it notes and put them on the wall, but they had to put them in a vertical line from top to bottom in decreasing order of importance. This can be a challenge, but it’s better to solve these problems now rather than later.

Okay. Now I asked them to “mark up” those vertical lists of post-it notes: writing HTML tag names by each one. Doing this before any visual design meant they were thinking about the meaning of the content first.

After that, we did a good ol’ fashioned classic 6-up sketching exercise, followed by critique (including a “designated dissenter” for each round). At this point, I was encouraging them to go crazy with ideas—they already had the core functionality figured out (with plain ol’ client/server requests and responses) so they could add all the bells and whistles they wanted on top of that.

We finished up with a discussion of some of those bells and whistles, and how they could be used to improve the user experience: Ajax, geolocation, service workers, notifications, background sync …the sky’s the limit.

It was a whirlwind tour for just one day but I think it helped emphasise the importance of thinking about the fundamentals before adding enhancements.

This marked the end of the structured masterclass lessons. Tomorrow I’m around to answer any miscellaneous questions (if I can) and chat to the students individually while they work on their term projects.

Resilience retires

I spoke at the GOTO conference in Berlin this week. It was the final outing of a talk I’ve been giving for about a year now called Resilience.

Looking back over my speaking engagements, I reckon I must have given this talk—in one form or another—about sixteen times. If by some statistical fluke or through skilled avoidance strategies you managed not to see the talk, you can still have it rammed down your throat by reading a transcript of the presentation.

That particular outing is from Beyond Tellerrand earlier this year in Düsseldorf. That’s one of the events that recorded a video of the talk. Here are all the videos of it I could find.

Or, if you prefer, here’s an audio file. And here are the slides but they won’t make much sense by themselves.

Resilience is a mixture of history lesson and design strategy. The history lesson is about the origins of the internet and the World Wide Web. The design strategy is a three-pronged approach:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

And if you like that tweet-sized strategy, you can get it on a poster. Oh, and check this out: Belgian student Sébastian Seghers published a school project on the talk.

Now, you might be thinking that the three-headed strategy sounds an awful lot like progressive enhancement, and you’d be right. I think every talk I’ve ever given has been about progressive enhancement to some degree. But with this presentation I set myself a challenge: to talk about progressive enhancement without ever using the phrase “progressive enhancement”. This is something I wrote about last year—if the term “progressive enhancement” is commonly misunderstood by the very people who would benefit from hearing this message, maybe it’s best to not mention that term and talk about the benefits of progressive enhancement instead: robustness, resilience, and technical credit. I think that little semantic experiment was pretty successful.

While the time has definitely come to retire the presentation, I’m pretty pleased with it, and I feel like it got better with time as I adjusted the material. The most common format for the talk was 40 to 45 minutes long, but there was an extended hour-long “director’s cut” that only appeared at An Event Apart. That included an entire subplot about Arthur C. Clarke and the invention of the telegraph (I’m still pretty pleased with the segue I found to weave those particular threads together).

Anyway, with the Resilience talk behind me, my mind is now occupied with the sequel: Evaluating Technology. I recently shared my research material for this one and, as you may have gathered, it takes me a loooong time to put a presentation like this together (which, by the same token, is one of the reasons why I end up giving the same talk multiple times within a year).

This new talk had its debut at An Event Apart in San Francisco two weeks ago. Jeffrey wrote about it and I’m happy to say he liked it. This bodes well—I’m already booked in for An Event Apart Seattle in April. I’ll also be giving an abridged version of this new talk at next year’s Render conference.

But that’s it for my speaking schedule for now. 2016 is all done and dusted, and 2017 is looking wide open. I hope I’ll get some more opportunities to refine and adjust the Evaluating Technology talk at some more events. If you’re a conference organiser and it sounds like something you’d be interested in, get in touch.

In the meantime, it’s time for me to pack away the Resilience talk, and wheel it down into the archives, just like the closing scene of Raiders Of The Lost Ark. The music swells. The credits roll. The image fades to black.

Choice

Laurie Voss has written a thoughtful article called Web development has two flavors of graceful degradation in response to Nolan Lawson’s recent article. But I’m afraid I don’t agree with Laurie’s central premise:

…web app development and web site development are so different now that they probably shouldn’t be called the same thing anymore.

This is an idea I keep returning to, and each time I do, I find that it just isn’t that simple. There are very few web thangs that are purely interactive without any content, and there are also very few web thangs that are purely passive without any interaction. Instead, it’s a spectrum. Quite often, the position on that spectrum changes according to the needs of the user at any particular time—are Twitter and Flickr web sites while I’m viewing text and images, but then transmogrify into web apps the moment I want to add, update, or delete a piece of text or an image?

In any case, the more interesting question than “is something a web site or a web app?” is the question “why?” Why does it matter? In my experience, the answer to that question generally comes down to the kind of architectural approach that a developer will take.

That’s exactly what Laurie dives into in his post. For web apps, use one architectural approach—for web sites, use a different architectural approach. To summarise:

  • in a web app, front-load everything and rely on client-side JavaScript for all subsequent interaction,
  • in a web site, optimise for many page loads, and make sure you don’t rely on client-side JavaScript.

I’m oversimplifying here, but the general idea is:

  • build web apps with the single page app architecture,
  • build web sites with progressive enhancement.

That’s sensible advice, but I’m worried that it could lead to a tautological definition of what constitutes a web app:

  1. This is a web app so it’s built as a single page app.
  2. Why do you define it as a web app?
  3. Because it’s built as a single page app.

The underlying question of what makes something a web app is bypassed by the architectural considerations …but the architectural considerations should be based on that underlying question. Laurie says:

If you are developing an app, the user ideally loads the app exactly once — whether it’s over a slow connection or not.

And similarly:

But if you are developing a web site consisting of many discrete pages, the act of loading goes from a single event to the most common event.

I completely agree that the architectural approach of single page apps is better suited to some kinds of web thangs than others. It’s a poor architectural choice for a content-based site like nasa.gov, for example. Progressive enhancement would make more sense there.

But I don’t think that the architectural choices need to be in opposition. It’s entirely possible to reconcile the two. It’s not always easy—and the further along that spectrum you are, the tougher it gets—but it’s doable. You can begin with progressive enhancement, and then build up to a single page app architecture for more capable browsers.
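
As a rough sketch of what that layering might look like (my illustration, not a prescribed pattern—the #main ID is a stand-in for whatever element wraps your page content): every link works as a normal server round-trip, and browsers that cut the mustard hijack navigation to behave like a single page app.

// Only capable browsers get the single page app behaviour;
// everywhere else, links trigger regular page loads.
if ('fetch' in window && 'pushState' in history) {
  document.addEventListener('click', event => {
    const link = event.target.closest('a');
    if (!link || link.origin !== location.origin) return; // leave other links alone
    event.preventDefault();
    fetch(link.href)
      .then(response => response.text())
      .then(html => {
        const doc = new DOMParser().parseFromString(html, 'text/html');
        document.querySelector('#main').innerHTML = doc.querySelector('#main').innerHTML;
        history.pushState(null, '', link.href);
      })
      .catch(() => {
        window.location.href = link.href; // if anything fails, fall back to a full page load
      });
  });
}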

I think that’s going to get easier as frameworks adopt a more mixed approach. Almost all the major libraries are working on server-side rendering as a default. Ember is leading the way with FastBoot, and Angular Universal is following. Neither of them are doing it for reasons of progressive enhancement—they’re doing it for performance and SEO—but the upshot is that you can more easily build a web app that simultaneously uses progressive enhancement and a single-page app model.

I guess my point is that I don’t think we should get too locked into the idea of web apps and web sites requiring fundamentally different approaches, especially given the changes in the technologies we use to build them.

We’ve made the mistake in the past of framing problems as “either/or”, when in fact, the correct solution was “both!”:

  • you can either have a desktop site or a mobile site,
  • you can either have rich interactivity or accessibility,
  • you can either have a single page app or progressive enhancement.

We don’t have to choose. It might take more work, but we can have our web cake and eat it.

The false dichotomy that I’m most concerned about is the pernicious idea that offline functionality is somehow in opposition to progressive enhancement. Given the design of service workers, I find this proposition baffling.

This remark by Tom is the very definition of a false dichotomy:

People who say your site should work without JavaScript are actually hurting the people they think they’re helping.

He was also linking to Nolan’s article, which could indeed be read as saying that you should build for offline instead of building with progressive enhancement. But I don’t think that’s what Nolan is saying (at least, I sincerely hope not). I think that Nolan is saying that we should prioritise the offline scenario over scenarios where JavaScript fails or isn’t available. That’s a completely reasonable thing to say. But the idea that we should build for the offline scenario instead of scenarios where JavaScript fails is absurdly reductionist. We don’t have to choose!

But I can certainly understand how developers might come to believe that building a progressive web app is at odds with progressive enhancement. Having made a bunch of progressive web apps—Huffduffer, The Session, this site—I can testify that service workers work superbly as a layer on top of an existing site, but all the messaging around progressive web apps seems fixated on the app-shell model (a small tweak to the single page app model, where a little bit of interface is available on the initial page load instead of requiring JavaScript for absolutely everything). Again, it’s entirely possible to reconcile the app-shell approach with server rendering and progressive enhancement, but nobody seems to be talking about that. Instead, all of the examples and demos are built with an assumption about JavaScript availability.

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

Tom’s tweet included a screenshot of this part of Nolan’s article:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

Those seem like a reasonable set of assumptions. But even there, things aren’t so simple. Will people really be using “an evergreen browser and WebView”? Millions of people use proxy browsers like Opera Mini, which means you can’t guarantee JavaScript availability beyond the initial page load. UC Browser—which can also run in proxy mode—is now the second most popular mobile browser in the world.

That’s just one nit-picky example, but what I’m getting at here is that it really isn’t safe to make any assumptions. When we must make assumptions, let’s try to make them a last resort.

And just to be clear: I’m not saying that because we can’t make assumptions about devices or browsers, we can’t build rich interactive web apps that work offline. I’m saying that we can build rich interactive web apps that work offline and also work when JavaScript fails or isn’t supported.

You don’t have to choose between progressive enhancement and a single page app/progressive web app/app shell/other things with the word “app”.

Progressive enhancement is an architectural approach to building on the web. You don’t have to use it, but please try to remember that it is your choice to make. You can choose to build a web app using progressive enhancement or not—there is nothing inherent in the nature of the thing you’re building that precludes progressive enhancement.

Personally, I find progressive enhancement a sensible way to counteract any assumptions I might inadvertently make. Progressive enhancement increases the chances that the web site (or web app) I’m building is resilient to the kind of scenarios that I never would’ve predicted or anticipated.

That’s why I choose to use progressive enhancement …and build progressive web apps.

Extensible web components

Adam Onishi has written up his thoughts on web components and progressive enhancement, following on from a discussion we were having on Slack. He shares a lot of the same frustrations as I do.

Two years ago, I said:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

I still feel that way. In theory, web components are very exciting. In practice, web components are very worrying. The worrying aspect comes from the treatment of backwards compatibility.

It all comes down to the way custom elements work. When you make up a custom element, it’s basically a span.

<fancy-select></fancy-select>

Then, using JavaScript with shadow DOM, templates, and the other specs that together make up the web components ecosystem, you turn that inert span-like element into something all-singing, all-dancing. That’s great if the browser supports those technologies, and the JavaScript executes successfully. But if either of those conditions aren’t met, what you’re left with is basically a span.
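
For illustration, the upgrade might look something like this—a sketch of mine, not code from any spec or demo:

// Upgrade the inert <fancy-select> element. This only happens in browsers
// that support custom elements, and only if the script actually runs.
customElements.define('fancy-select', class extends HTMLElement {
  connectedCallback() {
    const shadow = this.attachShadow({mode: 'open'});
    shadow.innerHTML = '<button>Choose…</button>';
    // ...all the singing and dancing goes in here.
  }
});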

One of the proposed ways around this was to allow custom elements to extend existing elements (not just spans). The proposed syntax for this was an is attribute.

<select is="fancy-select">...</select>

Browser makers responded to this by saying “Nah, that’s too hard.”

To be honest, I had pretty much given up on the is functionality ever seeing the light of day, but Monica has rekindled my hope.

Still, I’m not holding my breath for this kind of declarative extensibility landing in browsers any time soon. Instead, a JavaScript-based way of extending existing elements is currently the only way of piggybacking on all the accessible behavioural goodies you get with native elements.

class FancySelect extends HTMLSelectElement
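
Spelled out in full, registering a customised built-in element looks something like this (my sketch, reusing the same hypothetical element name):

class FancySelect extends HTMLSelectElement {
  connectedCallback() {
    // piggyback on the native select's behaviour and enhance it here
  }
}
customElements.define('fancy-select', FancySelect, {extends: 'select'});
// used in markup as: <select is="fancy-select">...</select>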

But this imperative approach fails completely if custom elements aren’t supported, or if the JavaScript fails to execute. Now you’re back to having spans.

The presentation on web components at the Progressive Web App Dev Summit referred to this JavaScript-based extensibility as “progressively enhancing what’s already available”, which is a bit of a stretch, given how completely it falls apart in older browsers. It was kind of a weird talk, to be honest. After fifteen minutes of talking about creating elements entirely from scratch, there was a minute or two devoted to the is attribute and extending existing elements …before carrying on as though those two minutes never happened.

But even without any means of extending existing elements, it should still be possible to define custom elements that have some kind of fallback in non-supporting browsers:

<fancy-select>
 <select>...</select>
</fancy-select>

In that situation, you at least get a regular ol’ select element in older browsers (or in modern browsers before the JavaScript kicks in and uplifts the custom element).

Adam has a great example of this in his post:

I’ve been thinking of a gallery component lately, where you’d have a custom element, say <o-gallery> for want of a better example, and simply populate it with images you want to display, with custom elements and shadow DOM you can add all the rest, controls/layout etc. Markup would be something like:

<o-gallery>
 <img src="">
 <img src="">
 <img src="">
</o-gallery>

If none of the extra stuff loads, what do we get? Well you get 3 images on the page. You still get the content, but just none of the fancy interactivity.

Yes! This, in my opinion, is how we should be approaching the design of web components. This is what gets me excited about web components.

Then I look at pretty much all the examples of web components out there and my nervousness kicks in. Hardly any of them spare a thought for backwards-compatibility. Take a look, for example, at the entire contents of the body element for the Polymer Shop demo site:

<shop-app unresolved="">SHOP</shop-app>

This seems really odd to me, because I don’t think it’s a good way to “sell” a technology.

Compare service workers to web components.

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Service workers work well and fail well. If a browser supports service workers, the user gets all the benefits. If a browser doesn’t support service workers, the user gets the same experience they would have always had.

Web components (will) work well, but fail badly. If a browser supports web components, the user gets the experience that the developer has crafted using these new technologies. If a browser doesn’t support web components, the user gets …probably nothing. It depends on how the web components have been designed.

It’s so much easier to get excited about implementing service workers. You’ve literally got nothing to lose and everything to gain. That’s not the case with web components. Or at least not with the way they are currently being sold.

See, this is why I think it’s so important to put some effort into designing web components that have some kind of fallback. Those web components will work well and fail well.

Look at the way new elements are designed for HTML. Think of complex additions like canvas, audio, video, and picture. Each one has been designed with backwards-compatibility in mind—there’s always a way to provide fallback content.

Web components give us developers the same power that, up until now, only belonged to browser makers. Web components also give us developers the same responsibilities as browser makers. We should take that responsibility seriously.

Web components are supposed to be the poster child for The Extensible Web Manifesto. I’m all for an extensible web. But the way that web components are currently being built looks more like an endorsement of The Replaceable Web Manifesto. I’m not okay with a replaceable web.

Here’s hoping that my concerns won’t be dismissed as “piffle and tosh” again by the very people who should be thinking about these issues.

The Progressive Web App Dev Summit

I was in Amsterdam again at the start of last week for the Progressive Web App Dev Summit, organised by Google. Most of the talks were given by Google employees, but not all—this wasn’t just a European version of Google I/O. Representatives from Opera, Mozilla, Samsung, and Microsoft were also there, and there were quite a few case studies from independent companies. That was very gratifying to see.

Almost all the talks were related to progressive web apps. I say, “almost all” because there were occasional outliers. There was a talk on web components, which don’t have anything directly to do with progressive web apps (and I hope there won’t be any attempts to suggest otherwise), and another on rendering performance that had good advice for anyone building any kind of website. Most of the talks were about the building blocks of progressive web apps: HTTPS, Service Workers, push notifications, and all that jazz.

I was very pleased to see a move away from the suggestion that single page apps with the app-shell architecture were the only way of building progressive web apps.

There were lots of great examples of progressively enhancing existing sites into progressive web apps. Jeff Posnick’s talk was a step-by-step walkthrough of doing exactly that. Reading through the agenda, I was really happy to see this message repeated again and again:

In this session we’ll take an online-only site and turn it into a fully network-resilient, offline-first installable progressive web app. We’ll also break out of the app shell and look at approaches that better-suit traditional server-driven sites.

Progressive Web Apps should work everywhere for every user. But what happens when the technology and APIs are not available in your user’s browser? In this talk we will show you how you can think about and build sites that work everywhere.

Progressive Web Apps should load fast, work great offline, and progressively enhance to a better experience in modern browsers.

How do you put the “progressive” into your current web app?

You can (and should!) build for the latest and greatest browsers, but through a collection of fallbacks and progressive enhancements you can bring a lot of tomorrow’s web to yesterday’s browsers.

I think this is a really smart move. It’s a lot easier to sell people on incremental changes than it is to convince them to rip everything out and start from scratch (another reason why I’m dubious about any association between web components and progressive web apps—but I’ll save that for another post).

The other angle that I really liked was the emphasis on emerging markets, not just wealthy westerners. Tal Oppenheimer’s talk Building for Billions was superb, and Alex kicked the whole thing off with some great facts and figures on mobile usage.

In my mind, these two threads are very much related. Progressive enhancement allows us to have our progressive web app cake and eat it too: we can make websites that can be accessed on devices with limited storage and slow networks, while at the same time ensuring those same sites take advantage of all the newest features in the latest and greatest browsers. I talked to a lot of Google devs about ways to measure the quality of a progressive web app, and I’m coming to the conclusion that a truly high-quality site is one that can still be accessed by a proxy browser like Opera Mini, while providing a turbo-charged experience in the latest version of Chrome. If you think that sounds naive or unrealistic, then I think you might want to dive deeper into all the technologies that make progressive web apps so powerful—responsive design, Service Workers, a manifest file, HTTPS, push notifications; all of those features can and should be used in a layered fashion.

Speaking of Opera, Andreas kind of stole the show, demoing the latest interface experiments in Opera Mobile.

That ambient badging that Alex was talking about? Opera is doing it. The importance of being able to access URLs that I’ve been ranting about? Opera is doing it.

Then we had the idea to somehow connect it to the “pull-to-refresh” spinner, as a secondary gesture to the left or right.

Nice! I’m looking forward to seeing what other browsers come up with. It’s genuinely exciting to see all these different browser makers in complete agreement on which standards they want to support, while at the same time differentiating their products by competing on user experience. Microsoft recently announced that progressive web apps will be indexed in their app store just like native apps—a really interesting move.

The Progressive Web App Dev Summit wrapped up with a closing panel that I had the honour of hosting. I thought it was very brave of Paul to ask me to host this, considering my strident criticism of Google’s missteps.

Initially there were going to be six people on the panel. Then it became eight. Then I blinked and it suddenly became twelve. Less of a panel, more of a jury. Half the panelists were from Google and the other half were from Opera, Microsoft, Mozilla, and Samsung. Some of those representatives were a bit too media-trained for my liking: Ali from Microsoft tried to just give a spiel, and Alex Komoroske from Google wouldn’t give me a straight answer about whether he wants Android Instant apps to succeed—Jake was a bit more honest. I should have channelled my inner Paxman a bit more.

Needless to say, nobody from Apple was at the event. No surprise there. They’ve already promised to come to the next event. There won’t be an Apple representative on stage, obviously—that would be asking too much, wouldn’t it? But at least it looks like they’re finally making an effort to engage with the wider developer community.

All in all, the Progressive Web App Dev Summit was good fun. I found the event quite inspiring, although the sausage festiness of the attendees was depressing. It would be good if the marketing for these events reached a wider audience—I met a lot of developers who only found out about it a week or two before the event.

I really hope that people will come away with the message that they can get started with progressive web apps right now without having to re-architect their whole site. Right now the barrier to entry is having your site running on HTTPS. Once you’ve got that up and running, it’s pretty much a no-brainer to add a manifest file and a basic Service Worker—to boost performance if nothing else. From there, you’re in a great position to incrementally add more and more features—an offline-first approach with your Service Worker, perhaps? Or maybe start dabbling in push notifications. The great thing about all of these technologies (with the glaring exception of web components in their current state) is that you don’t need to bet the farm on any of them. Try them out. Use them as enhancements. You’ve literally got nothing to lose …and your users have everything to gain.

A web for everyone

I gave the closing talk at the Render conference in Oxford a few weeks back. It was a very smoothly-run event, the spiritual successor to jQuery UK.

In amongst the mix of talks there were a few emerging themes. Animation was covered from a few different angles by Val and Sara. Bruce, Jake, Ola, and I talked about Service Workers and offline functionality. But there were also some differences of opinion.

In her great talk—I’m Offline, Cool! Now What?—Ola outlined the many and varied offline use cases that drove the creation and philosophy of Hoodie. She described all the reasons why people need the web: for communication, for access to information, for empowerment, and for love. “Hell, yes!” I thought.

But then she said:

So since when is helping people to fulfil a basic need, progressive enhancement?

And even more forcefully:

This is why I think, putting offline first in the progressive enhancement slot is pure bullshit.

Strong words indeed! And I have to say I was a little puzzled by them.

Ola had demonstrated again and again just how fragile the network could be. That is absolutely correct. All too often, we make the assumption that people using our sites have a decent network connection. That’s not a safe assumption to make.

But the suggested solution—to rely on technologies like local storage, Service Workers, or other APIs—assumes a certain level of JavaScript capabilities in the devices and browsers out there. That’s an unsafe assumption to make.

I remember discussing this with Alex from Hoodie a while back. I was confused by the cognitive dissonance I was observing. It seems to me that, laudable as Hoodie’s offline-first goals are, they are swapping out one unstable dependency—the network—for a different unstable dependency—a set of JavaScript APIs.

(I remember Alex pointed out that Hoodie was intended primarily for web apps rather than web sites, and my response—predictably enough—was to say “Define web app”.)

I think I understand why Ola reacted so strongly to the suggestion that offline functionality should be added as an enhancement. I’ve seen the same reaction when I’ve said that beautiful typography on the web is an enhancement. I think that when I say something is an enhancement, what people hear is that something is just an enhancement. It sounds belittling. That’s not my intention, but I can understand how it could come across like that. Perhaps this is one reason why some people have a real issue with the term “progressive enhancement”.

I wish we could make offline functionality a requirement. But the reality is that not everyone is using a browser that supports the necessary technology. I wish we could make beautiful typography a requirement. But, again, the reality is that there will always be some browsers or devices that won’t be capable of executing that typography. Accepting these facets of reality might seem like admissions of defeat, but I actually find it quite liberating.

In her brilliant talk at Render, Ashley G. Williams channelled Carl Sagan, quoting from his book The Demon-Haunted World:

It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.

That’s how I feel we should approach building for the web. Let’s accept that network connections are unevenly distributed. Let’s also accept that browser features are unevenly distributed. Pretending that millions of Opera Mini users don’t exist isn’t a viable strategy. They too are people who want to communicate, to access information, to be empowered, and to love.

Pointing out that you can’t always rely on client-side JavaScript shouldn’t be taken as an admonishment. It’s an opportunity.

Karolina Szczur wrote a wonderful piece on Ev’s blog called The Web Isn’t Uniform. She noticed how many sites—Facebook, Airbnb, Basecamp—fail to render even basic useful information if the JavaScript fails to load. It’s a situation that many of us—with our fast connections, capable browsers, and modern devices—might never even notice.

It’s a privilege to be able to use bleeding edge technologies and devices, but let’s not forget basic accessibility and progressive enhancement. Ultimately, we’re building for the users, not for our own tastes or preferences.

Karolina asks that we, as makers of the web, have a little more empathy. If the comments on her article are anything to go by, that’s a tall order. All the usual tropes are rolled out—there’s the misunderstanding that progressive enhancement means making sure everything works without JavaScript (it doesn’t; it’s about the core functionality), and the evergreen argument that as soon as you’re building a web “app”, best practices, good engineering, and empathy can go out the window…

I strongly disagree that this has anything to do at all about empathy. Instead, it’s all about resources and priorities. Making a JS app is already hard enough, duplicating all that work so that it also works without JS is quite often just not practical.—Sacha Greif

But requiring that a site be functional when JavaScript is disabled, may not be a valid requirement anymore. HTML and CSS were originally created and designed for documents, not applications. Many websites these days should be considered apps rather then docs.—Dan Shappir

What you’re suggesting is that all these companies should write all their software twice, once in javascript and again in good ol’ html with forms, to cater to that point-whatever-percentage that has decided to break their own web browser by turning one of the three fundamental web technologies off. In what universe is this a reasonable request?—Erlend Halvorsen

JavaScript is as important as HTML. This is modern internet. If someone doesn’t have JavaScript, they should not be using the new applications that were possible because of JavaScript.—HarshaL

I am a web developer. I build web applications not web sites. What you say may be true for web sites with static pages displaying images and text.—R. Fancsiki

Ah, Medium! Where the opinions of self-entitled dudes flow like rain from the tech heavens.

While they were so busy defending the lack of basic functionality in all the examples that Karolina listed, they failed to notice the most important development:

Let’s build a web that works for everyone. That doesn’t mean everyone has to have the same experience. Let’s accept that there are all sorts of people out there accessing the web with all sorts of browsers on all sorts of devices.

What a fantastic opportunity!

Moderating EnhanceConf 2016

Last year I met up with Simon McManus in a Brighton pub where he told me about his plan to run a conference dedicated to progressive enhancement. “Sounds like a great idea!”, I said, and offered him any help I could.

With the experience of organising three dConstructs and three Responsive Days Out, I was able to offer some advice on the practical side of things like curation, costs and considerations. Simon also asked me to MC his event. I was only too happy to oblige. After all, I was definitely going to be at the conference—wild horses wouldn’t keep me away—and when have I ever turned down an opportunity to hog the mic?

Simon chose a name: EnhanceConf. He found a venue: The RSA in London. He settled on a date: March 4th, 2016. He also decided on a format, the same one as Responsive Day Out: four blocks of talks, each block consisting of three back-to-back 20 minute presentations followed by a group discussion and questions.

With all those pieces in place, it was time to put together a line-up. I weighed in with my advice and opinions there too, but the final result was all Simon’s …and what a great result it was.

Yesterday was the big day. I’m happy to report that it was a most splendid event: an inspiring collection of brilliant talks, expertly curated like a mixtape for the web.

Nat got the day off to a rousing start. They gave an overview of just how fragile and unpredictable the World Wide Web can be. To emphasise this, Anna followed with a detailed look at the many, many console browsers people are using. Then Stefan gave us a high-level view of sensible (and not-so-sensible) architectures for building on the web—a talk packed to the brim with ideas and connections to lessons from the past that really resonated with me.

Stefan, Nat and Anna

After that high-level view, the next section was a deep dive into strategies for building with progressive enhancement: building React apps that share code for rendering on the server and the client from Forbes; using Service Workers to create a delightful offline experience from Olly; taking a modular approach to how we structure our code and cut the mustard from Stu.

Stu, Olly and Forbes

The after-lunch session was devoted to design. It started with a good ol’ smackdown between Phil and Stephen, which I attempted to introduce in my best wrestling announcer voice. That was followed by a wonderfully thoughtful presentation by Adam Silver on Embracing Simplicity. Then Jen blew everyone away with a packed presentation of not just what’s possible with CSS now, but strategies for using the latest and greatest CSS today.

Adam, Stephen, Phil and Jen

Finally, the day finished with a look to the future. And the future is …words. Robin was as brilliant as ever, devising an exercise to get the audience to understand just how awful audio CAPTCHAs are, but also conveying his enthusiasm and optimism for voice interfaces. That segued perfectly into the next two talks. Stephanie gave us a crash course in crafting clear, concise copy, and Aaron tied that together with Robin’s musings on future interactions with voice in a great final presentation called Learn From the Past, Enhance for the Future (echoing the cyclical patterns that Stefan was talking about at the start of the day).

Closing panel

As the day wrapped up, I finished by pointing to a new site launched by Jamie on the very same day: progressiveenhancement.org. With that, my duties were fulfilled.

I thoroughly enjoyed listening to the talks and then quizzing the speakers afterwards. I really do enjoy moderating events. Some of the skills are basic (pronouncing people’s names correctly, using their preferred pronouns) and some are a little trickier (trying to quickly spot connections, turning those connections into questions for each speaker) but it’s very rewarding indeed.

I had a blast at EnhanceConf. I felt bad though; lots of people came up to me and started thanking me for a great day. “Don’t thank me!” I said, “Thank Simon.”

Thanks, Simon.

Enhance! Conf!

Two weeks from now there will be an event in London. You should go to it. It’s called EnhanceConf:

EnhanceConf is a one day, single track conference covering the state of the art in progressive enhancement. We will look at the tools and techniques that allow you to extend the reach of your website/application without incurring additional costs.

As you can probably guess, this is right up my alley. Wild horses wouldn’t keep me away from it. I’ve been asked to be Master of Ceremonies for the day, which is a great honour. Luckily I have some experience in that department from three years of hosting Responsive Day Out. In fact, EnhanceConf is going to run very much in the mould of Responsive Day Out, as organiser Simon explained in an interview with Aaron.

But the reason to attend is of course the content. Check out that line-up! Now that is going to be a knowledge-packed day: design, development, accessibility, performance …these are a few of my favourite things. Nat Buckley, Jen Simmons, Phil Hawksworth, Anna Debenham, Aaron Gustafson …these are a few of my favourite people.

Tickets are still available. Use the discount code JEREMYK to get a whopping 15% off the ticket price.

There’s also a scholarship:

The scholarships are available to anyone not normally able to attend a conference.

I’m really looking forward to EnhanceConf. See you at RSA House on March 4th!

Enhance’n’drag’n’drop

I’ve spent the last week implementing a new feature over at The Session. I had forgotten how enjoyable it is to get completely immersed in a personal project, thinking about everything from database structures right through to CSS animations.

I won’t bore you with the details of this particular feature—which is really only of interest if you play traditional Irish music—but I thought I’d make note of one little bit of progressive enhancement.

One of the interfaces needed for this feature was a form to re-order items in a list. So I thought to myself, “what’s the simplest technology to enable this functionality?” I came up with a series of select elements within a form.

Reordering

It’s not the nicest of interfaces, but it works pretty much everywhere. Once I had built that—and the back-end functionality required to make it all work—I could think about how to enhance it.

I brought it up at the weekly Clearleft front-end pow-wow (featuring special guest Jack Franklin). I figured that drag’n’drop would be the obvious enhancement, but I didn’t know if there were any “go-to” libraries for implementing it; I haven’t paid much attention to the state of drag’n’drop since the old IE implementation was added to HTML5.

Nobody had any particular recommendations so I did a bit of searching. I came across Dragula, which looked pretty solid. It’s made by the super-smart Nicolás Bevacqua, who I know shares my feelings about progressive enhancement. To my delight, I was able to get it working within minutes.

Drag and drop

There’s a little bit of mustard-cutting going on: does the dragula object exist, and does the browser understand querySelector? If so, the select elements are hidden and the drag’n’drop is enabled. Then, whenever an item in the list is dragged and dropped, the corresponding (hidden) select element is updated …so that time I spent making the simpler non-drag’n’drop interface was time well spent: I didn’t need to do anything extra on the server to handle the data from the updated interface.
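
In code, that enhancement looks something like this—a simplified sketch rather than the actual script from The Session, with placeholder class names:

// Cut the mustard: only enhance if the library loaded and the browser is capable.
if (window.dragula && document.querySelector) {
  const list = document.querySelector('.reorderable');
  // Hide the fallback interface...
  list.querySelectorAll('select').forEach(select => select.style.display = 'none');
  // ...and let Dragula handle the re-ordering.
  dragula([list]).on('drop', () => {
    // After each drop, write the new order back into the hidden selects,
    // so the server-side handling doesn't need to change at all.
    list.querySelectorAll('select').forEach((select, index) => {
      select.value = index + 1;
    });
  });
}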

It’s a simple example but it demonstrates the benefits of starting with the simpler universal interface before upgrading to the smoother experience.

Where to start?

A lot of the talks at this year’s Chrome Dev Summit were about progressive web apps. This makes me happy. But I think the focus is perhaps a bit too much on the “app” part and not enough on “progressive”.

What I mean is that there’s an inevitable tendency to focus on technologies—Service Workers, HTTPS, manifest files—and not so much on the approach. That’s understandable. The technologies are concrete, demonstrable things, whereas approaches, mindsets, and processes are far more nebulous in comparison.

Still, I think that the most important facet of building a robust, resilient website is how you approach building it rather than what you build it with.

Many of the progressive app demos use server-side and client-side rendering, which is great …but that aspect tends to get glossed over:

Browsers without service worker support should always be served a fall-back experience. In our demo, we fall back to basic static server-side rendering, but this is only one of many options.

I think it’s vital to not think in terms of older browsers “falling back” but to think in terms of newer browsers getting a turbo-boost. That may sound like a nit-picky semantic subtlety, but it’s actually a radical difference in mindset.

Many of the arguments I’ve heard against progressive enhancement—like Tom’s presentation at Responsive Field Day—talk about the burdensome overhead of having to bolt on functionality for older or less-capable browsers (even Jake has done this). But the whole point of progressive enhancement is that you start with the simplest possible functionality for the greatest number of users. If anything gets bolted on, it’s the more advanced functionality for the newer or more capable browsers.

So if your conception of progressive enhancement is that it’s an added extra, I think you really need to turn that thinking around. And that’s hard. It’s hard because you need to rewire some well-engrained pathways.

There is some precedent for this though. It was really, really hard to convince people to stop using tables for layout and start using CSS instead. That was a tall order—completely change the way you approach building on the web. But eventually we got there.

When Ethan came out with Responsive Web Design, it was an equally difficult pill to swallow, not because of the technologies involved—media queries, percentages, etc.—but because of the change in thinking that was required. But eventually we got there.

These kinds of fundamental changes are inevitably painful …at first. After years of building websites using tables for layout, creating your first CSS-based layout was demoralisingly difficult. But the second time was a bit easier. And the third time, easier still. Until eventually it just became normal.

Likewise with responsive design. After years of building fixed-width websites, trying to build in a fluid, flexible way was frustratingly hard. But the second time wasn’t quite as hard. And the third time …well, eventually it just became normal.

So if you’re used to thinking of the all-singing, all-dancing version of your site as the starting point, it’s going to be really, really hard to instead start by building the most basic, accessible version first and then work up to the all-singing, all-dancing version …at first. But eventually it will just become normal.

For now, though, it’s going to take work.

The recent redesign of Google+ is a true case study in building a performant, responsive, progressive site:

With server-side rendering we make sure that the user can begin reading as soon as the HTML is loaded, and no JavaScript needs to run in order to update the contents of the page. Once the page is loaded and the user clicks on a link, we do not want to perform a full round-trip to render everything again. This is where client-side rendering becomes important — we just need to fetch the data and the templates, and render the new page on the client. This involves lots of tradeoffs; so we used a framework that makes server-side and client-side rendering easy without the downside of having to implement everything twice — on the server and on the client.

This took work. Had they chosen to rely on client-side rendering alone, they could have built something quicker. But I think it was worth laying that solid foundation. And the next time they need to build something this way, it’s going to be less work. Eventually it just becomes normal.

But it all starts with thinking of the server-side rendering as the default. Server-side rendering is not a fallback; client-side rendering is an enhancement.

That’s exactly the kind of mindset that enables Jack Franklin to build robust, resilient websites:

Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end.

I had a chance to chat briefly with Jack at the Edge conference in London and I congratulated him on the launch of a GoCardless site that used exactly this technique. He told me that the decision to flip the switch and make it act as a single page app came right at the end of the project. Server-side rendering was the default; client-side rendering was added later.

The key to building modern, resilient, progressive sites doesn’t lie in browser technologies or frameworks; it lies in how we think about the task at hand; how we approach building from the ground up rather than the top down. Changing the way we fundamentally think about building for the web is inevitably going to be challenging …at first. But it will also be immensely rewarding.

Home screen

Remy posted a screenshot to Twitter last week.

A screenshot of adactio.com on an Android device showing an Add To Home Screen prompt.

That “Add To Home Screen” dialogue is not something that Remy explicitly requested (though, of course, you can—and should—choose to add adactio.com to your home screen). That prompt appears in Chrome on Android as the result of a fairly simple algorithm based on a few factors:

  1. The website is served over HTTPS. My site is.
  2. The website has a manifest file. Here’s my JSON manifest file.
  3. The website has a Service Worker. Here’s my site’s Service Worker script (although a little birdie told me that the Service Worker script can be as basic as a blank file).
  4. The user visits the website a few times over the course of a few days.
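
For reference, the manifest file in step two can be tiny. Something like this would do—placeholder values, not my actual manifest:

{
  "name": "Adactio: Journal",
  "short_name": "Adactio",
  "start_url": "/",
  "display": "browser",
  "icons": [
    {
      "src": "/icon.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}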

I think that’s a reasonable set of circumstances. I particularly like that there is no way of forcing the prompt to appear.

There are some carrots in there: Want to have the user prompted to add your site to their home screen? Well, then you need to be serving on a secure connection, and you’d better get on board that Service Worker train.

Speaking of which, after I published a walkthrough of my first Service Worker, I got an email bemoaning the lack of browser support:

I was very much interested myself in this topic, until I checked on the “Can I use…” site the availability of this technology. In one word “limited”. Neither Safari nor IOS Safari support it, at least now, so I cannot use it for implementing mobile applications.

I don’t think this is the right way to think about Service Workers. You don’t build your site on top of a Service Worker—you add a Service Worker on top of your existing site. It has been explicitly designed that way: you can’t make it the bedrock of your site’s functionality; you can only add it as an enhancement.

I think that’s really, really smart. It means that you can start implementing Service Workers today and as more and more browsers add support, your site will appear to get better and better. My site worked fine for fifteen years before I added a Service Worker, and on the day I added that Service Worker, it had no ill effect on non-supporting browsers.
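
The registration itself is just a couple of lines wrapped in feature detection—a sketch, with the script path being whatever you choose:

// Non-supporting browsers never execute the registration,
// so there's no ill effect for them.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}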

Oh, and according to the WebKit five year plan, Service Worker support is on its way. This doesn’t surprise me. I can’t imagine that Apple would let Google upstage them for too long with that nice “add to home screen” flow.

Alas, Mobile Safari’s glacial update cycle means that the earliest we’ll see improvements like Service Workers will probably be September or October of next year. In the age of evergreen browsers, Apple’s feast-or-famine approach to releasing updates is practically indistinguishable from stagnation.

Still, slowly but surely, game-changing technologies are landing in browsers. At the same time, the long-term problems with betting on native apps are starting to become clearer. Native apps are still ahead of what can be accomplished on the web, but it was ever thus:

The web will always be lagging behind some other technology. I’m okay with that. If anything, I see these other technologies as the research and development arm of the web. CD-ROMs, Flash, and now native apps show us what authors want to be able to do on the web. Slowly but surely, those abilities start becoming available in web browsers.

The pace of this standardisation can seem infuriatingly slow. Sometimes it is too slow. But it’s important that we get it right—the web should hold itself to a higher standard. And so the web plays the tortoise while other technologies race ahead as the hare.

It’s interesting to see how the web could take the desirable features of native—offline support, smooth animations, an icon on the home screen—without sacrificing the strengths of the web—linking, responsiveness, the lack of App Store gatekeepers. That kind of future is what Alex is calling progressive apps:

Critically, these apps can deliver an even better user experience than traditional web apps. Because it’s also possible to build this performance in as progressive enhancement, the tangible improvements make it worth building this way regardless of “appy” intent.

Flipkart recently launched something along those lines, although it’s somewhat lacking in the “enhancement” department; the core content is delivered via JavaScript—a fragile approach.

What excites me is the prospect of building services that work just fine on low-powered devices with basic browsers, but that also take advantage of all the great possibilities offered by the latest browsers running on the newest devices. Backwards compatible and future friendly.

And if that sounds like a naïve hope, then I humbly suggest that Service Workers are a textbook example of exactly that approach.

My first Service Worker

I’ve made no secret of the fact that I’m really excited about Service Workers. I’m not alone. At the Coldfront conference in Copenhagen, pretty much every talk mentioned Service Workers.

Obviously I’m excited about what Service Workers enable: offline caching, background processes, push notifications, and all sorts of other goodies that allow the web to compete with native. But more than that, I’m really excited about the way that the Service Worker spec has been designed. Instead of being an all-or-nothing technology that you have to bet the farm on, it has been deliberately crafted to be used as an enhancement on top of existing sites (oh, how I wish that web components would follow a similar path).

I’ve got plenty of ideas on how Service Workers could be used to enhance a community site like The Session or the kind of events sites that we produce at Clearleft, but to begin with, I figured it would make sense to use my own personal site as a playground.

To start with, I’ve already conquered the first hurdle: serving my site over HTTPS. Service Workers require a secure connection. But you can play around with running a Service Worker locally if you run a copy of your site on localhost.

That’s how I started experimenting with Service Workers: serving on localhost, and stopping and starting my local Apache server with apachectl stop and apachectl start on the command line.

That reminds me of another interesting use case for Service Workers: it’s not just about the user’s network connection failing (say, going into a train tunnel); it’s also about your web server not always being available. Both scenarios are covered equally.

I would never have even attempted to start if it weren’t for the existing examples from people who have been generous enough to share their work.

Also, I knew that Jake was coming to FF Conf so if I got stumped, I could pester him. That’s exactly what ended up happening (thanks, Jake!).

So if you decide to play around with Service Workers, please, please share your experience.

It’s entirely up to you how you use Service Workers. I figured for a personal site like this, it would be nice to:

  1. Explicitly cache resources like CSS, JavaScript, and some images.
  2. Cache the homepage so it can be displayed even when the network connection fails.
  3. For other pages, have a fallback “offline” page to display when the network connection fails.

So now I’ve got a Service Worker up and running on adactio.com. It will only work in Chrome, Android, Opera, and the forthcoming version of Firefox …and that’s just fine. It’s an enhancement. As more and more browsers start supporting it, this Service Worker will become more and more useful.

How very future friendly!

The code

If you’re interested in the nitty-gritty of what my Service Worker is doing, read on. If, on the other hand, code is not your bag, now would be a good time to bow out.

If you want to jump straight to the finished code, here’s a gist. Feel free to take it, break it, copy it, improve it, or do anything else you want with it.

To start with, let’s establish exactly what a Service Worker is. I like this definition by Matt Gaunt:

A service worker is a script that is run by your browser in the background, separate from a web page, opening the door to features which don’t need a web page or user interaction.

register

From inside my site’s global JavaScript file—or I could do this from a script element inside my pages—I’m going to do a quick bit of feature detection for Service Workers. If the browser supports it, then I’m going to register my Service Worker by pointing to another JavaScript file, which sits at the root of my site:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js', {
    scope: '/'
  });
}

The serviceworker.js file sits in the root of my site so that it can act on any requests to my domain. If I put it somewhere like /js/serviceworker.js, then it would only be able to act on requests to the /js directory.

Once that file has been loaded, the installation of the Service Worker can begin. That means the script will be installed in the user’s browser …and it will live there even after the user has left my website.

install

I’m making the installation of the Service Worker dependent on a function called updateStaticCache that will populate a cache with the files I want to store:

self.addEventListener('install', function (event) {
  event.waitUntil(updateStaticCache());
});

That updateStaticCache function will be used for storing items in a cache. I’m going to make sure that the cache has a version number in its name, exactly as described in the Guardian’s use case. That way, when I want to update the cache, I only need to update the version number.

var staticCacheName = 'static';
var version = 'v1::';

Here’s the updateStaticCache function that puts the items I want into the cache. I’m storing my JavaScript, my CSS, some images referenced in the CSS, the home page of my site, and a page for displaying when offline.

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
}

Because those items are part of the return statement for the Promise created by caches.open, the Service Worker won’t install until all of those items are in the cache. So you might want to keep them to a minimum.

You can still put other items in the cache, and not make them part of the return statement. That way, they’ll get added to the cache in their own good time, and the installation of the Service Worker won’t be delayed:

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      cache.addAll([
        '/path/to/somefile',
        '/path/to/someotherfile'
      ]);
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
}

Another option is to use completely different caches, but I’ve decided to just use one cache for now.

activate

When the activate event fires, it’s a good opportunity to clean up any caches that are out of date (by looking for anything that doesn’t match the current version number). I copied this straight from Nicolas’s code:

self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys()
      .then(function (keys) {
        return Promise.all(keys
          .filter(function (key) {
            return key.indexOf(version) !== 0;
          })
          .map(function (key) {
            return caches.delete(key);
          })
        );
      })
  );
});

fetch

The fetch event is fired every time the browser is going to request a file from my site. The magic of Service Worker is that I can intercept that request before it happens and decide what to do with it:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  ...
});

POST requests

For a start, I’m going to just back off from any requests that aren’t GET requests:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
  );
  return;
}

That’s basically just replicating what the browser would do anyway. But even here I could decide to fall back to my offline page if the request doesn’t succeed. I do that using a catch clause appended to the fetch statement:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
          .catch(function () {
              return caches.match('/offline');
          })
  );
  return;
}

HTML requests

I’m going to treat requests for pages differently to requests for files. If the browser is requesting a page, then here’s the order I want:

  1. Try fetching the page from the network first.
  2. If that doesn’t work, try looking for the page in the cache.
  3. If all else fails, show the offline page.

First of all, I need to test to see if the request is for an HTML document. I’m doing this by sniffing the Accept headers, which probably isn’t the safest method:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {

Now I try to fetch the page from the network:

event.respondWith(
  fetch(request)
);

If the network is working fine, this will return the response from the site and I’ll pass that along.

But if that doesn’t work, I’m going to look for a match in the cache. Time for a catch clause:

.catch(function () {
  return caches.match(request);
})

So now the whole event.respondWith statement looks like this:

event.respondWith(
  fetch(request)
    .catch(function () {
      return caches.match(request)
    })
);

Finally, I need to take care of the situation when the page can’t be fetched from the network and it can’t be found in the cache.

Now, I first tried to do this by adding a catch clause to the caches.match statement, like this:

return caches.match(request)
  .catch(function () {
    return caches.match('/offline');
  })

That didn’t work and, for the life of me, I couldn’t figure out why. Then Jake set me straight. It turns out that the promise returned by caches.match always resolves …even if it resolves to undefined because nothing matched. So a catch clause will never be triggered. Instead I need to return the offline page if the response from the cache is falsey:

return caches.match(request)
  .then(function (response) {
    return response || caches.match('/offline');
  })

With that cleared up, my code for handling HTML requests looks like this:

event.respondWith(
  fetch(request, { credentials: 'include' })
    .catch(function () {
      return caches.match(request)
        .then(function (response) {
          return response || caches.match('/offline');
        })
    })
);

Actually, there’s one more thing I’m doing with HTML requests. If the network request succeeds, I stash the response in the cache.

Well, that’s not exactly true. I stash a copy of the response in the cache. That’s because you’re only allowed to read the value of a response once. So if I want to do anything with it, I have to clone it:

var copy = response.clone();
caches.open(version + staticCacheName)
  .then(function (cache) {
    cache.put(request, copy);
  });

I do that right before returning the actual response. Here’s how it fits together:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {
  event.respondWith(
    fetch(request, { credentials: 'include' })
      .then(function (response) {
        var copy = response.clone();
        caches.open(version + staticCacheName)
          .then(function (cache) {
            cache.put(request, copy);
          });
        return response;
      })
      .catch(function () {
        return caches.match(request)
          .then(function (response) {
            return response || caches.match('/offline');
          })
      })
  );
  return;
}

Okay. So that’s requests for pages taken care of.

File requests

I want to handle requests for files differently to requests for pages. Here’s my list of priorities:

  1. Look for the file in the cache first.
  2. If that doesn’t work, make a network request.
  3. If all else fails, and it’s a request for an image, show a placeholder.

Step one: try getting the file from the cache:

event.respondWith(
  caches.match(request)
);

Step two: if that didn’t work, go out to the network. Now remember, I can’t use a catch clause here, because caches.match will always return something: either a response or undefined. So here’s what I do:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request);
    })
);

Now that I’m back to dealing with a fetch statement, I can use a catch clause to take care of the third and final step: if the network request doesn’t succeed, check to see if the request was for an image, and if so, display a placeholder:

.catch(function () {
  if (request.headers.get('Accept').indexOf('image') !== -1) {
    return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
  }
})

I could point to a placeholder image in the cache, but I’ve decided to send an SVG on the fly using a new Response object.

Here’s how the whole thing looks:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request)
        .catch(function () {
          if (request.headers.get('Accept').indexOf('image') !== -1) {
            return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
          }
        })
    })
);

The overall shape of my code to handle fetch events now looks like this:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  // Non-GET requests
  if (request.method !== 'GET') {
    event.respondWith(
      ... 
    );
    return;
  }
  // HTML requests
  if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    event.respondWith(
      ...
    );
    return;
  }
  // Non-HTML requests
  event.respondWith(
    ...
  );
});

Feel free to peruse the code.

Next steps

The code I’m running now is fine for a first stab, but there’s room for improvement.

Right now I’m stashing any HTML pages the user visits into the cache. I don’t think that will get out of control—I imagine most people only ever visit just a handful of pages on my site. But there’s the chance that the cache could get quite bloated. Ideally I’d have some way of keeping the cache nice and lean.

I was thinking: maybe I should have a separate cache for HTML pages, and limit the number in that cache to, say, 20 or 30 items. Every time I push something new into that cache, I could pop the oldest item out.

I could imagine doing something similar for images: keeping a cache of just the most recent 10 or 20.

If you fancy having a go at coding that up, let me know.
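
To give you a head start, here’s a rough, untested sketch of how that trimming might work. The function name is my own invention, and it assumes that cache.keys() returns entries in the order they were added:

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
    .then(function (cache) {
      cache.keys()
        .then(function (keys) {
          if (keys.length > maxItems) {
            // Delete the oldest entry, then check the count again
            cache.delete(keys[0])
              .then(function () {
                trimCache(cacheName, maxItems);
              });
          }
        });
    });
}

Calling something like trimCache(version + 'pages', 25) after stashing each new page would keep that hypothetical pages cache from growing indefinitely.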

Lessons learned

There were a few gotchas along the way. I already mentioned the fact that caches.match will always return something so you can’t use catch clauses to handle situations where a file isn’t found in the cache.

Something else worth noting is that this:

fetch(request);

…is functionally equivalent to this:

fetch(request)
  .then(function (response) {
    return response;
  });

That’s probably obvious but it took me a while to realise. Likewise:

caches.match(request);

…is the same as:

caches.match(request)
  .then(function (response) {
    return response;
  });

Here’s another thing… you’ll notice that sometimes I’ve used:

fetch(request);

…but sometimes I’ve used:

fetch(request, { credentials: 'include' });

That’s because, by default, a fetch request doesn’t include cookies. That’s fine if the request is for a static file, but if it’s for a potentially-dynamic HTML page, you probably want to make sure that the Service Worker request is no different from a regular browser request. You can do that by passing through that second (optional) argument.

But probably the trickiest thing is getting your head around the idea of Promises. Writing JavaScript is generally a fairly procedural affair, but once you start dealing with then clauses, you have to come to grips with the fact that the contents of those clauses will return asynchronously. So statements written after the then clause will probably execute before the code inside the clause. It’s kind of hard to explain, but if you find problems with your Service Worker code, check to see if that’s the cause.
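
Here’s a minimal illustration of what I mean—the URL is made up and the log messages are purely for demonstration:

console.log('one');
fetch('/some/file')
  .then(function (response) {
    console.log('two');
  });
console.log('three');

That logs “one”, then “three” …and only then “two”, once the response arrives.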

And remember, please share your code and your gotchas: it’s early days for Service Workers so every implementation counts.

Updates

I got some very useful feedback from Jake after I published this…

Expires headers

By default, JavaScript files on my server are cached for a month. But a Service Worker script probably shouldn’t be cached at all (or cached for a very, very short time). I’ve updated my .htaccess rules accordingly:

<FilesMatch "serviceworker.js">
  ExpiresDefault "now"
</FilesMatch>

Credentials

If a request is initiated by the browser, I don’t need to say:

fetch(request, { credentials: 'include' });

It’s enough to just say:

fetch(request);

Scope

I set the scope parameter of my Service Worker to be “/” …but because the Service Worker is sitting in the root directory anyway, I don’t really need to do that. I could just register it with:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}

If, on the other hand, the Service Worker file were sitting in a folder, but I wanted it to act on the whole site, then I would need to specify the scope:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/path/to/serviceworker.js', {
    scope: '/'
  });
}

…and I’d also need to send a special header. So it’s probably easiest to just put Service Worker scripts in the root directory.
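
For the record, that special header is Service-Worker-Allowed. In Apache terms, sending it would look something like this (assuming mod_headers is enabled):

<FilesMatch "serviceworker.js">
  Header set Service-Worker-Allowed "/"
</FilesMatch>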

Links from a talk

I’m coming to a rest after a busy period of travelling and speaking. In the last five or six weeks I’ve been to Copenhagen, Freiburg, Prague, Portland, Seattle, and Austin.

The trip to Austin was lovely. It was so nice to be there when it wasn’t South by Southwest (the infrastructure of the whole town creaks under the sheer weight of the event). I wasn’t just there to eat tacos and drink beer in the sunshine. I was there to talk at An Event Apart.

Like I said months before the event:

Everyone in the line up is one of my heroes.

It was, as always, a great event. A personal highlight for me was getting to meet Lara Hogan for the first time. She was kind enough to sign my copy of her fantastic book. She gave an equally fantastic talk at the conference, featuring some of the most deftly-handled Q&A I’ve ever seen.

I spoke at the end of the conference (no pressure!), giving a brand new talk called Resilience—I gave a shortened version at Coldfront and Smashing Conference but this was my first chance to go all out with an hour-long talk. It was my chance to go full James Burke.

I assembled some related links for the attendees. Here they are…

Books

References

Resources

Related posts on adactio.com

Here’s a readlist of those links.

Further reading

Here’s a readlist of those links.

See also: other links tagged with “progressive enhancement” on adactio.com

Baseline

Jake gave a great talk at Responsive Day Out 3 all about nuanced progressive enhancement, with a look at service workers in particular (a technology designed with progressive enhancement at its heart).

To illustrate the performance gains, Jake used his SVGOMG site as an example—a really terrific resource for optimising SVGs.

SVGOMG requires JavaScript for its core functionality (optimising an SVG file). That was a deliberate choice. Jake could’ve made the barrier-to-entry as low as any browser that supports input type="file" but he decided that for this audience (developers) it was a safe assumption that JavaScript would be available.

Jake talked about this in an interview with Paul about the site:

I’m a strong believer in progressive enhancement, but also that each phase of the enhancement needs a user.

I agree completely with this approach. It makes sense to have a valid reason for adding any enhancement. But there’s something about this particular example that wasn’t sitting right with me. It took me a while to figure it out, but I now realise what it is.

Jake is talking about making it work on the server as an enhancement. But that’s not an enhancement, it’s a fallback.

Thinking in terms of fallbacks is more of a “graceful degradation” approach (i.e. for every “full” feature, thinking of a corresponding fallback). That’s not how I like to think of progressive enhancement. I like to think in terms of a baseline. And that baseline, in my mind, does not require a user to justify its existence. That’s because the baseline isn’t there to cover the use cases we can think of, it’s there to cover the use cases we can’t predict.

That might seem like a minor difference in wording to the graceful degradation approach but I think it’s actually a fundamentally different way of approaching the situation.

When I was on the progressive enhancement panel at Edge Conf, Lyza asked how low the baseline should be. I said “as low as possible.” Some of my fellow panelists took issue with this saying it varies from project to project, and that’s completely true, but I think I should’ve clarified that when I talk about a baseline, I’m not talking about browsers. I don’t think about a baseline in terms of “IE4 and above, Android 2.1 and above, etc.”—I think about a baseline in terms of “the minimum required technology to allow a user to accomplish the core task” (that qualification about core tasks is important—the baseline does not need to cover tasks that are nice-to-have; those can safely require more sophisticated technology).

That “minimum required technology” often turns out to be a combination of a web server, HTTP, and some HTML.

So to take SVGOMG as an example, I would begin with the baseline of “allowing a user to optimise an SVG file”. The minimum required technology is a web server running a programme that does the optimisation, and an HTML document that contains a form element with input type="file". Once that’s in place, then I can start applying Jake’s very sensible approach of thinking about enhancements in terms of specific user benefits. In this case, it’s pretty clear that 99.99% of the users would benefit from not having that round-trip to the server and have the SVG optimisation happen in the browser using JavaScript.
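
In markup terms, that baseline needn’t be more than something like this—the URL and names here are hypothetical:

<form action="/optimise" method="post" enctype="multipart/form-data">
  <input type="file" name="svgfile" accept="image/svg+xml">
  <button type="submit">Optimise my SVG</button>
</form>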

There’s an enhancement provided for the use case that I can imagine. But—and this is the subtle but important distinction—there’s a baseline for all the use cases that I can’t think of. I need to recognise that I won’t be able to predict all the possible use cases, and that’s okay—as long as there’s a solid baseline in place, I’ve got an insurance policy for unforeseen circumstances. It’s still not perfect, but it lowers the risk somewhat by reducing the number of assumptions being built in at that baseline level.

Going back to Jake’s chat with Paul, he says:

I thought about making the site work without JS by doing the SVG work on the server, but this would be slow and a maintenance burden.

The maintenance burden is a very valid point. This is something that Stuart talked about a while back:

It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

Leaving aside the promise of isomorphic/universal/whatever JavaScript, this issue of developer convenience is a big issue. When I use the term “developer convenience” to label this problem, I am not belittling it in any way—developer convenience is incredibly important (hence the appeal of so many tools and frameworks that make life easier for developers). I still believe that developer convenience should be lower on the list of priorities than having a rock-solid baseline, but I can totally understand if someone doesn’t share that opinion. It’s a personal decision and if the pain involved in making a more universal baseline is greater than the perceived—and, let’s face it, somewhat abstract—benefit, I can totally understand that.

Anyway, that’s my little brain dump about progressive enhancement and baseline experiences. Something about treating the baseline experience as an enhancement was itching at my brain and now that I’ve managed to scratch it, I can see what was troubling me: thinking about the baseline experience in the same way as thinking about enhancements doesn’t work for me.

Personally, I’m going to strive to keep the baseline as low as possible. I’m also going to strive to apply Jake’s maxim about every enhancement requiring a user.

Edge words

I really enjoyed last year’s Edge conference so I made sure not to miss this year’s event, which took place last weekend.

The format was a little different this time ‘round. Last year the whole day was taken up with panels. Now, panels are often rambling, cringeworthy affairs, but Edge Conf is one of the few events that does panels well: they’re run on a tight schedule and put together with lots of work in advance. At this year’s Edge, the morning was taken up with these tightly-run panels as usual, but the afternoon consisted of more Barcamp-like breakout sessions.

I’ve got to be honest: I don’t think the new format worked that well. The breakout sessions didn’t have the true flexibility that you get with an unconference schedule, so there was no opportunity to merge similarly-themed sessions. There was, for example, a session on components at the same time as a session on accessibility in web components.

That highlights the other issue: FOMO. I’m really not a fan of multi-track events; there were so many sessions that sounded really interesting, but I couldn’t clone myself and go to all of them at once.

But, like I said, the first half of the day was taken up with four sequential (rather than parallel) panels and they were all excellent. All of the moderators did a fantastic job, and I was fortunate enough to sit in on the progressive enhancement panel expertly moderated by Lyza.

The event is called Edge for a reason. There is a rarefied atmosphere—and not just because of the broken-down air conditioning. This is a room full of developers on the cutting edge of web development technologies. Being at Edge Conf means being in a bubble. And being in a bubble is absolutely fine as long as you’re aware you’re in a bubble. It would be problematic if anyone were to mistake the audience and the discussions at Edge as being in any way representative of typical working web devs.

One of the most insightful comments of the day came from Christian who said, “Yes, but this is Edge Conf.” You’re going to need some context for that quote, so here it is…

On the web components panel that Christian was moderating, Alex was making a point about the ubiquity of tools—“Tooling will save you”, he said—and he asked for a show of hands from the audience on who was not using some particular tooling technology; transpilers, package managers, build tools, I can’t remember the specific question. Nobody put their hand up. “See?” asked Alex. “Yes”, said Christian, “but this is Edge Conf.”

Now, while I wasn’t keen on the format of the afternoon with its multiple simultaneous breakout sessions, that doesn’t mean I didn’t enjoy the ones I plumped for. Quite the opposite. The last breakout session of the day, again expertly moderated by Lyza, was particularly great.

The discussion was all about progressive enhancement. There seemed to be a general consensus that we’re all 100% committed to the results of progressive enhancement—greater availability, wider reach, and better performance—but that the term itself is widely misunderstood as “making all of your functionality work even with JavaScript switched off”. This misunderstanding couldn’t be further from the truth:

  1. It’s not about making all of your functionality available; it’s making your core functionality available: everything else can be considered an enhancement and it’s perfectly fine if not everyone gets that enhancement.
  2. This isn’t about switching JavaScript off; it’s about any particular technology not being available for reasons we can’t foresee (network issues, browser issues, whatever it may be).

And yet the misunderstanding persists. For that reason, most of the people in the discussion at Edge Conf were in favour of simply dropping the term progressive enhancement and instead focusing on terms like availability and access. Tim writes:

I’m not sure what we call it now. Maybe we do need another term to get people to move away from the “progressive enhancement = working without JS” baggage that distracts from the real goal.

And Stuart writes:

So I’m not going to be talking about progressive enhancement any more. I’m going to be talking about availability. About reach. About my web apps being for everyone even when the universe tries to get in the way.

But Jason writes:

I completely disagree that we should change nomenclature because there exists some small segment of Web designers unwilling to expand their development toolbox. I think progressive enhancement—the term—remains useful, descriptive, and appropriate.

I’m torn. On the one hand, I agree with Jason. The term “progressive enhancement” is a great descriptor. But on the other hand, I don’t want to end up like that guy who’s made it his life’s work to change every instance of the phrase “comprises of” to “comprises” (or “consists of”) on Wikipedia. Technically, he’s correct. But it doesn’t sound like a fun way to spend your days.

I guess my worry is, if I write an article or give a presentation, and I title it something to do with progressive enhancement, am I going to alienate and put off the very audience I’m trying to reach? But if I title it something else, am I tricking people?

Words are hard.

Web! What is it good for?

You can listen to an audio version of Web! What is it good for?

I have a blind spot. It’s the web.

I just can’t get excited about the prospect of building something for any particular operating system, be it desktop or mobile. I think about the potential lifespan of what would be built and end up asking myself “why bother?” If something isn’t on the web—and of the web—I find it hard to get excited about it. I’m somewhat jealous of people who can get equally excited about the web, native, hardware, print …in my mind, if it hasn’t got a URL, it’s missing some vital spark.

I know that this is a problem, but I can’t help it. At the very least, I have enough presence of mind to recognise it as being my problem.

Given these unreasonable feelings of attachment towards the web, you might expect me to wish it to become the one technology to rule them all. But I’ve never felt that any such victory condition would make sense. If anything, I’ve always been grateful for alternative avenues of experimentation and expression.

When Flash was a thriving ecosystem for artists to push the boundaries of what was possible to deliver to a web browser, I never felt threatened by it. I never wished for web technologies to emulate those creations. Don’t get me wrong: I’m happy that we’ve got nice smooth animations in CSS, but I never thought the lack of animation was crippling the web’s potential.

Now we have native technologies that can do more than the web can do. iOS and Android apps can access device APIs that web browsers can’t (yet). And, once again, while I look forward to the day that websites will be able to do all the things that native apps can do today, I don’t think that the lack of those capabilities is dooming the web to irrelevance.

There will always be some alternative that is technologically more advanced than the web. First there were CD-ROMs. Then we had Flash. Now we have native apps. Each one of those platforms offered more power and functionality than you could get from a web browser. And yet the web persists. That’s because none of the individual creations made with those technologies could compete with the collective power of all of the web, hyperlinked together. A single native app will “beat” a single website every time …but an app store pales when compared to the incredible reach and scope of the entire World Wide Web.

The web will always be lagging behind some other technology. I’m okay with that. If anything, I see these other technologies as the research and development arm of the web. CD-ROMs, Flash, and now native apps show us what authors want to be able to do on the web. Slowly but surely, those abilities start becoming available in web browsers.

The pace of this standardisation can seem infuriatingly slow. Sometimes it is too slow. But it’s important that we get it right—the web should hold itself to a higher standard. And so the web plays the tortoise while other technologies race ahead as the hare.

Like I said, I’m okay with that. I’m okay with the web not being as advanced as some other technology stack at any particular moment. I can wait.

In fact, as PPK points out, we could do real damage to the web by attempting to make it mimic some platform that’s currently in the ascendant. I disagree with his framing of it as a battle—rather than conceding defeat, I see it more like waiting out a siege—but I agree completely with this assessment:

The web cannot emulate native perfectly, and it never will.

If we accept that, then we can play to the web’s strengths (while at the same time, playing a slow game of catch-up behind the scenes). The danger comes when we try to emulate the capabilities of something that isn’t the web:

Emulating native leads to bad UX (or, at least, a UX that’s clearly a sub-optimal copy of native UX).

Whenever a website tries to emulate something from an operating system—be it desktop or mobile—the result is invariably something that gets really, really close …but falls just a little bit short. It feels like entering an uncanny valley of interaction design.

Frank sums this up nicely:

I think you make what I call “bicycle bear websites.” Why? Because my response to both is the same.

“Listen bub,” I say, “it is very impressive that you can teach a bear to ride a bicycle, and it is fascinating and novel. But perhaps it’s cruel? Because that’s not what bears are supposed to do. And look, pal, that bear will never actually be good at riding a bicycle.”

This is how I feel about so many of the fancy websites I see. “It is fascinating that you can do that, but it’s really not what a website is supposed to do.”

Enough is enough, says PPK:

It’s time to recognise that this is the wrong approach. We shouldn’t try to compete with native apps in terms set by the native apps. Instead, we should concentrate on the unique web selling points: its reach, which, more or less by definition, encompasses all native platforms, URLs, which are fantastically useful and don’t work in a native environment, and its hassle-free quality.

This is something that Cennydd talked about recently on an episode of the Design Details podcast. The web, he argues, is great for the sharing of information, but not so great for applications.

I think PPK, Cennydd, and I are all in broad agreement, but we almost certainly differ in the details. PPK, for example, argues that maybe news sites should be native apps instead, but for me, those are exactly the kind of sites that benefit from belonging to no particular platform. And when Cennydd talks about applications on the web, it raises the whole issue of what constitutes a web app anyway. If we’re talking about having access to device APIs—cameras, microphones, accelerometers—then yes, native is the way to go. But if we’re talking about interface elements and motion design, then I think the web can hold its own …sometimes.

Of course not every web browser can match the capabilities of a native app—that’s why it’s so important to approach web development through the lens of progressive enhancement rather than treating it like software development no different than that of native platforms. The web is not a platform—that’s the whole point of the web; it’s cross-platform. As Baldur put it:

Treating the web like another app platform makes sense if app platforms are all you’re used to. But doing so means losing the reach, adaptability, and flexibility that makes the web peerless in both the modern media and software industries.

The price we pay for that incredible cross-platform reach is that features on the web will always be lagging behind, and even when they do arrive, they won’t be available in all web browsers.

To paraphrase William Gibson: capabilities on the web will always be here, but they will never be evenly distributed.

But let’s take a step back from the surface-level differences between web and native. Just as happened with CD-ROMs and Flash, the web is catching up with native when it comes to motion design, visual feedback, and gestures like swiping and dragging. I don’t think those are where the fundamental differences lie. I don’t even think the fundamental differences lie in accessing device APIs like cameras, microphones, and offline storage—the web is (slowly) catching up in those areas too.

What if the fundamental differences lie deeper than the technical implementation? What if the web is suited to some things more than others, not because of technical limitations, but because of philosophical mismatches?

The web was born at CERN, an amazing environment that’s free of many of the economic and hierarchical pressures that shape technology decisions elsewhere. The web’s heritage as a hypertext document sharing system for pure scientific research is often treated as a handicap, something that must be overcome in this age of applications and monetisation. But I see this heritage as a feature, not a bug. It promotes ideals of universal access above individual convenience, creation above consumption, and sharing above financial gain.

In yet another great article by Baldur, called The new age of HTML: the web is being torn apart, he opens with this:

For web development to grow as a craft and as an industry, we have to follow the money. Without money the craft becomes a hobby and unmaintained software begins to rot.

But I think there’s a danger here. If we allow the web to be led by money-making, we may end up changing the fundamental nature of the web, and not for the better.

Now, personally, I believe that it’s entirely possible to run a profitable business on the web. There are plenty of them out there. But suppose we allow that other avenues are more profitable. Let’s assume that there’s more friction in making money on the web than there is in, say, making money on iOS (or Android, or Facebook, or some other monolithic stack). If that were the case …would that be so bad?

Suppose, to use PPK’s phrase, we “concede defeat” to Apple, Google, Microsoft, and Facebook. When you think about it, it makes sense that platforms born of profit-driven companies are going to be better at generating profit than something created by a bunch of idealistic scientists trying to improve the knowledge of the human race. Suppose we acknowledged that the web isn’t that well-suited to capitalism.

I think I’d be okay with that.

Would the web become little more than a hobbyist’s playground? A place for amateurs rather than professional businesses?

Maybe.

I’d be okay with that too.

Y’see, what attracted me to the web—to the point where I have this blind spot—wasn’t the opportunity to make money. What attracted me to the web was its remarkable ability to allow anyone to share anything, not just for the here and now, but for the future too.

If you’ve been reading my journal or following my links for any time, you’ll be aware that two of my biggest interests are progressive enhancement and digital preservation. In my mind, these two things are closely intertwingled.

For me, progressive enhancement is a means of practicing universal design, a way of providing access to as many people as possible. That includes access across time, hence the crossover with digital preservation. I’ve noticed again and again that what’s good for accessibility is also good for longevity, and vice versa.

Bret Victor writes:

Whenever the ephemerality of the web is mentioned, two opposing responses tend to surface. Some people see the web as a conversational medium, and consider ephemerality to be a virtue. And some people see the web as a publication medium, and want to build a “permanent web” where nothing can ever disappear.

I don’t want a web where “nothing can ever disappear” but I also don’t want the default lifespan of a resource on the web to be ephemeral. I think that whoever published that resource should get to decide how long or short its lifespan is. The problem, as Maciej points out, is in the mismatch of expectations:

I’ve come to believe that a lot of what’s wrong with the Internet has to do with memory. The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.

I completely agree with Bret’s woeful assessment of the web when it comes to link rot:

It is this common record of public thought — the “great conversation” — whose stability and persistence is crucial, both for us alive today and for those who will come after.

I believe we can and should do better. But I completely and utterly disagree with him when he says:

Photos from your friend’s party are not part of the common record.

Nor are most casual conversations. Nor are search histories, commercial transactions, “friend networks”, or most things that might be labeled “personal data”. These are not deliberate publications like a bound book; they are not intended to be lasting contributions to the public discourse.

We can agree when it comes to search histories and commercial transactions, but it makes no sense to lump those in with the ordinary plenty that I’ve written about before:

My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved …along with all the millions of other words by millions of other historical nobodies like me out there on the web.

For me, this lies at the heart of what the web does. The web removes the need for tastemakers who get to decide what gets published. The web removes the need for gatekeepers who get to decide what gets saved.

Other avenues of expressions will always be more powerful than the web in the short term: CD-ROMs, Flash, and now native. But they all come with gatekeepers. The collective output of the human race—from the most important scholarly papers to the most trivial blog post—is too important to put in the hands of the gatekeepers of today who may not even be around tomorrow: Apple, Google, Microsoft, et al.

The web has no gatekeepers. The web has no quality control. The web is a mess. The web is for everyone.

I have a blind spot. It’s the web.

100 words 039

Charlotte and I came up with a fun exercise today to help a client’s dev team to think of patterns at the granular level—something that had been proving difficult to get across.

We print out page designs, hand them some scissors, and get them to cut up the pages into their smallest components. Mix them all up so you can’t even tell which components came from which pages.

Then—after grouping duplicate patterns together—everyone takes a component and codes it up in HTML and CSS. As soon as you’re finished with one pattern, grab another.

Rinse and repeat.

Extending

Contrary to popular belief, web standards aren’t created by a shadowy cabal and then handed down to browser makers to implement. Quite the opposite. Browser makers come together in standards bodies and try to come to an agreement about how to collectively create and implement standards. That keeps them very busy. They don’t tend to get out very often, but when they do, the browser/standards makers have one message for developers: “We want to make your life better, so tell us what you want and that’s what we’ll work on!”

In practice, this turns out not to be the case.

Case in point: responsive images. For years, this was the number one feature that developers were crying out for. And yet, the standards bodies—and, therefore, browser makers—dragged their heels. First they denied that it was even a problem worth solving. Then they said it was simply too hard. Eventually, thanks to the herculean efforts of the Responsive Images Community Group, the browser makers finally began to work on what developers had been begging for.

Now that same community group is representing the majority of developers once again. Element queries—or container queries—have been top of the wish list of working devs for quite a while now. The response from browser makers is the same as it was for responsive images. They say it’s simply too hard.

Here’s a third example: web components. There are many moving parts to web components, but one of the most exciting to developers who care about accessibility and backwards-compatibility is the idea of extending existing elements:

It’s my opinion that, for as long as there is a dependence on JS for custom elements, we should extend existing elements when writing custom elements. It makes sense for developers, because new elements have access to properties and methods that have been defined and tested for many years; and it makes sense for users, as they have fallback in case of JS failure, and baked-in accessibility fundamentals.

So instead of having to create a whole new element from scratch like this:

<taco-button>Click me!</taco-button>

…you could piggy-back on an existing element like this:

<button is="taco-button">Click me!</button>

That way, you get the best of both worlds: the native semantics of button topped with all the enhancements you want to add with your taco-button custom element. Brilliant! GitHub is using this to extend the time element, for example.

I’m not wedded to the is syntax, but I do think it’s vital that there is some declarative mechanism to extend existing elements instead of creating every custom element from scratch each time.

Now it looks like that’s the bit of web components that isn’t going to make the cut. Why? Because browser makers say it’s simply too hard.

As Bruce points out, this is in direct conflict with the design principles that are supposed to be driving the creation and implementation of web standards.

It probably wouldn’t bother me so much except that browser makers still trot out the party line, “We want to hear what developers want!” Their actions demonstrate that this claim is somewhat hollow.

I don’t hold out much hope that we’ll get the ability to extend existing elements for web components. I think we can still find ways to piggy-back on existing semantics, but it’s going to take more work:

<taco-button><button>Click me!</button></taco-button>

That isn’t very elegant and I can foresee a lot of trickiness trying to sift the fallback content (the button tags) from the actual content (the “Click me!” text).
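
Something along these lines, perhaps—a purely hypothetical sketch of the kind of sifting the script behind taco-button would have to do:

var tacoButtons = document.querySelectorAll('taco-button');
Array.prototype.forEach.call(tacoButtons, function (tacoButton) {
  // Dig out the fallback button element…
  var button = tacoButton.querySelector('button');
  if (button) {
    // …and enhance the native button rather than rebuilding its behaviour
    button.addEventListener('click', function () {
      // taco-flavoured enhancements go here
    });
  }
});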

But I guess that’s what we’ll be stuck with. The alternative is simply too hard.