Less JavaScript

Every front-end developer at Clearleft went to FFConf last Friday: me, Mark, Graham, Charlotte, and Danielle. We weren’t about to pass up the opportunity to attend a world-class dev conference right here in our home base of Brighton.

The day was unsurprisingly excellent. All the speakers brought their A-game on a wide range of topics. Of course JavaScript was covered, but there was also plenty of mindfood on CSS, accessibility, progressive enhancement, dev tools, creative coding, and even emoji.

Normally FFConf would be a good opportunity to catch up with some Pauls from the Google devrel team, but because of an unfortunate scheduling clash this year, all the Pauls were at Chrome Dev Summit 2016 on the other side of the Atlantic.

I’ve been catching up on the videos from the event. There’s plenty of tech-related stuff: dev tools, web components, and plenty of talk about progressive web apps. But there was also a very, very heavy focus on performance. I don’t just mean performance at the shallow scale of file size and optimisation, but a genuine questioning of the impact of our developer workflows and tools.

In his talk on service workers (what else?), Jake makes the point that not everything needs to be a single page app, echoing Ada’s talk at FFConf.

He makes the point that if you really want fast rendering, nothing on the client side quite beats a server render.

They’ve written a lot of JavaScript to make this quite slow.

Unfortunately, all too often, I hear people say that a progressive web app must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There’s a lot of cargo-culting around single page apps.

Alex followed up his barnstorming talk from the Polymer Summit with some more uncomfortable truths about how mobile phones work.

Cell networks are basically kryptonite to the protocols and assumptions that the web was built on.

And JavaScript frameworks aren’t helping. Quite the opposite.

But make no mistake: if you’re using one of today’s more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this.

Today’s frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance.

Adoption

Tom wrote a post on Ev’s blog a while back called JavaScript Frameworks: Distribution Channels for Good Ideas (I’ve been hoping he’d publish it on his own site so I’d have a more permanent URL to point to, but so far, no joy). It’s well worth a read.

I don’t really have much of an opinion on his central point that browser makers should work more closely with framework makers. I’m not so sure I agree with the central premise that frameworks are going to be around for the long haul. I think good frameworks—like jQuery—should aim to make themselves redundant.

But anyway, along the way, Tom makes this observation:

Google has an institutional tendency to go it alone.

JavaScript not good enough? Let’s create Dart to replace it. HTML not good enough? Let’s create AMP to replace it. I’m just waiting for them to announce Google Style Sheets.

I don’t really mind these inventions. We’re not forced to adopt them, and generally, we don’t. Tom again:

They poured enormous time and money into Dart, even building an entire IDE, without much to show for it. Contrast Dart’s adoption with the adoption of TypeScript and Flow, which layer improvements on top of JavaScript instead of trying to replace it.

See, that’s a really, really good point. It’s so much easier to get people to adjust their behaviour than to change it completely.

Sass is a really good example of this. You can take any .css file, save it as a .scss file, and now you’re using Sass. Then you can start using features (or not) as needed. Very smart.

Incidentally, I’m very curious to know how many people use the scss syntax (which is the same as CSS) compared to how many people use the sass indented syntax (the one with significant whitespace). In his brilliant Sass for Web Designers book, I don’t think Dan even mentioned the indented syntax.

Or compare the adoption of Sass to the adoption of HAML. Now, admittedly, the disparity there might be because Sass adds new features, whereas HAML is a purely stylistic choice. But I think the more fundamental difference is that Sass—with its scss syntax—only requires you to slightly adjust your behaviour, whereas something like HAML requires you to go all in right from the start.

This is something that has been on my mind a lot lately while I’ve been preparing my new talk on evaluating technology (the talk went down very well at An Event Apart San Francisco, by the way—that’s a relief). In the talk, I made a reference to one of Grace Hopper’s famous quotes:

Humans are allergic to change.

Now, Grace Hopper subsequently says:

I try to fight that.

I contrast that with the approach that Tim Berners-Lee and Robert Cailliau took with their World Wide Web project. The individual pieces were built on what people were already familiar with. URLs use slashes so they’d feel similar to UNIX file paths. And the first fledgling version of HTML took its vocabulary almost wholesale from a version of SGML already in use at CERN. In fact, you could pretty much take an existing CERN SGML file and open it as an HTML file in a web browser.

Oh, and that browser would ignore any tags it didn’t understand—behaviour that, in my opinion, would prove crucial to the growth and success of HTML. Because of its familiarity, its simplicity, and its forgiving error handling, HTML turned out to be more successful than Tim Berners-Lee expected, as he wrote in his book Weaving The Web:

I expected HTML to be the basic waft and weft of the Web but documents of all types: video, computer aided design, sound, animation and executable programs to be the colored threads that would contain much of the content. It would turn out that HTML would become amazingly popular for the content as well.

HTML and SGML; Sass and CSS; TypeScript and JavaScript. The new technology builds on top of the existing technology instead of wiping the slate clean and starting from scratch.

Humans are allergic to change. And that’s okay.

Assumptions

Last year Benedict Evans wrote about the worldwide proliferation and growth of smartphones. Nolan referenced that post when he extrapolated the kind of experience people will be having:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

This is the same argument that Tom made in his presentation at Responsive Field Day. The main point is that network conditions are unreliable, and I absolutely agree that we need to be very, very mindful of that. But I’m not so sure about the other conditions either. They smell like assumptions:

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

It’s not necessarily true that all those new web users will be running a WebView browser like Chrome—there are millions of Opera Mini users, and I would expect that number to rise, given all the speed and cost benefits that proxy browsing brings.

I also don’t think that just because a device is a smartphone it necessarily means that it’s a pocket supercomputer. It might seem like a reasonable assumption to make, given the specs of even a low-end smartphone, but the specs don’t tell the whole story.

Alex gave a great presentation at the recent Polymer Summit. He dives deep into exactly how smartphones at the lower end of the market deal with websites.

I don’t normally enjoy listening to talk of hardware and specs, but Alex makes the topic very compelling by tying it directly to how we build websites. In short, we’re using waaaaay too much JavaScript. The message here is not “don’t use JavaScript” but rather “use JavaScript wisely.” Alas, many of the current crop of monolithic frameworks aren’t well suited to this.

Alex’s talk prompted Michael Scharnagl to take a look back at past assumptions and lessons learned on the web, from responsive design to progressive web apps.

We are consistently improving and we often have to realize that our assumptions are wrong.

This is particularly true when we’re making assumptions about how people will access the web.

It’s not enough to talk about the “next billion” in abstract, like an opportunity to reach teeming masses of people ripe for monetization. We need to understand their lives and their priorities with the sort of detail that can build empathy for other people living under vastly different circumstances.

That’s from an article Ethan linked to.

Choice

Laurie Voss has written a thoughtful article called Web development has two flavors of graceful degradation in response to Nolan Lawson’s recent article. But I’m afraid I don’t agree with Laurie’s central premise:

…web app development and web site development are so different now that they probably shouldn’t be called the same thing anymore.

This is an idea I keep returning to, and each time I do, I find that it just isn’t that simple. There are very few web thangs that are purely interactive without any content, and there are also very few web thangs that are purely passive without any interaction. Instead, it’s a spectrum. Quite often, the position on that spectrum changes according to the needs of the user at any particular time—are Twitter and Flickr web sites while I’m viewing text and images, but then transmogrify into web apps the moment I want to add, update, or delete a piece of text or an image?

In any case, the more interesting question than “is something a web site or a web app?” is the question “why?” Why does it matter? In my experience, the answer to that question generally comes down to the kind of architectural approach that a developer will take.

That’s exactly what Laurie dives into in his post. For web apps, use one architectural approach—for web sites, use a different architectural approach. To summarise:

  • in a web app, front-load everything and rely on client-side JavaScript for all subsequent interaction,
  • in a web site, optimise for many page loads, and make sure you don’t rely on client-side JavaScript.

I’m oversimplifying here, but the general idea is:

  • build web apps with the single page app architecture,
  • build web sites with progressive enhancement.

That’s sensible advice, but I’m worried that it could lead to a tautological definition of what constitutes a web app:

  1. This is a web app so it’s built as a single page app.
  2. Why do you define it as a web app?
  3. Because it’s built as a single page app.

The underlying question of what makes something a web app is bypassed by the architectural considerations …but the architectural considerations should be based on that underlying question. Laurie says:

If you are developing an app, the user ideally loads the app exactly once — whether it’s over a slow connection or not.

And similarly:

But if you are developing a web site consisting of many discrete pages, the act of loading goes from a single event to the most common event.

I completely agree that the architectural approach of single page apps is better suited to some kinds of web thangs more than others. It’s a poor architectural choice for a content-based site like nasa.gov, for example. Progressive enhancement would make more sense there.

But I don’t think that the architectural choices need to be in opposition. It’s entirely possible to reconcile the two. It’s not always easy—and the further along that spectrum you are, the tougher it gets—but it’s doable. You can begin with progressive enhancement, and then build up to a single page app architecture for more capable browsers.

I think that’s going to get easier as frameworks adopt a more mixed approach. Almost all the major libraries are working on server-side rendering as a default. Ember is leading the way with FastBoot, and Angular Universal is following. Neither of them are doing it for reasons of progressive enhancement—they’re doing it for performance and SEO—but the upshot is that you can more easily build a web app that simultaneously uses progressive enhancement and a single-page app model.

I guess my point is that I don’t think we should get too locked into the idea of web apps and web sites requiring fundamentally different approaches, especially with the changes in the technologies we use to build them.

We’ve made the mistake in the past of framing problems as “either/or”, when in fact, the correct solution was “both!”:

  • you can either have a desktop site or a mobile site,
  • you can either have rich interactivity or accessibility,
  • you can either have a single page app or progressive enhancement.

We don’t have to choose. It might take more work, but we can have our web cake and eat it.

The false dichotomy that I’m most concerned about is the pernicious idea that offline functionality is somehow in opposition to progressive enhancement. Given the design of service workers, I find this proposition baffling.

This remark by Tom is the very definition of a false dichotomy:

People who say your site should work without JavaScript are actually hurting the people they think they’re helping.

He was also linking to Nolan’s article, which could indeed be read as saying that you should build for offline instead of building with progressive enhancement. But I don’t think that’s what Nolan is saying (at least, I sincerely hope not). I think that Nolan is saying that we should prioritise the offline scenario over scenarios where JavaScript fails or isn’t available. That’s a completely reasonable thing to say. But the idea that we should build for the offline scenario instead of scenarios where JavaScript fails is absurdly reductionist. We don’t have to choose!

But I can certainly understand how developers might come to believe that building a progressive web app is at odds with progressive enhancement. Having made a bunch of progressive web apps—Huffduffer, The Session, this site—I can testify that service workers work superbly as a layer on top of an existing site, but all the messaging around progressive web apps seems fixated on the idea of the app-shell model (a small tweak to the single page app model, where a little bit of interface is available on the initial page load instead of requiring JavaScript for absolutely everything). Again, it’s entirely possible to reconcile the app-shell approach with server rendering and progressive enhancement, but nobody seems to be talking about that. Instead, all of the examples and demos are built with an assumption about JavaScript availability.

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

Tom’s tweet included a screenshot of this part of Nolan’s article:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

Those seem like a reasonable set of assumptions. But even there, things aren’t so simple. Will people really be using “an evergreen browser and WebView”? Millions of people use proxy browsers like Opera Mini, which means you can’t guarantee JavaScript availability beyond the initial page load. UC Browser—which can also run in proxy mode—is now the second most popular mobile browser in the world.

That’s just one nit-picky example, but what I’m getting at here is that it really isn’t safe to make any assumptions. When we must make assumptions, let’s try to make them a last resort.

And just to be clear here: I’m not saying that, because we can’t make assumptions about devices or browsers, we can’t build rich interactive web apps that work offline. I’m saying that we can build rich interactive web apps that work offline and also work when JavaScript fails or isn’t supported.

You don’t have to choose between progressive enhancement and a single page app/progressive web app/app shell/other things with the word “app”.

Progressive enhancement is an architectural approach to building on the web. You don’t have to use it, but please try to remember that it is your choice to make. You can choose to build a web app using progressive enhancement or not—there is nothing inherent in the nature of the thing you’re building that precludes progressive enhancement.

Personally, I find progressive enhancement a sensible way to counteract any assumptions I might inadvertently make. Progressive enhancement increases the chances that the web site (or web app) I’m building is resilient to the kind of scenarios that I never would’ve predicted or anticipated.

That’s why I choose to use progressive enhancement …and build progressive web apps.

Class teacher

ES6 introduced a whole bunch of new features to JavaScript. One of those features is the class keyword. This introduction has been accompanied by a fair amount of concern and criticism.

Here’s the issue: classes in JavaScript aren’t quite the same as classes in other programming languages. In fact, technically, JavaScript doesn’t really have classes at all. But some say that technically isn’t important. If it looks like a duck, and quacks like a duck, shouldn’t we call it a duck even if technically it’s a somewhat similar—but not quite the same—species of waterfowl?

The argument for doing this is that classes are so familiar from other programming languages that having some way of using classes in JavaScript—even if it isn’t technically the same as in other languages—brings a lot of benefit for people moving over to JavaScript from other programming languages.

But that comes with a side-effect. Anyone learning about classes in JavaScript will basically be told “here’s how classes work …but don’t look too closely.”

Now if you believe that outcomes matter more than understanding, then this is a perfectly acceptable trade-off. After all, we use computers every day without needing to understand the inner workings of every single piece of code under the hood.

It doesn’t sit well with me, though. I think that understanding how something works is important (in most cases). That’s why I favour learning underlying technologies first—HTML, CSS, JavaScript—before reaching for abstractions like frameworks and libraries. If you understand the way things work first, then your choice of framework, library, or any other abstraction is an informed choice.

The most common way that people refer to the new class syntax in JavaScript is to describe it as syntactical sugar. In other words, it doesn’t fundamentally introduce anything new under the hood, but it gives you a shorter, cleaner, nicer way of dealing with objects. It’s an abstraction. But because it’s an abstraction taken from other programming languages that work differently to JavaScript, it’s a bit of a fudge. It’s a little white lie. The class keyword in JavaScript will work just fine as long as you don’t try to understand it.
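
To see what that sugar looks like in practice, here’s a rough sketch (the duck-themed names are made up for illustration):

// The ES6 class syntax…
class Duck {
  constructor(name) {
    this.name = name;
  }
  quack() {
    return this.name + ' says quack';
  }
}

// …is (roughly) shorthand for a constructor function
// with methods assigned to its prototype:
function DuckAlike(name) {
  this.name = name;
}
DuckAlike.prototype.quack = function () {
  return this.name + ' says quack';
};

var donald = new Duck('Donald');
donald.quack(); // "Donald says quack"

The two aren’t exactly equivalent (class bodies run in strict mode, class methods aren’t enumerable, and a class constructor throws if you call it without new), which is exactly the “don’t look too closely” problem.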

My personal opinion is that this isn’t healthy.

I’ve come across two fantastic orators who cemented this view in my mind. At Render Conf in Oxford earlier this year, I had the great pleasure of hearing Ashley Williams talk about the challenges of teaching JavaScript. Skip to the 15 minute mark to hear her introduce the issues thrown up by classes in JavaScript.

More recently, the mighty Kyle Simpson was on an episode of the JavaScript Jabber podcast. Skip to the 17 minute mark to hear him talk about classes in JavaScript.

(Full disclosure: Kyle also said some very kind things about some of my blog posts at the end of that episode, but you can switch it off before it gets to that bit.)

Both Ashley and Kyle bring a much-needed perspective to the discussion of language design. That perspective is the perspective of a teacher.

In his essay on W3C’s design principles, Bert Bos lists learnability among the fundamental driving forces (closely tied to readability). Learnability and teachability are two sides of the same coin, and I find it valuable to examine any language decisions through that lens. With that in mind, introducing a new feature into a language that comes with such low teachability value as to warrant a teacher actively telling a student not to learn how things really work …well, that just doesn’t seem right.

Backdoor Service Workers

When I was moderating that panel at the Progressive Web App Dev Summit, I brought up this point about twenty minutes in:

Alex, in your talk yesterday you were showing the AMP demo there with the Washington Post. You click through and there’s the Washington Post AMP thing, and it was able to install the Service Worker with that custom element. But I was looking at the URL bar …and that wasn’t the Washington Post. It was on the CDN from AMP. So I talked to Paul Backaus from the AMP team, and he explained that it’s an iframe, and using an iframe you can install a Service Worker from somewhere else.

Alex and Emily explained that, duh, that’s the way iframes work. It makes sense when you think about it—an iframe is pretty much the same as any other browser window. Still, it feels like it might violate the principle of least surprise.

Let’s say you followed my tongue-in-cheek advice to build a progressive web app store. Your homepage might have the latest 10 or 20 progressive web apps. You could also include 10 or 20 iframes so that those sites are “pre-installed” for the person viewing your page.

Enough theory. Here’s a practical example…

Suppose you’ve never visited the website for my book, html5forwebdesigners.com (if you have visited it, and you want to play along with this experiment, go to your browser settings and delete anything stored by that domain).

You happen to visit my website adactio.com. There’s a little blurb buried down on the home page that says “Read my book” with a link through to html5forwebdesigners.com. I’ve added this markup after the link:

<iframe src="https://html5forwebdesigners.com/iframe.html" style="width: 0; height: 0; border: 0">
</iframe>

That hidden iframe pulls in an empty page with a script element:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>HTML5 For Web Designers</title>
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>
</head>
</html>

That registers the Service Worker on my book’s site which then proceeds to install all the assets it needs to render the entire site offline.

There you have it. Without ever visiting the domain html5forwebdesigners.com, the site has been pre-loaded onto your device because you visited the domain adactio.com.

A few caveats:

  1. I had to relax the Content Security Policy for html5forwebdesigners.com to allow the iframe to be embedded on adactio.com:

    Header always set Access-Control-Allow-Origin "https://adactio.com"
    
  2. If your browser has “Block third-party cookies and site data” selected in its preferences, the iframe-invoked Service Worker won’t install:

    Uncaught (in promise) DOMException: Failed to register a ServiceWorker: The user denied permission to use Service Worker.
    

The example I’ve put together here is relatively harmless. But it’s possible to imagine more extreme scenarios. Imagine there’s a publishing company that has 50 websites for 50 different publications. Each one of them could have an empty page waiting to be embedded via iframe from the other 49 sites. You only need to visit one page on one of those 50 sites to have 50 Service Workers spun up and caching assets in the background.

There’s the potential here for a tragedy of the commons. I hope we’ll be sensible about how we use this power.

Just don’t tell the advertising industry about this.

On the side

My role at Clearleft is something along the lines of being a technical director. I’m not entirely sure what that means, but it seems to be a way of being involved in front-end development, without necessarily writing much actual code. That’s probably for the best. My colleagues Mark, Graham, and Charlotte are far more efficient at doing that. In return, I do my best to support them and make sure that they’ve got whatever they need (in terms of resources, time, and space) to get on with their work.

I’m continuously impressed not only by the quality of their output on client projects, but also by their output on the side.

Mark is working on a project called Fractal. It’s a tool for creating component libraries, something he has written about before. The next steps involve getting the code to version 1.0 and completing the documentation. Then you’ll be hearing a lot more about this. The tricky thing right now is fitting it in around client work. It’s going to be very exciting though—everyone who has been beta-testing Fractal has had very kind words to say. It’s quite an impressive piece of work, especially considering that it’s the work of one person.

Graham is continuing on his crazily-ambitious project to recreate the classic NES game Legend Of Zelda using web technology. His documentation of his process is practically a book:

  1. Introduction,
  2. The Game Loop,
  3. Drawing to the Screen,
  4. Handling User Input,
  5. Scaling the Canvas,
  6. Animation — Part 1,
  7. Levels & Collision — part 1, and most recently
  8. Levels — part 2.

It’s simultaneously a project that involves the past—retro gaming—and the future—playing with the latest additions to JavaScript in modern browsers (something that feeds directly back into client work).

Charlotte has been speaking up a storm. She spoke at the Up Front conference in Manchester about component libraries:

The process of building a pattern library or any kind of modular design system requires a different approach to delivering a set of finished pages. Even when the final deliverable is a pattern library, we often still have to design pages for approval. When everyone is so used to working with pages, it can be difficult to adopt a new way of thinking—particularly for those who are not designers and developers.

This talk will look at how we can help everyone in the team adopt pattern thinking. This includes anyone with a decision to make—not just designers and developers. Everyone in the team can start building a shared vocabulary and attempt to make the challenge of naming things a little easier.

Then she spoke at Dot York about her learning process:

As a web developer, I’m learning all the time. I need to know how to make my code work, but more importantly, I want to understand why my code works. I’ve learnt most of what I know from people sharing what they know and I love that I can now do the same. In my talk I want to share my highlights and frustrations of continuous learning, my experiences of working with a mentor and fitting it into my first year at Clearleft.

She’ll also be speaking at Beyond Tellerrand in Berlin later this year. Oh, and she’s also now a co-organiser of the brilliant Codebar events that happen every Tuesday here in Brighton.

Altogether that’s an impressive amount of output from Clearleft’s developers. And all of that doesn’t include the client work that Mark, Graham, and Charlotte are doing. They inspire me!

Sticky headers

I made a little tweak to The Session today. The navigation bar across the top is “sticky” now—it doesn’t scroll with the rest of the content.

I made sure that the stickiness only kicks in if the screen is both wide and tall enough to warrant it. Vertical media queries are your friend!

But it’s not enough to just put some position: fixed CSS inside a media query. There are some knock-on effects that I needed to mitigate.

I use the space bar to paginate through long pages. It drives me nuts when sites with sticky headers don’t accommodate this. I made use of Tim Murtaugh’s sticky pagination fixer. It makes sure that page-jumping with the keyboard (using the space bar or page down) still works. I remember when I linked to this script two years ago, thinking “I bet this will come in handy one day.” Past me was right!

The other “gotcha!” with having a sticky header is making sure that in-page anchors still work. Nicolas Gallagher covers the options for this in a post called Jump links and viewport positioning. Here’s the CSS I ended up using:

:target:before {
    content: '';
    display: block;
    height: 3em;
    margin: -3em 0 0;
}

I also needed to check any of my existing JavaScript to see if I was using scrollTo anywhere, and adjust the calculations to account for the newly-sticky header.
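
For what it’s worth, the kind of adjustment involved looks something like this (a sketch with a hypothetical selector, not the actual code from The Session):

// A sketch: when scrolling to an element, subtract the sticky header's height
var stickyHeader = document.querySelector('.site-navigation'); // hypothetical selector
function scrollToElement(element) {
  var offset = stickyHeader ? stickyHeader.offsetHeight : 0;
  window.scrollTo(0, element.getBoundingClientRect().top + window.pageYOffset - offset);
}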

Anyway, just a few things to consider if you’re going to make a navigational element “sticky”:

  1. Use min-height in your media query,
  2. Take care of keyboard-initiated page scrolling,
  3. Adjust the positioning of in-page links.

Thank you, jQuery

I turned Huffduffer into a progressive web app recently. It was already running on HTTPS so I didn’t have much to do. One manifest file and one basic Service Worker did the trick.

Getting the 'add to home screen' prompt for https://huffduffer.com/ on Android Chrome.

I also did a bit of spring cleaning, refactoring some CSS. The site dates to 2008 so there’s plenty in there that I would do very differently today. Still, considering the age of the code, I wasn’t cursing my past self too much.

After that, I decided to refactor the JavaScript too. There, I had a clear goal: could I remove the dependency on jQuery?

It turned out to be pretty straightforward. I was able to bring my total JavaScript file size down to 3K (gzipped). Pretty much everything I was doing in jQuery could be just as easily accomplished with DOM methods like addEventListener and querySelectorAll, and objects like XMLHttpRequest and classList.
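
As a rough before-and-after (a sketch with made-up selectors, not Huffduffer’s actual code), the translation mostly looks like this:

// jQuery:
// $('.toggle').on('click', function () { $(this).toggleClass('active'); });

// Plain DOM equivalent:
var toggles = document.querySelectorAll('.toggle'); // hypothetical class name
Array.prototype.forEach.call(toggles, function (toggle) {
  toggle.addEventListener('click', function () {
    toggle.classList.toggle('active');
  });
});

// jQuery:
// $.get('/endpoint', callback);

// Plain DOM equivalent:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/endpoint'); // hypothetical URL
xhr.onload = function () {
  // do something with xhr.responseText
};
xhr.send();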

Of course, the reason why half of those handy helpers exist is because of jQuery. Certainly in the case of querySelector and querySelectorAll, jQuery blazed a trail for browsers and standards bodies to pave. In some cases, like animation, the jQuery-led solutions ended up in CSS instead of JavaScript, but the story was the same: developers used the heck out of jQuery and browser makers paid attention to that. This is something that Jack spoke about at Render Conf a little while back.

Brian once said of PhoneGap that its ultimate purpose is to cease to exist. I think of jQuery in a similar way.

jQuery turned ten years old this year, and jQuery version 3.0 was just released. Congratulations, jQuery! You have served the web well.

A wager on the web

Jason has written a great post about progressive web apps. It’s also a post about whether fears of the death of the web are justified.

Lately, I vacillate on whether the web is endangered or poised for a massive growth due to the web’s new capabilities. Frankly, I think there are indicators both ways.

So he applies Pascal’s wager. The hypothesis is that the web is under threat and progressive web apps are a solution to fighting that threat.

  • If the hypothesis is incorrect and we don’t build progressive web apps, things continue as they are on the web (which is not great for users—they have to continue to put up with fragile, frustratingly slow sites).
  • If the hypothesis is incorrect and we do build progressive web apps, users get better websites.
  • If the hypothesis is correct and we do build progressive web apps, users get better websites and we save the web.
  • If the hypothesis is correct and we don’t build progressive web apps, the web ends up pining for the fjords.

Whether you see the web as threatened or see Chicken Little in people’s fears and whether you like progressive web apps or feel it is a stupid Google marketing thing, we can all agree that putting energy into improving the experience for the people using our sites is always a good thing.

Jason is absolutely correct. There are literally no downsides to us creating progressive web apps. Everybody wins.

But that isn’t the question that people have been tackling lately. None of these (excellent) blog posts disagree with the conclusion that building progressive web apps as originally defined would be a great move forward for the web.

The real question that comes out of those posts is whether it’s good or bad for the future of progressive web apps—and by extension, the web—to build stop-gap solutions that use some progressive web app technologies (Service Workers, for example) while failing to be progressive in other ways (only working on mobile devices, for example).

In this case, there are two competing hypotheses:

  1. In the short term, it’s okay to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because over time they’ll get improved and we’ll end up with proper progressive web apps in the long term.
  2. In the short term, we should build proper progressive web apps, and it’s a really bad idea to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because that encourages more people to build sub-par websites and progressive web apps become synonymous with door-slamming single-page apps in the long term.

The second hypothesis sounds pessimistic, and the first sounds optimistic. But the people arguing for the first hypothesis aren’t coming from a position of optimism. Take Christian’s post, for example, which I fundamentally disagree with:

End users deserve to have an amazing, form-factor specific experience. Let’s build those.

I think end users deserve to have an amazing experience regardless of the form-factor of their devices. Christian’s viewpoint—like Alex’s tweetstorm—is rooted in the hypothesis that the web is under threat and in danger. The conclusion that comes out of that—building mobile-only JavaScript-reliant progressive web apps is okay—is a conclusion reached through fear.

Never make any decision based on fear.

A little progress

I’ve got a fairly simple posting interface for my notes. A small textarea, an optional file upload, some checkboxes for syndicating to Twitter and Flickr, and a submit button.

Notes posting interface

It works fine although sometimes the experience of uploading a file isn’t great, especially if I’m on a slow connection out and about. I’ve been meaning to add some kind of Ajax-y progress type thingy for the file upload, but never quite got around to it. To be honest, I thought it would be a pain.

But then, in his excellent State Of The Gap hit parade of web technologies, Remy included a simple file upload demo. Turns out that all the goodies that have been added to XMLHttpRequest have made this kind of thing pretty easy (and I’m guessing it’ll be easier still once we have fetch).

I’ve made a little script that adds a progress bar to any forms that are POSTing data.

Feel free to use it, adapt it, and improve it. It isn’t using any ES6iness so there are some obvious candidates for improvement there.
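
If you’re curious about the general shape of the script, here’s a stripped-down sketch (not the actual code; the form handling is simplified and the element names are illustrative):

// Enhance a POSTing form with an Ajax submission and a progress bar
var form = document.querySelector('form[method="post"]');
if (form && ('upload' in new XMLHttpRequest())) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    var progress = document.createElement('progress');
    form.appendChild(progress);
    var xhr = new XMLHttpRequest();
    xhr.open(form.method, form.action);
    // Update the progress element as the upload proceeds
    xhr.upload.addEventListener('progress', function (e) {
      if (e.lengthComputable) {
        progress.max = e.total;
        progress.value = e.loaded;
      }
    });
    xhr.onload = function () {
      // Simplistic: follow wherever the response ended up
      window.location = xhr.responseURL || form.action;
    };
    xhr.send(new FormData(form));
  });
}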

It’s working a treat on my little posting interface. Now I can stare at a slowly-growing progress bar when I’m out and about on a slow connection.

A web for everyone

I gave the closing talk at the Render conference in Oxford a few weeks back. It was a very smoothly-run event, the spiritual successor to jQuery UK.

In amongst the mix of talks there were a few emerging themes. Animation was covered from a few different angles by Val and Sara. Bruce, Jake, Ola, and I talked about Service Workers and offline functionality. But there were also some differences of opinion.

In her great talk—I’m Offline, Cool! Now What?—Ola outlined the many and varied offline use cases that drove the creation and philosophy of Hoodie. She described all the reasons why people need the web: for communication, for access to information, for empowerment, and for love. “Hell, yes!” I thought.

But then she said:

So since when is helping people to fulfil a basic need, progressive enhancement?

And even more forcefully:

This is why I think, putting offline first in the progressive enhancement slot is pure bullshit.

Strong words indeed! And I have to say I was a little puzzled by them.

Ola had demonstrated again and again just how fragile the network could be. That is absolutely correct. All too often, we make the assumption that people using our sites have a decent network connection. That’s not a safe assumption to make.

But the suggested solution—to rely on technologies like local storage, Service Workers, or other APIs—assumes a certain level of JavaScript capabilities in the devices and browsers out there. That’s an unsafe assumption to make.

I remember discussing this with Alex from Hoodie a while back. I was confused by the cognitive dissonance I was observing. It seems to me that, laudable as Hoodie’s offline-first goals are, they are swapping out one unstable dependency—the network—for a different unstable dependency—a set of JavaScript APIs.

(I remember Alex pointed out that Hoodie was intended primarily for web apps rather than web sites, and my response—predictably enough—was to say “Define web app”.)

I think I understand why Ola reacted so strongly to the suggestion that offline functionality should be added as an enhancement. I’ve seen the same reaction when I’ve said that beautiful typography on the web is an enhancement. I think that when I say something is an enhancement, what people hear is that something is just an enhancement. It sounds belittling. That’s not my intention, but I can understand how it could come across like that. Perhaps this is one reason why some people have a real issue with the term “progressive enhancement”.

I wish we could make offline functionality a requirement. But the reality is that not everyone is using a browser that supports the necessary technology. I wish we could make beautiful typography a requirement. But, again, the reality is that there will always be some browsers or devices that won’t be capable of executing that typography. Accepting these facets of reality might seem like admissions of defeat, but I actually find it quite liberating.

In her brilliant talk at Render, Ashley G. Williams channeled Carl Sagan, quoting from his book The Demon-Haunted World:

It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.

That’s how I feel we should approach building for the web. Let’s accept that network connections are unevenly distributed. Let’s also accept that browser features are unevenly distributed. Pretending that millions of Opera Mini users don’t exist isn’t a viable strategy. They too are people who want to communicate, to access information, to be empowered, and to love.

Pointing out that you can’t always rely on client-side JavaScript shouldn’t be taken as an admonishment. It’s an opportunity.

Karolina Szczur wrote a wonderful piece on Ev’s blog called The Web Isn’t Uniform. She noticed how many sites—Facebook, AirBnB, Basecamp—failed to even render some useful information if the JavaScript fails to load. It’s a situation that many of us—with our fast connections, capable browsers, and modern devices—might never even notice.

It’s a privilege to be able to use breaking edge technologies and devices, but let’s not forget basic accessibility and progressive enhancement. Ultimately, we’re building for the users, not for our own tastes or preferences.

Karolina asks that we, as makers of the web, have a little more empathy. If the comments on her article are anything to go by, that’s a tall order. All the usual tropes are rolled out—there’s the misunderstanding that progressive enhancement means making sure everything works without JavaScript (it doesn’t; it’s about the core functionality), and the evergreen argument that as soon as you’re building a web “app”, best practices, good engineering, and empathy can go out the window…

I strongly disagree that this has anything to do at all about empathy. Instead, it’s all about resources and priorities. Making a JS app is already hard enough, duplicating all that work so that it also works without JS is quite often just not practical.—Sacha Greif

But requiring that a site be functional when JavaScript is disabled, may not be a valid requirement anymore. HTML and CSS were originally created and designed for documents, not applications. Many websites these days should be considered apps rather then docs.—Dan Shappir

What you’re suggesting is that all these companies should write all their software twice, once in javascript and again in good ol’ html with forms, to cater to that point-whatever-percentage that has decided to break their own web browser by turning one of the three fundamental web technologies off. In what universe is this a reasonable request?—Erlend Halvorsen

JavaScript is as important as HTML. This is modern internet. If someone doesn’t have JavaScript, they should not be using the new applications that were possible because of JavaScript.—HarshaL

I am a web developer. I build web applications not web sites. What you say may be true for web sites with static pages displaying images and text.—R. Fancsiki

Ah, Medium! Where the opinions of self-entitled dudes flow like rain from the tech heavens.

While they were so busy defending the lack of basic functionality in all the examples that Karolina listed, they failed to notice the most important development:

Let’s build a web that works for everyone. That doesn’t mean everyone has to have the same experience. Let’s accept that there are all sorts of people out there accessing the web with all sorts of browsers on all sorts of devices.

What a fantastic opportunity!

Accessible progressive disclosure revisited

I wrote a little while back about making an accessible progressive disclosure pattern. It’s very basic—just a few ARIA properties and a bit of JavaScript sprinkled onto some basic HTML. The HTML contains a button element that toggles the aria-hidden property on a chunk of markup.

Earlier this week I had a chance to hang out with accessibility experts Derek Featherstone and Devon Persing so I took the opportunity to pepper them with questions about this pattern. My main question was “Should I automatically focus the toggled content?”

Derek’s response was very perceptive. He wanted to know why I was using a button. Good question. When you think about it, what I’m doing is pointing from one element to another. On the web, we point with links.

There are no hard’n’fast rules about this kind of thing, but as Derek put it, it helps to think about whether the action involves controlling something (use a button) or taking the user somewhere (use a link). At first glance, the progressive disclosure pattern seems to be about controlling something—toggling the appearance of another element. But if I’m questioning whether to automatically focus that element, then really I’m asking whether I want to take the user to that place in the document—in other words, linking to it.

I decided to update the markup. Here’s what I had before:

<button aria-controls="content">Reveal</button>
<div id="content"></div>

Here’s what I have now:

<a href="#content" aria-controls="content">Reveal</a>
<div id="content"></div>

The logic in the JavaScript remains exactly the same:

  1. Find any elements that have an aria-controls attribute (these were buttons, now they’re links).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated link (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those links.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.
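
Translated into code, that logic looks something like this (a sketch; the finished version is on CodePen):

// A sketch: assumes a CSS rule like [aria-hidden="true"] { display: none; }
// does the actual hiding
var triggers = document.querySelectorAll('a[aria-controls]');
Array.prototype.forEach.call(triggers, function (trigger) {
  var content = document.getElementById(trigger.getAttribute('aria-controls'));
  // Hide the content and make it focusable
  content.setAttribute('aria-hidden', 'true');
  content.setAttribute('tabindex', '-1');
  trigger.setAttribute('aria-expanded', 'false');
  trigger.addEventListener('click', function (event) {
    event.preventDefault();
    var hidden = content.getAttribute('aria-hidden') === 'true';
    content.setAttribute('aria-hidden', hidden ? 'false' : 'true');
    trigger.setAttribute('aria-expanded', hidden ? 'true' : 'false');
    if (hidden) {
      // The content has just been revealed, so focus it
      content.focus();
    }
  });
});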

You can see it in action on CodePen.


Bookmarklets

Someone at Clearleft asked me a question recently about making bookmarklets. I have a bit of experience in that department. As well as making a bookmarklet for adding links to my own site, there’s the Huffduffer bookmarklet that’s been chugging away since 2008.

I told them that there are basically two approaches:

  1. Have the bookmarklet pop open a new browser window at your service, passing in the URL of the current page. Then do all the heavy lifting on your server.
  2. Have the bookmarklet inject JavaScript to analyse and edit the DOM of the document in the current browser window. All the heavy lifting is done directly in client-side JavaScript.

I favour the first approach. Partly that’s because it makes it easier to update the functionality. As you improve your server-side script, the bookmarklet functionality gets better automatically. But also, if your server-side script doesn’t do its magic, you can always fall back to letting the end user fill in the details.

Here’s an example…

When you click the Huffduffer bookmarklet, it pops open this URL:

https://huffduffer.com/add?page=…

…with that page parameter filled in with whatever page you currently have open. Let’s say I’ve got this page currently open in my browser:

https://adactio.com/journal/6786

If I press the Huffduffer bookmarklet, that will spawn a new window with this URL:

https://huffduffer.com/add?page=https://adactio.com/journal/6786

And that’s all it does. Now it’s up to that page on Huffduffer to figure out what to do with the URL it has been given. In this case, it makes a CURL request to figure out what to use as a title, what to use as a description, what audio file to use, etc. If it can’t figure that out, I can always fill in those fields myself by hand.
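
For the record, a window-spawning bookmarklet needs barely any JavaScript. Something along these lines would do it (a sketch, not the actual Huffduffer bookmarklet):

javascript:(function () {
  window.open('https://huffduffer.com/add?page=' + encodeURIComponent(document.location.href));
})();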

I could’ve chosen to get at that information by injecting JavaScript directly into the page open in the browser. But that’s somewhat invasive.

Brian Donohue wrote on Ev’s blog a while back about one of the problems with that approach. Sites that—quite correctly—have a strict Content Security Policy will object to having arbitrary JavaScript injected into their documents.

But remember this only applies to some bookmarklets. If a bookmarklet just spawns a new window—like Huffduffer’s—then there’s no problem. That approach to bookmarklets was dismissed with this justification:

The crux of the issue for bookmarklets is that web authors can control the origin of the JavaScript, network calls, and CSS, all of which are necessary for any non-trivial bookmarklet.

Citation needed. I submit that Huffduffer and Instapaper provide very similar services: “listen later” and “read later”. Both use cases could be described as “non-trivial”. But only one of the bookmarklets works on sites with strict CSPs.

Time and time again, I see over-engineered technical solutions that are built with the justification that “this problem is very complex therefore the solution needs to be complex” (yes, I am talking about web thangs that rely on complex JavaScript). In my experience, it’s exactly the opposite way around. The more complex the problem, the more important it is to solve it in the simplest way possible. It’s the only way of making sure the solution is resilient to unexpected scenarios.

The situation with bookmarklets is a perfect example. It’s not just an issue with strict Content Security Policies either. I’ve seen JavaScript-injecting bookmarklets fail because someone has set their browser cookie preferences to only accept cookies from the originating server.

Bookmarklets are not dead. They may, however, be pining for the fjords. Nobody has figured out a way to get bookmarklets to work on mobile. Now that might well be a death sentence.

Accessible progressive disclosure

Over on The Session I have a few instances of a progressive disclosure pattern. It’s just your basic show/hide toggle: click on a button; see some more content. For example, there’s a “download” button for every tune that displays options to download the tune in different formats (ABC and midi).

To begin with, I was using the :checked pseudo-class pattern that Charlotte has documented so well. I really like that pattern. It feels nice and straightforward. But then I got some feedback from someone using the site:

the link for midi files is no longer coming up on the tune pages. I am blind so I rely on the midi’s when finding tunes for my students.

I wrote back saying the link to download midi files was revealed by the “download” option. The response:

Excellent. I have it now, I was just looking for the midi button which wasn’t there. the actual download button doesn’t read as a button under each version of the tune but now I know it’s there I know what I am doing. I am using the JAWS screen reader.

This was just one person …one person who took the time to write to me. What about other screen reader users?

I dabbled around with adding role="button" to the checkbox or the label, but that felt really icky (contradicting the inherent role of those elements) and it didn’t seem to make much difference anyway.

I decided to refactor the progressive disclosure to use JavaScript instead of just CSS. I wanted to make sure that accessibility was built into the functionality, rather than just bolted on. That’s why the code I’ve written doesn’t rely on the buttons having a particular class value; instead the buttons must have an aria-controls attribute that associates the button with the element it toggles (in much the same way that a for attribute associates a label with a form field).

Here’s the logic:

  1. Find any elements that have an aria-controls attribute (these should be buttons).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated button (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those buttons.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.

You can see it in action on CodePen.

I’m still playing around with this. I think the :focus styles are probably far too subtle right now—see this excellent presentation from Laura Palmaro for more on that. I’m also not sure if the revealed content should automatically take focus. I’ll see if I can get some feedback from people on The Session using screen readers—there’s quite a few of them.

Feel free to use my code but you might want to check out Jason’s code to do the same thing—his is bound to be nicer to work with.

Update: In response to this discussion, I’ve decided not to automatically focus the expanded content.

Handling redirects with a Service Worker

When I wrote about implementing my first Service Worker, I finished with this plea:

And remember, please share your code and your gotchas: it’s early days for Service Workers so every implementation counts.

Well, I ran into a gotcha that was really frustrating but thanks to the generosity of others, I was able to sort it out.

It was all because of an issue in Chrome. Here’s the problem…

Let’s say you’ve got a Service Worker running that takes care of any requests to your site. Now on that site, you’ve got a URL that receives POST data, does something with it, and then redirects to another URL. That’s a fairly common situation—it’s how I handle webmentions here on adactio.com, and it’s how I handle most add/edit/delete actions over on The Session to help prevent duplicate form submissions.

Anyway, it turns out that Chrome’s Service Worker implementation would get confused by that. Instead of redirecting, it showed the offline page instead. The fetch wasn’t resolving.

I described the situation to Jake, but rather than just try and explain it in 140 characters, I built a test case.

There’s a Chromium issue filed on this, and it will get fixed, but in the meantime, it was really bugging me recently when I was rolling out a new feature on The Session. Matthew pointed out that the Chromium bug report also contained a workaround that he’s been using on traintimes.org.uk. Adrian also posted his expanded workaround in there too. That turned out to be exactly what I needed.

I think the problem is that the redirect means that a body is included in the GET request, which is what’s throwing the Service Worker. So I need to create a duplicate request without the body:

request = new Request(url, {
    method: 'GET',
    headers: request.headers,
    mode: request.mode == 'navigate' ? 'cors' : request.mode,
    credentials: request.credentials,
    redirect: request.redirect
});

So here’s what I had in my Service Worker before:

// For HTML requests, try the network first, fall back to the cache, finally the offline page
if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    event.respondWith(
        fetch(request)
            .then( response => {
                // NETWORK
                // Stash a copy of this page in the pages cache
                let copy = response.clone();
                stashInCache(pagesCacheName, request, copy);
                return response;
            })
            .catch( () => {
                // CACHE or FALLBACK
                return caches.match(request)
                    .then( response => response || caches.match('/offline') );
                })
        );
    return;
}

And here’s what I have now:

// For HTML requests, try the network first, fall back to the cache, finally the offline page
if (request.headers.get('Accept').indexOf('text/html') !== -1) {
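    // Recreate the request without a body so that the redirect resolves
    // (a Request can't be constructed with mode 'navigate', hence the switch to 'cors';
    // 'url' here is the original request's URL).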
    request = new Request(url, {
        method: 'GET',
        headers: request.headers,
        mode: request.mode == 'navigate' ? 'cors' : request.mode,
        credentials: request.credentials,
        redirect: request.redirect
    });
    event.respondWith(
        fetch(request)
            .then( response => {
                // NETWORK
                // Stash a copy of this page in the pages cache
                let copy = response.clone();
                stashInCache(pagesCacheName, request, copy);
                return response;
            })
            .catch( () => {
                // CACHE or FALLBACK
                return caches.match(request)
                    .then( response => response || caches.match('/offline') );
                })
        );
    return;
}

Now the test case is working just fine in Chrome.

On the off-chance that someone out there is struggling with the same issue, I hope that this is useful.

Share what you learn.

Enhance’n’drag’n’drop

I’ve spent the last week implementing a new feature over at The Session. I had forgotten how enjoyable it is to get completely immersed in a personal project, thinking about everything from database structures right through to CSS animations.

I won’t bore you with the details of this particular feature—which is really only of interest if you play traditional Irish music—but I thought I’d make note of one little bit of progressive enhancement.

One of the interfaces needed for this feature was a form to re-order items in a list. So I thought to myself, “what’s the simplest technology to enable this functionality?” I came up with a series of select elements within a form.

Reordering

It’s not the nicest of interfaces, but it works pretty much everywhere. Once I had built that—and the back-end functionality required to make it all work—I could think about how to enhance it.

I brought it up at the weekly Clearleft front-end pow-wow (featuring special guest Jack Franklin). I figured that drag’n’drop would be the obvious enhancement, but I didn’t know if there were any “go-to” libraries for implementing it; I haven’t paid much attention to the state of drag’n’drop since the old IE implementation was added to HTML5.

Nobody had any particular recommendations so I did a bit of searching. I came across Dragula, which looked pretty solid. It’s made by the super-smart Nicolás Bevacqua, who I know shares my feelings about progressive enhancement. To my delight, I was able to get it working within minutes.

Drag and drop

There’s a little bit of mustard-cutting going on: does the dragula object exist, and does the browser understand querySelector? If so, the select elements are hidden and the drag’n’drop is enabled. Then, whenever an item in the list is dragged and dropped, the corresponding (hidden) select element is updated …so that time I spent making the simpler non-drag’n’drop interface was time well spent: I didn’t need to do anything extra on the server to handle the data from the updated interface.
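
For what it’s worth, here’s a rough sketch of the kind of wiring-up I mean; it’s not the actual code from The Session, and the class names and markup are invented for illustration:

// Cut the mustard: only enhance if Dragula is available
// and the browser understands querySelector
if (window.dragula && document.querySelector) {
    var list = document.querySelector('.sortable');
    var selects = list.querySelectorAll('select');
    // Hide the fallback select elements; dragging will do the work instead
    Array.prototype.forEach.call(selects, function (select) {
        select.style.display = 'none';
    });
    dragula([list]).on('drop', function () {
        // After each drop, update the hidden selects to match the new order,
        // so the form still submits exactly the same data as before
        var items = list.querySelectorAll('li');
        Array.prototype.forEach.call(items, function (item, index) {
            item.querySelector('select').value = index + 1;
        });
    });
}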

It’s a simple example but it demonstrates the benefits of starting with the simpler universal interface before upgrading to the smoother experience.

Service Worker notes

Here’s a disconnected hodge-podge of things related to Service Workers I’ve noticed recently…

Service Workers have landed in Firefox. When they first appeared in a nightly build, a few people spotted some weird character issues on sites like mine that are using Service Workers, but that should all be fixed in the next release.

A while back I voted up Service Workers on Microsoft’s roadmap for Edge. I got an email today to say that the roadmap priority is high:

We intend to begin development soon.

We’re getting there.

Here’s a little gotcha that had me tearing my hair out until I tracked down the culprit: don’t use Header append Vary User-Agent in your site’s Apache config file. I don’t know why it snuck in there in the first place, but once I removed it, it fixed a weird issue that Aaron T. Grogg pointed out to me whereby my offline page would get cached, but not my CSS.

I really like this proposal for a declarative way of linking to a Service Worker:

<link rel="serviceworker" href="/serviceworker.js">

It makes sense to me that I should be able to point to the Service Worker of a page in the same way that I point to a style sheet. It makes sense as a rel value too: “the linked resource has the relationship of ‘serviceworker’ to the current document.”

Also, I’m just generally a fan of declarative solutions. This feels like another good example of functionality that starts life in an imperative language (JavaScript) and then becomes declarative over time (see also: :hover, the required attribute, etc.).
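
For comparison, here’s the imperative equivalent we have to use today; this is the standard navigator.serviceWorker API, whereas the link element above is still just a proposal:

// Register the Service Worker imperatively, with a spot of feature detection
if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/serviceworker.js');
}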

Lyza wrote a fantastic article on Smashing Magazine that goes into all the details of her Service Worker. I must admit to feeling extremely gratified when she wrote:

First, I’m hugely indebted to Jeremy Keith for the implementation of service workers on his own website, which served as the starting point for my own code.

Going through her code, she made this remark:

Note: I use certain ECMAScript6 (or ES2015) features in the sample code for service workers because browsers that support service workers also support these features.

That’s a really good point. I haven’t messed around much with ES6 HipsterScript stuff partly because I haven’t yet set up a transpiler, so it was nice to know that my Service Worker is a “safe space” to try some stuff out in the browser. I refactored my JavaScript to use const, let, and =>.
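
Just to illustrate the sort of thing I mean, here’s a little sketch of a fetch handler written with const and arrow functions; it’s a simplified cache-first example rather than my actual worker:

self.addEventListener('fetch', event => {
    const request = event.request;
    event.respondWith(
        // Try the cache first, then fall back to the network
        caches.match(request)
            .then( response => response || fetch(request) )
    );
});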

Jake is looking for feedback on a specific part of Service Worker functionality around URLs. If I can wrap my head around what’s being described, I’ll chime in.

Finally, I had a nice little Service Worker moment earlier today. I was doing some updates on my web server that required a reboot. When I checked in Chrome to see how long adactio.com was down, I was surprised to see that the downtime appeared to be …zero. “That’s odd” I thought, “How can my site still be reachable if the server is …oh!” That’s when I realised I was seeing a cached version of my homepage. My Service Worker was doing its thing.

I had been thinking of Service Workers as a tool to help in situations where the user agent goes offline. But of course it’s an equally useful tool for when the server goes offline. This was a nice reminder of that.

Small lessons, loosely learned

When Charlotte published her end-of-year report, she outlined her plans for 2016, which included “Document my JavaScript learning journey.”

I want to get into the habit of writing one JavaScript post every week to make sure I keep up with learning it. Even if it’s just a few words about some relevant terminology; it can be as long or short as desired or time allows, as long as it happens.

An excellent plan! If you really want to make sure you’ve understood something, write down an explanation of it. There’s nothing quite like writing to really test your grasp of an idea. Even if nobody else ever reads it, it’s still an extremely valuable exercise for yourself.

Charlotte has already started. Here’s a short post on using JavaScript to pick an item at random from an array:

Math.round(Math.random() * (array.length - 1))

It might seem like a small thing, but look what you have to understand (there’s a short sketch after this list):

  • How Math.round works (pretty straightforward—it rounds a floating point number to the nearest whole number).
  • How Math.random works (less straightforward—it gives a random number between zero and one, meaning you have to multiply it to do anything useful with the result).
  • How array.length works (seems straightforward—it gives you the total number of items in an array, but then catches you out when you realise there can never be an index with that total value because the indices are counted from zero …which gives rise to an entire class of programming error).
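
For what it’s worth, here’s a quick sketch of the formulation you’ll see most often, using Math.floor instead of Math.round; it sidesteps the need for that minus one, and it gives every item an exactly equal chance of being picked (the array here is just an example):

var tuneTypes = ['jig', 'reel', 'hornpipe', 'polka'];
// Math.random() gives a number from 0 up to (but not including) 1,
// so flooring the product always gives a valid index from 0 to length - 1
var randomTuneType = tuneTypes[Math.floor(Math.random() * tuneTypes.length)];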

I really like this approach to learning: document each small thing as you go instead of waiting until all those individual pieces click together. That’s the approach Andy took when he was learning CSS and it led to him writing a book on the subject.

When it comes to problem-solving in general (and JavaScript in particular), I have a similar bias towards single-purpose solutions. Rather than creating a monolithic framework that attempts to solve all possible problems, I prefer a collection of smaller scripts that only do one thing, but do it really well; the UNIX philosophy.

Take, for example, Remy’s bind.js. It does two-way data-binding and nothing else. If you only need one-way data-binding, there’s Simulacra.js, which takes a similar minimalist, hands-off approach.
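
To be clear, this isn’t the actual API of either of those libraries; it’s just a back-of-the-envelope sketch of the kind of small, single-purpose one-way data-binding I’m talking about:

// One-way data-binding in miniature: when the data changes, the DOM updates,
// but changes in the DOM never flow back into the data
function bindText(element, initialValue) {
    element.textContent = initialValue;
    return {
        set: function (newValue) {
            element.textContent = newValue;
        }
    };
}

// var heading = bindText(document.querySelector('h1'), 'Hello');
// heading.set('Hello again');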

This approach of breaking large problems down into a collection of smaller problems also came up in a completely unrelated discussion at work recently. I floated the idea of starting a book club at Clearleft. Quite a few people are into the idea, but they’re not sure they could commit the time to reading a book on a deadline. Fair enough. Perhaps we could have the book club on a chapter-by-chapter basis? I don’t think that would work all that well for novels, but it did make me think of something related to Charlotte’s stated goal of learning more JavaScript…

Graham has been raving about the You Don’t Know JS book series by the supersmart Kyle Simpson. I suggested to Charlotte that we read through the books at the rate of one chapter a week. The first book is called Up and Going and our first chapter will be Into Programming, starting this week. Then at the end of the week we’ll get together to compare notes.

I’m hoping that by doing this together, there’s more chance that we’ll actually see it through to completion:

Why can I hit deadlines imposed by others, but not those imposed by myself?

Where to start?

A lot of the talks at this year’s Chrome Dev Summit were about progressive web apps. This makes me happy. But I think the focus is perhaps a bit too much on the “app” part and not enough on “progressive”.

What I mean is that there’s an inevitable tendency to focus on technologies—Service Workers, HTTPS, manifest files—and not so much on the approach. That’s understandable. The technologies are concrete, demonstrable things, whereas approaches, mindsets, and processes are far more nebulous in comparison.

Still, I think that the most important facet of building a robust, resilient website is how you approach building it rather than what you build it with.

Many of the progressive web app demos use server-side and client-side rendering, which is great …but that aspect tends to get glossed over:

Browsers without service worker support should always be served a fall-back experience. In our demo, we fall back to basic static server-side rendering, but this is only one of many options.

I think it’s vital to not think in terms of older browsers “falling back” but to think in terms of newer browsers getting a turbo-boost. That may sound like a nit-picky semantic subtlety, but it’s actually a radical difference in mindset.

Many of the arguments I’ve heard against progressive enhancement—like Tom’s presentation at Responsive Field Day—talk about the burdensome overhead of having to bolt on functionality for older or less-capable browsers (even Jake has done this). But the whole point of progressive enhancement is that you start with the simplest possible functionality for the greatest number of users. If anything gets bolted on, it’s the more advanced functionality for the newer or more capable browsers.

So if your conception of progressive enhancement is that it’s an added extra, I think you really need to turn that thinking around. And that’s hard. It’s hard because you need to rewire some well-engrained pathways.

There is some precedent for this though. It was really, really hard to convince people to stop using tables for layout and start using CSS instead. That was a tall order—completely change the way you approach building on the web. But eventually we got there.

When Ethan came out with Responsive Web Design, it was an equally difficult pill to swallow, not because of the technologies involved—media queries, percentages, etc.—but because of the change in thinking that was required. But eventually we got there.

These kinds of fundamental changes are inevitably painful …at first. After years of building websites using tables for layout, creating your first CSS-based layout was demoralisingly difficult. But the second time was a bit easier. And the third time, easier still. Until eventually it just became normal.

Likewise with responsive design. After years of building fixed-width websites, trying to build in a fluid, flexible way was frustratingly hard. But the second time wasn’t quite as hard. And the third time …well, eventually it just became normal.

So if you’re used to thinking of the all-singing, all-dancing version of your site as the starting point, it’s going to be really, really hard to instead start by building the most basic, accessible version first and then work up to the all-singing, all-dancing version …at first. But eventually it will just become normal.

For now, though, it’s going to take work.

The recent redesign of Google+ is a true case study in building a performant, responsive, progressive site:

With server-side rendering we make sure that the user can begin reading as soon as the HTML is loaded, and no JavaScript needs to run in order to update the contents of the page. Once the page is loaded and the user clicks on a link, we do not want to perform a full round-trip to render everything again. This is where client-side rendering becomes important — we just need to fetch the data and the templates, and render the new page on the client. This involves lots of tradeoffs; so we used a framework that makes server-side and client-side rendering easy without the downside of having to implement everything twice — on the server and on the client.

This took work. Had they chosen to rely on client-side rendering alone, they could have built something quicker. But I think it was worth laying that solid foundation. And the next time they need to build something this way, it’s going to be less work. Eventually it just becomes normal.

But it all starts with thinking of the server-side rendering as the default. Server-side rendering is not a fallback; client-side rendering is an enhancement.

That’s exactly the kind of mindset that enables Jack Franklin to build robust, resilient websites:

Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end.

I had a chance to chat briefly with Jack at the Edge conference in London and I congratulated him on the launch of a GoCardless site that used exactly this technique. He told me that the decision to flip the switch and make it act as a single page app came right at the end of the project. Server-side rendering was the default; client-side rendering was added later.
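
As a rough sketch of that approach (not Jack’s actual GoCardless code; the component and file names here are invented), the server does the rendering and the client-side JavaScript comes last:

// server.js: render the React application on the server so the page
// works before, and without, any client-side JavaScript
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./App');

const app = express();
app.get('/', (request, response) => {
    const html = renderToString(React.createElement(App));
    response.send(
        '<!DOCTYPE html><html><body>' +
        '<div id="root">' + html + '</div>' +
        '<script src="/client.js"></script>' +
        '</body></html>'
    );
});
app.listen(8080);

// client.js: the enhancement, added right at the end
// var ReactDOM = require('react-dom');
// var App = require('./App');
// ReactDOM.render(React.createElement(App), document.getElementById('root'));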

The key to building modern, resilient, progressive sites doesn’t lie in browser technologies or frameworks; it lies in how we think about the task at hand; how we approach building from the ground up rather than the top down. Changing the way we fundamentally think about building for the web is inevitably going to be challenging …at first. But it will also be immensely rewarding.