Tags: work

Split

When I talk about evaluating technology for front-end development, I like to draw a distinction between two categories of technology.

On the one hand, you’ve got the raw materials of the web: HTML, CSS, and JavaScript. This is what users will ultimately interact with.

On the other hand, you’ve got all the tools and technologies that help you produce the HTML, CSS, and JavaScript: pre-processors, post-processors, transpilers, bundlers, and other build tools.

Personally, I’m much more interested and excited by the materials than I am by the tools. But I think it’s right and proper that other developers are excited by the tools. A good balance of both is probably the healthiest mix.

I’m never sure what to call these two categories. Maybe the materials are the “external” technologies, because they’re what users will interact with. Whereas all the other technologies—that mostly live on a developer’s machine—are the “internal” technologies.

Another nice phrase is something I heard during Chris’s talk at An Event Apart in Seattle, when he quoted Brad, who talked about the front of the front end and the back of the front end.

I’m definitely more of a front-of-the-front-end kind of developer. I have opinions on the quality of the materials that get served up to users; the output should be accessible and performant. But I don’t particularly care about the tools that produced those materials on the back of the front end. Use whatever works for you (or whatever works for your team).

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time. Like I said, from a user’s point of view, it’s irrelevant what text editor or version control system you use.

Now, you could make the argument that anything that is good for developer convenience is automatically good for user experience because faster, more efficient development should result in better output. While that’s true in theory, I highly recommend Alex’s post, The “Developer Experience” Bait-and-Switch.

Where it gets interesting is when a technology that’s designed for developer convenience is made out of the very materials being delivered to users. For example, a CSS framework like Bootstrap is made of CSS. That’s different to a tool like Sass which outputs CSS. Whether or not a developer chooses to use Sass is irrelevant to the user—the final output will be CSS either way. But if a developer chooses to use a CSS framework, that decision has a direct impact on the user experience. The user must download the framework in order for the developer to get the benefit.

So whereas Sass sits at the back of the front end—where I don’t care what you use—Bootstrap sits at the front of the front end. For tools like that, I don’t think saying “use whatever works for you” is good enough. It’s got to be weighed against the cost to the user.

Historically, it’s been a similar story with JavaScript libraries. They’re written in JavaScript, and so they’re going to be executed in the browser. If a developer wanted to use jQuery to make their life easier, the user paid the price in downloading the jQuery library.

But I’ve noticed a welcome change with some of the bigger JavaScript frameworks. Whereas the initial messaging around frameworks like React touted the benefits of state management and the virtual DOM, I feel like that’s not as prevalent now. You’re much more likely to hear people—quite rightly—talk about the benefits of modularity and componentisation. If you combine that with the rise of Node—which means that JavaScript is no longer confined to the browser—then these frameworks can move from the front of the front end to the back of the front end.

We’ve certainly seen that at Clearleft. We’ve worked on multiple React projects, but in every case, the output was server-rendered. Developers get the benefit of working with a tool that helps them. Users don’t pay the price.

For me, this question of whether a framework will be used on the client side or the server side is crucial.

Let me tell you about a Clearleft project that sticks in my mind. We were working with a big international client on a product that was going to be rolled out to students and teachers in developing countries. This was right up my alley! We did plenty of research into network conditions and typical device usage. That then informed a tight performance budget. Every design decision—from web fonts to images—was informed by that performance budget. We were producing lean, mean markup, CSS, and JavaScript. But we weren’t the ones implementing the final site. That was being done by the client’s offshore software team, and they insisted on using React. “That’s okay”, I thought. “React can be used server-side so we can still output just what’s needed, right?” Alas, no. These developers did everything client side. When the final site launched, the log-in screen alone required megabytes of JavaScript just to render a form. It was, in my opinion, entirely unfit for purpose. It still pains me when I think about it.

That was a few years ago. I think that these days it has become a lot easier to make the decision to use a framework on the back of the front end. Like I said, that’s certainly been the case on recent Clearleft projects that involved React or Vue.

It surprises me, then, when I see the question of server rendering or client rendering treated almost like an implementation detail. It might be an implementation detail from a developer’s perspective, but it’s a key decision for the user experience. The performance cost of putting your entire tech stack into the browser can be enormous.

Alex Sanders from the development team at The Guardian published a post recently called Revisiting the rendering tier. In it, he describes how they’re moving to React. Now, if this were a move to client-rendered React, that would make a big impact on the user experience. The thing is, I couldn’t tell from the article whether React was going to be used in the browser or on the server. The article talks about “rendering”—which is something that browsers do—and “the DOM”—which is something that only exists in browsers.

So I asked. It turns out that this plan is very much about generating HTML and CSS on the server before sending it to the browser. Excellent!

With that question answered, I’m cool with whatever they choose to use. In this case, they’re choosing to use CSS-in-JS (although, to be pedantic, there’s no C anymore so technically it’s SS-in-JS). As long as the “JS” part is JavaScript on a server, then it makes no difference to the end user, and therefore no difference to me. Not my circus, not my monkeys. For users, the end result is the same whether styling is applied via a selector in an external stylesheet or, for example, via an inline style declaration (and in some situations, a server-rendered CSS-in-JS solution might be better for performance). And so, as a user-centred developer, this is something that I don’t need to care about.

Except…

I have misgivings. But just to be clear, these misgivings have nothing to do with users. My misgivings are entirely to do with another group of people: the people who make websites.

There’s a second-order effect. By making React—or even JavaScript in general—a requirement for styling something on a web page, the barrier to entry is raised.

At least, I think that the barrier to entry is raised. I completely acknowledge that this is a subjective judgement. In fact, the reason why a team might decide to make JavaScript a requirement for participation might well be because they believe it makes it easier for people to participate. Let me explain…

It wasn’t that long ago that devs coming from a Computer Science background were deriding CSS for its simplicity, complaining that “it’s broken” and turning their noses up at it. That rhetoric, thankfully, is waning. Nowadays they’re far more likely to acknowledge that CSS might be simple, but it isn’t easy. Concepts like the cascade and specificity are real head-scratchers, and any prior knowledge from imperative programming languages won’t help you in this declarative world—all your hard-won experience and know-how isn’t fungible. Instead, it seems as though all this cascading and specificity is butchering the modularity of your nicely isolated components.

It’s no surprise that programmers with this kind of background would treat CSS as damage and find ways to route around it. The many flavours of CSS-in-JS are testament to this. From a programmer’s point of view, this solution has made things easier. Best of all, as long as it’s being done on the server, there’s no penalty for end users. But now the price is paid in the diversity of your team. In order to participate, a Computer Science programming mindset is now pretty much a requirement. For someone coming from a more declarative background—with really good HTML and CSS skills—everything suddenly seems needlessly complex. And as Tantek observed:

Complexity reinforces privilege.

The result is a form of gatekeeping. I don’t think it’s intentional. I don’t think it’s malicious. It’s being done with the best of intentions, in pursuit of efficiency and productivity. But these code decisions are reflected in hiring practices that exclude people with different but equally valuable skills and perspectives.

Rachel describes HTML, CSS and our vanishing industry entry points:

If we make it so that you have to understand programming to even start, then we take something open and enabling, and place it back in the hands of those who are already privileged.

I think there’s a comparison here with toxic masculinity. Toxic masculinity is obviously terrible for women, but it’s also really shitty for men in the way it stigmatises any male behaviour that doesn’t fit its worldview. Likewise, if the only people your team is interested in hiring are traditional programmers, then those programmers are going to resent having to spend their time dealing with semantic markup, accessibility, styling, and other disciplines that they never trained in. Heydon correctly identifies this as reluctant gatekeeping:

By assuming the role of the Full Stack Developer (which is, in practice, a computer scientist who also writes HTML and CSS), one takes responsibility for all the code, in spite of its radical variance in syntax and purpose, and becomes the gatekeeper of at least some kinds of code one simply doesn’t care about writing well.

This hurts everyone. It’s bad for your team. It’s even worse for the wider development community.

Last year, I was asked “Is there a fear or professional challenge that keeps you up at night?” I responded:

My greatest fear for the web is that it becomes the domain of an elite priesthood of developers. I firmly believe that, as Tim Berners-Lee put it, “this is for everyone.” And I don’t just mean it’s for everyone to use—I believe it’s for everyone to make as well. That’s why I get very worried by anything that raises the barrier to entry to web design and web development.

I’ve described a number of dichotomies here:

  • Materials vs. tools,
  • Front of the front end vs. back of the front end,
  • User experience vs. developer experience,
  • Client-side rendering vs. server-side rendering,
  • Declarative languages vs. imperative languages.

But the split that worries me the most is this:

  • The people who make the web vs. the people who are excluded from making the web.

Dev perception

Chris put together a terrific round-up of posts recently called Simple & Boring. It links off to a number of great articles on the topic of complexity (and simplicity) in web development.

I had linked to quite a few of the articles myself already, but one I hadn’t seen was from David DeSandro who wrote New tech gets chatter:

You don’t hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

I think that’s a very good point.

It’s relatively easy to write and speak about new technologies. You’re excited about them, and there’s probably an eager audience who can learn from what you have to say.

It’s trickier to write something insightful about a tried and trusted (perhaps even boring) technology that’s been around for a while. You could maybe write little tips and tricks, but I bet your inner critic would tell you that nobody’s interested in hearing about that old tech. It’s boring.

The result is that what’s being written about is not a reflection of what’s being widely used. And that’s okay …as long as you know that’s the case. But I worry that there’s a perception problem. Because of the outsize weighting of new and exciting technologies, a typical developer could feel that their skills are out of date and the technologies they’re using are passé …even if those technologies are actually in wide use.

I don’t know about you, but I constantly feel like I’m behind the curve because I’m not currently using TypeScript or GraphQL or React. Those are all interesting technologies, to be sure, but the time to pick any of them up is when they solve a specific problem I’m having. Learning a new technology just to mitigate a fear of missing out isn’t a scalable strategy. It’s reasonable to investigate a technology because you genuinely think it’s exciting; it’s quite another matter to feel like you must investigate a technology in order to survive. That way lies burn-out.

I find it very grounding to talk to Drew and Rachel about the people using their Perch CMS product. These are working developers, but they are far removed from the world of tools and frameworks forged in the startup world.

In a recent (excellent) article comparing the performance of Formula One websites, Jake made this observation at the end:

However, none of the teams used any of the big modern frameworks. They’re mostly Wordpress & Drupal, with a lot of jQuery. It makes me feel like I’ve been in a bubble in terms of the technologies that make up the bulk of the web.

I think this is very astute. I also think it’s completely understandable to form ideas about what matters to developers by looking at what’s being discussed on Twitter, what’s being starred on Github, what’s being spoken about at conferences, and what’s being written about on Ev’s blog. But it worries me when I see browser devrel teams focusing their efforts on what appears to be the needs of typical developers based on the amount of ink spilled and breath expelled.

I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that any other industry would value more highly than Uber for dogs.

Trys wrote a great blog post called City life, where he compares his experience of doing CMS-driven agency work with his experience working at a startup in Shoreditch:

I was chatting to one of the team about my previous role. “I built two websites a month in WordPress”.

They laughed… “WordPress! Who uses that anymore?!”

Nearly a third of the web as it turns out - but maybe not on the Silicon Roundabout.

I’m not necessarily suggesting that there should be more articles and talks about older, more established technologies. Conferences in particular are supposed to give audiences a taste of what’s coming—they can be a great way of quickly finding out what’s exciting in the world of development. But we shouldn’t feel bad if those topics don’t match our day-to-day reality.

Ultimately what matters is building something—a website, a web app, whatever—that best serves end users. If that requires a new and exciting technology, that’s great. But if it requires an old and boring technology, that’s also great. What matters here is appropriateness.

When we’re evaluating technologies for appropriateness, I hope that we will do so through the lens of what’s best for users, not what we feel compelled to use based on a gnawing sense of irrelevancy driven by the perceived popularity of newer technologies.

Going Offline—the talk of the book

I gave a new talk at An Event Apart in Seattle yesterday morning. The talk was called Going Offline, which the eagle-eyed amongst you will recognise as the title of my most recent book, all about service workers.

I was quite nervous about this talk. It’s very different from my usual fare. Usually I have some big sweeping arc of history, and lots of pretentious ideas joined together into some kind of narrative arc. But this talk needed to be more straightforward and practical. I wasn’t sure how well I would manage that brief.

I knew from pretty early on that I was going to show—and explain—some code examples. Those were the parts I sweated over the most. I knew I’d be presenting to a mixed audience of designers, developers, and other web professionals. I couldn’t assume too much existing knowledge. At the same time, I didn’t want to teach anyone to suck eggs.

In the end, there was an overarching meta-theme to the talk, which was this: logic is more important than code. In other words, figuring out what you’re trying to accomplish (and describing it clearly) is more important than typing curly braces and semi-colons. Programming is an act of translation. Before you can translate something, you need to be able to articulate it clearly in your own language first. By emphasising that point, I hoped to make the code less overwhelming to people unfamiliar with it.

I had tested the talk with some of my Clearleft colleagues, and they gave me great feedback. But I never know until I’ve actually given a talk in front of a real conference audience whether the talk is any good or not. Now that I’ve given the talk, and received more feedback, I think I can confidently say that it’s pretty damn good.

My goal was to explain some fairly gnarly concepts—let’s face it: service workers are downright weird, and not the easiest thing to get your head around—and to leave the audience with two feelings:

  1. This is exciting, and
  2. This is something I can do today.

I deliberately left time for questions, bribing people with free copies of my book. I got some great questions, and I may incorporate some of them into future versions of this talk (conference organisers, if this sounds like the kind of talk you’d like at your event, please get in touch). Some of the points brought up in the questions were:

  • Is there some kind of wizard for creating a typical service worker script for any site? I didn’t have a direct answer to this, but I have attempted to make a minimal viable service worker that could be used for just about any site (there’s a rough sketch of the idea after this list). Mostly I encouraged the questioner to roll their sleeves up and try writing a bespoke script. I also mentioned the Workbox library, but I gave my opinion that if you’re going to spend the time to learn the library, you may as well spend the time to learn the underlying language.
  • What are some state-of-the-art progressive web apps for offline user experiences? Ooh, this one kind of stumped me. I mean, the obvious poster children for progressive web apps are things like Twitter, Instagram, and Pinterest. They’re all great but the offline experience is somewhat limited. To be honest, I think there’s more potential for great offline experiences by publishers. I especially love the pattern on personal sites like Una’s and Sara’s where people can choose to save articles offline to read later—like a bespoke Instapaper or Pocket. I’d love to see that pattern adopted by some big publications. I particularly like that it gives so much more control directly to the end user. Instead of trying to guess what kind of offline experience they want, we give them the tools to craft their own.
  • Do caches get cleaned up automatically? Great question! And the answer is mostly no—although browsers do have their own heuristics about how much space you get to play with. There’s a whole chapter in my book about being a good citizen and cleaning up your caches, but I didn’t include that in the talk because it isn’t exactly exciting: “Hey everyone! Now we’re going to do some housekeeping—yay!”
  • Isn’t there potential for abuse here? This is related to the previous question, and it’s another great question to ask of any technology. In short, yes. Bad actors could use service workers to fill up caches unnecessarily. I’ve written about back door service workers too, although the real problem there is with iframes rather than service workers—iframes and cookies are technologies that are already being abused by bad actors, and we’re going to see more and more interventions by ethical browser makers (like Mozilla) to clamp down on those technologies …just as browsers had to clamp down on the abuse of pop-up windows in the early days of JavaScript. The cache API could become a tragedy of the commons. I liken the situation to regulation: we should self-regulate, but if we prove ourselves incapable of that, then outside regulation (by browsers) will be imposed upon us.
  • What kind of things are in the future for service workers? Excellent question! If you think about it, a service worker is kind of a conduit that gives you access to different APIs: the Cache API and the Fetch API being the main ones now. A service worker is like an airport and the APIs are like the airlines. There are other APIs that you can access through service workers. Notifications are available now on desktop and on Android, and they’ll be coming to iOS soon. Background Sync is another powerful API accessed through service workers that will get more and more browser support over time. The great thing is that you can start using these APIs today even if they aren’t universally supported. Then, over time, more and more of your users will benefit from those enhancements.
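
To give a flavour of what I mean by a minimal viable service worker, here’s a rough sketch of the general pattern rather than the exact script I pointed people to. The cache name and the /offline URL are placeholders:

const cacheName = 'files-v1'; // placeholder cache name

// during installation, cache an offline fallback page
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => cache.add('/offline') ) // assumes an /offline page exists
  );
});

// go to the network first, fall back to the cache, fall back to the offline page
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.method !== 'GET') {
    return;
  }
  fetchEvent.respondWith(
    fetch(request)
    .catch( () => {
      return caches.match(request)
      .then( responseFromCache => responseFromCache || caches.match('/offline') );
    })
  );
});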

If you attended the talk and want to learn more about service workers, there’s my book (obvs), but I’ve also written lots of blog posts about service workers and I’ve linked to lots of resources too.

Finally, here’s a list of links to all the books, sites, and articles I referenced in my talk…

Move Fast and Don’t Break Things by Scott Jehl

Scott Jehl is speaking at An Event Apart in Seattle—yay! His talk is called Move Fast and Don’t Break Things:

Performance is a high priority for any site of scale today, but it can be easier to make a site fast than to keep it that way. As a site’s features and design evolves, its performance is often threatened for a number of reasons, making it hard to ensure fast, resilient access to services. In this session, Scott will draw from real-world examples where business goals and other priorities have conflicted with page performance, and share some strategies and practices that have helped major sites overcome those challenges to defend their speed without compromises.

The title is a riff on the “move fast and break things” motto, which comes from a more naive time on the web. But Scott finds part of it relatable. Things break. We want to move fast without breaking things.

This is a performance talk, which is another kind of moving fast. Scott starts with a brief history of not breaking websites. He’s been chipping away at websites for 20 years now. Remember Positioning Is Everything? How about Quirksmode? That one’s still around.

In the early days, building a website that was “not broken” was difficult, but it was difficult for different reasons. We were focused on consistency. We had to deal with differences between browsers. There were two ways of dealing with browsers: browser detection and feature detection.

The feature-based approach was more sustainable but harder. It fits nicely with the practice of progressive enhancement. It’s a good mindset for dealing with the explosion of devices that kicked off later. Touch screens made us rethink our mouse- and hover-centric assumptions. That made us realise how much keyboard-driven access mattered all along.

Browsers exploded too. And our data networks changed. With this explosion of considerations, it was clear that our early ideas of “not broken” didn’t work. Our notion of what constituted “not broken” was itself broken. Consistency just doesn’t cut it.

But there was a comforting part to this too. It turned out that progressive enhancement was there to help …even though we didn’t know what new devices were going to appear. This is a recurring theme throughout Scott’s career. So given all these benefits of progressive enhancement, it shouldn’t be surprising that it turns out to be really good for performance too. If you practice progressive enhancement, you’re kind of a performance expert already.

People started talking about new performance metrics that we should care about. We’ve got new tools, like Page Speed Insights. It gives tangible advice on how to test things. Web Page Test is another great tool. Once you prove you’re a human, Web Page Test will give you loads of details on how a page loaded. And you get this great visual timeline.

This is where we can start to discuss the metrics we want to focus on. Traditionally, we focused on file size, which still matters. But for goal-setting, we want to focus on user-perceived metrics.

First Meaningful Content. It’s about how soon the page appears to be useful to a user. Progressive enhancement is a perfect match for this! When you first make a request to a website, it’s usually for a web page. But to render that page, it might need to request more files like CSS or JavaScript. All of this adds up. From a user perspective, if the HTML is downloaded, but the browser can’t render it, that’s broken.

The average time for this on the web right now is around six seconds. That’s broken. The render blockers are the problem here.

Consider assets like scripts. Can you get the browser to load them without holding up the rendering of the page? If you can add async or defer to a script element in the head, you should do that. Sometimes that’s not an option though.

For CSS, it’s tricky. We’ve delivered the HTML that we need but we’ve got to wait for the CSS before rendering it. So what can you bundle into that initial payload?

You can use server push. This is a new technology that comes with HTTP2. H2, as it’s called, is very performance-focused. Just turning on H2 will probably make your site faster. Server push allows the server to send files to the browser before the browser has even asked for them. You can do this with directives in Apache, for example. You could push CSS whenever an HTML file is requested. But we need to be careful not to go too far. You don’t want to send too much.

Server push is great in moderation. But it is new, and it may not even be supported by your server.

Another option is to inline CSS (well, actually Scott, this is technically embedding CSS). It’s great for first render, but isn’t it wasteful for caching? Scott has a clever pattern that uses the Cache API to grab the contents of the inlined CSS and put a copy of its contents into the cache. Then it’s ready to be served up by a service worker.

By the way, this isn’t just for CSS. You could grab the contents of inlined SVGs and create cached versions for later use.
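
I don’t have Scott’s exact code to hand, but the gist of the pattern might look something like this minimal sketch (the style element’s ID and the cache name are assumptions for illustration):

// grab the inlined CSS and store it in a cache under the URL
// that the full stylesheet would normally be requested from
if ('caches' in window) {
  const inlined = document.querySelector('style#critical'); // hypothetical ID
  if (inlined) {
    caches.open('static-v1').then( cache => {
      cache.put('/css/site.css', new Response(inlined.textContent, {
        headers: { 'Content-Type': 'text/css' }
      }));
    });
  }
}

From then on, a service worker can answer requests for that stylesheet straight from the cache, even though it was never downloaded as a separate file.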

So inlining CSS is good, but again, in moderation. You don’t want to embed anything bigger than 15 or 20 kilobytes. You might want to separate out the critical CSS and only embed that on first render. You don’t need to go through your CSS by hand to figure out what’s critical—there are tools that do this that integrate with your build process. Embed that critical CSS into the head of your document, and also start preloading the full CSS. Here’s a clever technique that turns a preload link into a stylesheet link:

<link rel="preload" href="site.css" as="style" onload="this.rel='stylesheet'">

Also include this:

<noscript><link rel="stylesheet" href="site.css"></noscript>

You can also optimise for return visits. It’s all about the cache.

In the past, we might’ve used a cookie to distinguish a returning visitor from a first-time visitor. But cookies kind of suck. Here’s something that Scott has been thinking about: service workers can intercept outgoing requests. A service worker could send a header that matches the current build of CSS. On the server, we can check for this header. If it’s not the latest CSS, we can server push the latest version, or inline it.

The neat thing about service workers is that they have to install before they take over. Scott makes use of this install event to put your important assets into a cache. Only once that is done do we start adding that extra header to requests.
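
As a rough sketch of the idea (the header name and version string here are made up for illustration, not Scott’s actual implementation), the fetch handler might tag outgoing page requests like this:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  // only tag requests for HTML pages
  if (request.mode === 'navigate') {
    const headers = new Headers(request.headers);
    headers.set('X-CSS-Version', 'v12'); // hypothetical identifier for the cached CSS build
    fetchEvent.respondWith(
      fetch(new Request(request.url, { headers }))
    );
  }
});

The server can then check for that header and decide whether it needs to inline (or push) the latest CSS, or whether the service worker already has it cached.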

Watch out for an article on the Filament Group blog on this technique!

With performance, more weight doesn’t have to mean more wait. You can have a heavy page that still appears to load quickly by altering the prioritisation of what loads first.

Web pages are very heavy now. There’s a real cost to every byte. Tim’s WhatDoesMySiteCost.com shows that the CNN home page costs almost fifty cents to load for someone in America!

Time to interactive. This is the time until a user can actually use what’s on the screen. The issue is almost always with JavaScript. The page looks usable, but you can’t use it yet.

Addy Osmani suggests we should get to interactive in under five seconds on a 3G network on a median mobile device. Your iPhone is not a median mobile device. A typical phone takes six seconds to process a megabyte of JavaScript after it has downloaded. So even if the network is fast, the time to interactive can still be very long.

This all comes down to our industry’s increasing reliance on JavaScript just to render content. The pendulum seems to swing back and forth between client-side and server-side rendering. It’s been great to see libraries like Vue and Ember embrace server-side rendering.

But even with server-side rendering, there’s still usually a rehydration step where all the JavaScript gets parsed and that really affects time to interaction.

Code splitting can help. Webpack can do this. That helps with first-party JavaScript, but what about third-party JavaScript?

Scott believes it’s easier to make a fast website than to keep a website fast. And that’s down to all the third-party scripts that people throw in: analytics, ads, tracking. They can wreak havoc on all your hard work.

These scripts apparently contribute to the business model, so it can be hard for us to make the case for removing them. Tools like SpeedCurve can help people stay informed on the impact of these scripts. It allows you to set up performance budgets and it shows you when pages go over budget. When that happens, we have leverage to step in and push back.

Assuming you lose that battle, what else can we do?

These days, lots of A/B testing and personalisation happens on the client side. The tooling is easy to use, but these client-side approaches are costly!

A typical problematic pattern is this: the server sends one version of the page, and once the page is loaded, the whole page gets replaced with a different layout targeted at the user. This leads to a terrifying new metric that Scott calls Second Meaningful Content.

Assuming we can’t remove the madness, what can we do? We could at least not do this for first-time visits. We could load the scripts asynchronously. We can preload the scripts at the top of the page. But ideally we want to move these things to the server. Server-side A/B testing and personalisation have existed for a while now.

Scott has been experimenting with a middleware solution. There’s this idea of server workers that Cloudflare is offering. You can manipulate the page that gets sent from the server to the browser—all the things you would do for an A/B test. Scott is doing this by using comments in the HTML to demarcate which portions of the page should be filtered for testing. The server worker then deletes a block for some users, and deletes a different block for other users. Scott has written about this approach.
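
I don’t have Scott’s code, but the general shape of the technique might look something like this sketch (the comment markers and the crude 50/50 split are my own illustrative assumptions; a real test would persist each user’s variant):

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const response = await fetch(request);
  const html = await response.text();
  // pick which variant this user gets
  const unwanted = Math.random() < 0.5 ? 'variant-b' : 'variant-a';
  // strip out the block demarcated by HTML comments for the other variant
  const pattern = new RegExp('<!-- ' + unwanted + ' start -->[\\s\\S]*?<!-- ' + unwanted + ' end -->', 'g');
  return new Response(html.replace(pattern, ''), {
    status: response.status,
    statusText: response.statusText,
    headers: response.headers
  });
}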

The point here isn’t about using Cloudflare. The broader point is that it’s much faster to do these things on the server. We need to defend our user’s time.

Another issue, other than third-party scripts, is the page weight on home pages and landing pages. Marketing teams love to fill these things with enticing rich imagery and carousels. They’re really difficult to keep performant because they change all the time. Sometimes we’re not even in control of the source code of these pages.

We can advocate for new best practices like responsive images. The srcset attribute on the img element; the picture element for when you need more control. These are great tools. What’s not so great is writing the markup. It’s confusing! Ideally we’d have a CMS drive this, but a lot of the time, landing pages fall outside of the purview of the CMS.

Scott has been using Vue.js to make a responsive image builder—a form that people can paste their URLs into, which spits out the markup to use. Anything we can do by creating tools like these really helps to defend the performance of a site.

Another thing we can do is lazy loading. Focus on the assets. The BBC homepage uses some lazy loading for images—they blink into view as you scroll down the page. They use LazySizes, which you can find on Github. You use data- attributes to list your image sources. Scott realises that LazySizes is not progressive enhancement. He wouldn’t recommend using it on all images, just some images further down the page.

But thankfully, we won’t need these workarounds soon. Soon we’ll have lazy loading in browsers. There’s a lazyload attribute that we’ll be able to set on img and iframe elements:

<img src=".." alt="..." lazyload="on">

It’s not implemented yet, but it’s coming in Chrome. It might be that this behaviour even becomes the default way of loading images in browsers.

If you dig under the hood of the implementation coming in Chrome, it actually loads all the images, but the ones being lazyloaded are only sent partially with a 206 response header. That gives enough information for the browser to lay out the page without loading the whole image initially.

To wrap up, Scott takes comfort from the fact that there are resilient patterns out there to help us. And remember, it is our job to defend the user’s experience.

Marty’s mashup

While the Interaction 19 event was a bit of a mixed bag overall, there were some standout speakers.

Marty Neumeier was unsurprisingly excellent. I’d seen him speak before, at UX London a few years back, so I knew he’d be good. He has a very reassuring, avuncular manner when he’s speaking. You know the way that there are some people you could just listen to all day? He’s one of those.

Marty’s talk at Interaction 19 was particularly interesting because it was about his new book. Now, why would that be of particular interest? Well, this new book—Scramble—is a business book, but it’s written in the style of a thriller. He wanted it to be like one of those airport books that people read as a guilty pleasure.

One rainy night in December, young CEO David Stone is inexplicably called back to the office. The company’s chairman tells him that the board members have reached the end of their patience. If David can’t produce a viable turnaround plan in five weeks, he’s out of a job. His only hope is to try something new. But what?

I love this idea!

I’ve talked before about borrowing narrative structures from literature and film and applying them to blog posts and conference talks—techniques like flashback, in medias res, etc.—so I really like the idea of taking an entire genre and applying it to a technical topic.

The closest I’ve seen is the comic that Scott McCloud wrote for the release of Google Chrome back in 2008. But how about a romantic comedy about service workers? Or a detective novel about CSS grid?

I have a feeling I’ll be thinking about Marty Neumeier’s book next time I’m struggling to put a conference talk together.

In the meantime, if you want to learn from the master storyteller himself, Clearleft are running a two-day Brand Master Workshop with Marty on March 14th and 15th at The Barbican in London. Early bird tickets are on sale until this Thursday, so don’t dilly-dally if you were thinking about nabbing your spot.

New Adventures 2019

My trip to Nottingham for the New Adventures conference went very well indeed.

First of all, I had an all-day workshop to run. I was nervous. Because I no longer prepare slides for workshops—and instead rely on exercises and discussions—I always feel like I’m winging it. I’m not winging it, but without the security blanket of a slide deck, I don’t have anything to fall back on.

As it turned out, I needn’t have worried. The workshop went great. Well, I thought it went great but you’d really have to ask the attendees to know for sure. One of the workshop participants, Westley Knight, wrote about his experience:

The workshop itself was fluid enough to cater to the topics that the attendees were interested in; from over-arching philosophy to technical detail around service workers and new APIs. It has helped me to understand that learning in this kind of environment doesn’t have to be rigorously structured, and can be shaped as the day progresses.

(By the way, if you’d like me to run this workshop at your company, get in touch.)

With the workshop done, it was time for me to freak out fully about my conference talk. I was set to open the show. No pressure.

Actually, I felt pretty damn good about what I had been preparing for the past few months (it takes me aaages to put a talk together), but I always get nervous about presenting new material—until I’ve actually given the talk in front of a real audience, I don’t actually know if it’s any good or not.

Clare was speaking right after me, but she was having some technical issues. It’s funny; as soon as she had a problem, I immediately switched modes from conference speaker to conference organiser. Instead of being nervous, I flipped into being calm and reassuring, getting Clare’s presentation—and fonts—onto my laptop, and making sure her talk would go as smoothly as possible (it did!).

My talk went down well. The audience was great. Everyone paid attention, laughed along with the jokes, and really listened to what I was trying to say. For a speaker, you can’t ask for better than that. And people said very nice things about the talk afterwards. Sam Goddard wrote about how it resonated with him.

Wearing my eye-watering loud paisley shirt on stage at New Adventures.

You can peruse the slides from my presentation but they make very little sense out of context. But video of the talk is forthcoming.

The advantage to being on first was that I got my talk over with at the start of the day. Then I could relax and enjoy all the other talks. And enjoy them I did! I think all of the speakers were feeling the same pressure I was, and everybody brought their A-game. There were some recurring themes throughout the day: responsibility; hope; diversity; inclusion.

So New Adventures was already an excellent event by the time we got to Ethan, who was giving the closing talk. His talk elevated the day into something truly sublime.

Look, I could gush over how good Ethan’s talk was, or try to summarise it, but there’s really no point. I’ll just say that I felt the same sense of being present at something genuinely important that I felt when I was in the room for his original responsive web design talk at An Event Apart back in 2010. When the video is released, you really must watch it. In the meantime, you can read through the articles and books that Ethan cited in his presentation.

New Adventures 2019 was worth attending just for that one talk. I was very grateful I had the opportunity to attend, and I still can’t quite believe that I also had the opportunity to speak.

Writing for hiring

Cassie joined Clearleft as a junior front-end developer last year. It’s really wonderful having her around. It’s a win-win situation: she’s enthusiastic and eager to learn; I’m keen to help her skill up in any way I can. And it’s working out great for the company—she has already demonstrated that she can produce quality HTML and CSS.

I’m very happy about Cassie’s success, not just on a personal level, but also from a business perspective. Hiring people into junior roles—when you’ve got the time and ability to train them—is an excellent policy. Hiring Charlotte back in 2014 was Clearleft’s first foray into hiring for a junior front-end dev position and it was a huge success. Cassie is demonstrating that it wasn’t just a fluke.

Alas, we can’t only hire junior developers. We’ve got a lot of work in the pipeline right now and we’re going to need a full-time seasoned developer who can hit the ground running. That’s why Clearleft is recruiting for a senior front-end developer.

As lead developer, Danielle will make the hiring decision, but because she’s so busy on project work right now—hence the need to hire more people—I’m trying to help her out any way I can. I offered to write the job description.

Seeing as I couldn’t just write “A clone of Danielle, please”, I had to think about what makes for a great front-end developer who uses their experience wisely. But I didn’t want to create a list of requirements, and I certainly didn’t want to create a list of specific technologies.

My first instinct was to look at other job ads and take my cue from them. But, let’s face it, most job ads are badly written, and prone to turning into laundry lists. So I decided to just write like I normally would. You know, like a human.

Here’s what I wrote. I hope it’s okay. I don’t really have much to compare it to, other than what I don’t want it to be.

Have a read of it and see what you think. And if you’re an experienced front-end developer who’d like to work by the seaside, you should apply for the role.

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
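
In service worker terms, that final silent-caching step might look something like this rough sketch (the shape of the push payload, a JSON object with a url property, is an assumption):

addEventListener('push', pushEvent => {
  const data = pushEvent.data ? pushEvent.data.json() : {};
  if (data.url) {
    pushEvent.waitUntil(
      caches.open('pages') // hypothetical cache name
      .then( cache => cache.add(data.url) ) // fetch the new page and cache it; no notification shown
    );
  }
});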

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with content the user never asked for. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.
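
The Push API itself reflects this: when a page subscribes in Chrome, it has to set the userVisibleOnly flag, which is effectively a promise to show a notification for every push message. Here’s a minimal sketch of what subscribing looks like (the applicationServerKey is assumed to be defined elsewhere):

navigator.serviceWorker.ready
.then( registration => {
  return registration.pushManager.subscribe({
    userVisibleOnly: true, // you can’t currently subscribe without promising to show notifications
    applicationServerKey: applicationServerKey // your VAPID public key, defined elsewhere
  });
})
.then( subscription => {
  // send the subscription details to the server so it can push messages later
  console.log(JSON.stringify(subscription));
});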

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.

Service workers and videos in Safari

Alright, so I’ve already talked about some gotchas when debugging service worker issues. But what if you don’t even realise the problem has anything to do with your service worker?

This is not a hypothetical situation. I encountered this very thing myself. Gather ‘round the campfire, children…

One of the latest case studies on the Clearleft site is a nice write-up by Luke of designing a mobile app for Virgin Holidays. The case study includes a lovely video that demonstrates the log-in flow. I implemented that using a video element (with a poster image). Nice and straightforward. Super easy. All good.

But I hadn’t done my due diligence in browser testing (I guess I didn’t even think of it in this case). Hana informed me that the video wasn’t working at all in Safari. The poster image appeared just fine, but when you clicked on it, the video didn’t load.

I ducked, ducked, and went, uncovering what appeared to be the root of the problem. It seems that Safari is fussy about having servers support something called “byte-range requests”.

I had put the video in question on an Amazon S3 server. I came to the conclusion that S3 mustn’t support these kinds of headers correctly, or something.

Now I had a diagnosis. The next step was figuring out a solution. I thought I might have to move the video off of S3 and onto a server that I could configure a bit more.

Luckily, I never got ‘round to even starting that process. That’s good. Because it turns out that my diagnosis was completely wrong.

I came across a recent post by Phil Nash called Service workers: beware Safari’s range request. The title immediately grabbed my attention. Safari: yes! Video: yes! But service workers …wait a minute!

There’s a section in Phil’s post entitled “Diagnosing the problem”, in which he says:

I first thought it could have something to do with the CDN I’m using. There were some false positives regarding streaming video through a CDN that resulted in some extra research that was ultimately fruitless.

That described my situation exactly. Except Phil went further and nailed down the real cause of the problem:

Nginx was serving correct responses to Range requests. So was the CDN. The only other problem? The service worker. And this broke the video in Safari.

Doh! I hadn’t even thought about service workers!

Phil came up with a solution, and he has kindly shared his code.

I decided to go for a dumber solution:

if ( request.url.match(/\.(mp4)$/) ) {
  return;
}

That tells the service worker to just step out of the way when it comes to video requests. Now the video plays just fine in Safari. It’s a bit of a shame, because I’m kind of penalising all browsers for Safari’s bug, but the Clearleft site isn’t using much video at all, and in any case, it might be good not to fill up the cache with large video files.

But what’s more important than any particular solution is correctly identifying the problem. I’m quite sure I never would’ve been able to fix this issue if Phil hadn’t gone to the trouble of sharing his experience. I’m very, very grateful that he did.

That’s the bigger lesson here: if you solve a problem—even if you think it’s hardly worth mentioning—please, please share your solution. It could make all the difference for someone out there.

Service workers and browser extensions

I quite enjoy a good bug hunt. Just yesterday, myself and Cassie were doing some bugfixing together. As always, the first step was to try to reproduce the problem and then isolate it. Which reminds me…

There’ve been a few occasions when I’ve been trying to debug service worker issues. The problem is rarely in reproducing the issue—it’s isolating the cause that can be frustrating. I try changing a bit of code here, and a bit of code there, in an attempt to zero in on the problem, but with no luck. Before long, I’m tearing my hair out staring at code that appears to have nothing wrong with it.

And that’s when I remember: browser extensions.

I’m currently using Firefox as my browser, and I have extensions installed to stop tracking and surveillance (these technologies are usually referred to as “ad blockers”, but that’s a bit of a misnomer—the issue isn’t with the ads; it’s with the invasive tracking).

If you think about how a service worker does its magic, it’s as if it’s sitting in the browser, waiting to intercept any requests to a particular domain. It’s like the service worker is the first port of call for any requests the browser makes. But then you add a browser extension. The browser extension is also waiting to intercept certain network requests. Now the extension is the first port of call, and the service worker is relegated to be next in line.

This, apparently, can cause issues (presumably depending on how the browser extension has been coded). In some situations, network requests that should work just fine start to fail, executing the catch clauses of fetch statements in your service worker.

So if you’ve been trying to debug a service worker issue, and you can’t seem to figure out what the problem might be, it’s not necessarily an issue with your code, or even an issue with the browser.

From now on when I’m troubleshooting service worker quirks, I’m going to introduce a step zero, before I even start reproducing or isolating the bug. I’m going to ask myself, “Are there any browser extensions installed?”

I realise that sounds as basic as asking “Are you sure the computer is switched on?” but there’s nothing wrong with having a checklist of basic questions to ask before moving on to the more complicated task of debugging.

I’m going to make a checklist. Then I’m going to use it …every time.

Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a Github thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronos waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  })
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t understand the bit where I say fetchEvent.waitUntil, then it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

fetch(request)
.then( responseFromFetch => {
  let copy = responseFromFetch.clone();
  try {
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    console.log(error);
  }
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)

At some point, when support for async waitUntil statements is universal, this precautionary measure won’t be needed, but for now wrapping them inside try doesn’t do any harm.

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure; you can get these numbers using nothing more than browser dev tools:

  • overall file size (page weight + assets), and
  • number of requests.
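
If you want a rough version of those numbers straight from the console, something like this sketch using the Resource Timing API will do the job:

// Tally up the number of requests and the bytes transferred.
// transferSize can be zero for cross-origin resources that don't send
// a Timing-Allow-Origin header, so treat the total as a ballpark figure.
let resources = performance.getEntriesByType('resource');
let requestCount = resources.length + 1; // add one for the HTML document itself
let totalBytes = resources.reduce( (total, resource) => {
  return total + (resource.transferSize || 0);
}, 0);
console.log(requestCount + ' requests, roughly ' + Math.round(totalBytes / 1024) + 'KB transferred');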

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.
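
Some of these moments can be read straight out of the browser’s Performance APIs. Here’s a rough sketch; there’s no standard entry for the two “meaningful” moments, so first contentful paint has to stand in as an approximation:

// Time to first byte, from the Navigation Timing API.
let navigationEntry = performance.getEntriesByType('navigation')[0];
console.log('Time to first byte: ' + Math.round(navigationEntry.responseStart) + 'ms');

// First paint and first contentful paint, from the Paint Timing API.
performance.getEntriesByType('paint').forEach( paintEntry => {
  console.log(paintEntry.name + ': ' + Math.round(paintEntry.startTime) + 'ms');
});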

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

First visit factors vs. repeat visit factors
1st byte
  • First visit: server speed, network speed
  • Repeat visit: server speed, network speed, caching
1st render
  • First visit: network speed, critical path assets
  • Repeat visit: network speed, critical path assets, caching
1st meaningful paint
  • First visit: network speed, font-loading strategy, image optimisation
  • Repeat visit: network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction
  • First visit: network speed, device processing power, JavaScript size
  • Repeat visit: network speed, device processing power, JavaScript size, caching

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

                            First visit time   Repeat visit time
1st byte                    1.476 seconds      1.215 seconds
1st render                  2.633 seconds      1.930 seconds
1st meaningful paint        2.633 seconds      1.930 seconds
1st meaningful interaction  2.868 seconds      2.083 seconds

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
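
If I did go down that route, the deferring half might look something like this sketch (not the actual adactio.com code, and the stylesheet path is made up):

// With the critical styles inlined in the head, load the full
// stylesheet from JavaScript so that it doesn't block the first render.
let stylesheet = document.createElement('link');
stylesheet.rel = 'stylesheet';
stylesheet.href = '/css/styles.css'; // hypothetical path
document.head.appendChild(stylesheet);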

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into a working state, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

                            First visit goal   Repeat visit goal
1st byte                    1.476 seconds      1.215 seconds
1st render                  2.133 seconds      1.430 seconds
1st meaningful paint        2.133 seconds      1.430 seconds
1st meaningful interaction  2.368 seconds      1.583 seconds

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

                            Priority
1st byte                    Medium
1st render                  High
1st meaningful paint        Low
1st meaningful interaction  Low

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

Console methods

Whenever I create a fetch event inside a service worker, my code roughly follows the same pattern. There’s a then clause which gets executed if the fetch is successful, and a catch clause in case anything goes wrong:

fetch( request)
.then( fetchResponse => {
    // Yay! It worked.
})
.catch( fetchError => {
    // Boo! It failed.
});

In my book—Going Offline—I’m at pains to point out that those arguments being passed into each clause are yours to name. In this example I’ve called them fetchResponse and fetchError but you can call them anything you want.

I always do something with the fetchResponse inside the then clause—either I want to return the response or put it in a cache.

But I rarely do anything with fetchError. Because of that, I’ve sometimes made the mistake of leaving it out completely:

fetch( request)
.then( fetchResponse => {
    // Yay! It worked.
})
.catch( () => {
    // Boo! It failed.
});

Don’t do that. I think there’s some talk of making the error argument optional, but for now, some browsers will get upset if it’s not there.

So always include that argument, whether you call it fetchError or anything else. And seeing as it’s an error, this might be a legitimate case for outputting it to the browser’s console, even in production code.

And yes, you can output to the console from a service worker. Even though a service worker can’t access anything relating to the window or document objects, the console object (the same one you know and love) is still available in the worker’s global scope.

My muscle memory when it comes to sending something to the console is to use console.log:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.log(fetchError);
});

But in this case, the console.error method is more appropriate:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.error(fetchError);
});

Now when there’s a connectivity problem, anyone with a console window open will see the error displayed bold and red.

If that seems a bit strident to you, there’s always console.warn which will still make the output stand out, but without being quite so alarmist:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.warn(fetchError);
});

That said, in this case, console.error feels like the right choice. After all, it is technically an error.

Altering expectations

Luke has written up the selection process he went through when Clearleft was designing the Virgin Holidays app. When it comes to deploying on mobile, there were three options:

  1. Native apps
  2. A progressive web app
  3. A hybrid app

The Virgin Holidays team went with that third option.

Now, it will come as no surprise that I’m a big fan of the second option: building a progressive web app (or turning an existing site into a progressive web app). I think a progressive web app is a great solution for travel apps, and the use-case that Luke describes sounds perfect:

Easy access to resort staff and holiday details that could be viewed offline to help as many customers as possible travel without stress and enjoy a fantastic holiday

Luke explains why they chose not to go with a progressive web app:

The current level of support and leap in understanding meant we’d risk alienating many of our customers.

The issue of support is one that is largely fixed at this point. When Clearleft was working on the Virgin Holidays app, service workers hadn’t landed in iOS. Hence, the risk of alienating a lot of customers. But now that Mobile Safari has offline capabilities, that’s no longer a problem.

But it’s the second reason that’s trickier:

Simply put, customers already expected to find us in the App Store and are familiar with what apps can historically offer over websites.

I think this is the biggest challenge facing progressive web apps: battling expectations.

For over a decade, people have formed ideas about what to expect from the web and what to expect from native. From a technical perspective, native and web have become closer and closer in capabilities. But people’s expectations move slower than technological changes.

First of all, there’s the whole issue of discovery: will people understand that they can “install” a website and expect it to behave exactly like a native app? This is where install prompts and ambient badging come in. I think ambient badging is the way to go, but it’s still a tricky concept to explain to people.
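
On the install prompt side of things, some browsers (Chrome, at the time of writing) expose this to JavaScript through the beforeinstallprompt event. Here’s a rough sketch, with a hypothetical install button:

// Stash the event so the prompt can be shown when the user asks for it.
let deferredPrompt;
let installButton = document.querySelector('#install-button'); // hypothetical button

window.addEventListener('beforeinstallprompt', promptEvent => {
  promptEvent.preventDefault(); // stop the browser's default prompt
  deferredPrompt = promptEvent;
  installButton.hidden = false; // reveal the install button instead
});

installButton.addEventListener('click', () => {
  deferredPrompt.prompt(); // show the install prompt
  deferredPrompt.userChoice.then( choiceResult => {
    console.log(choiceResult.outcome); // "accepted" or "dismissed"
  });
});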

But there’s another way of looking at the current situation. Instead of seeing people’s expectations as a negative factor, maybe it’s an opportunity. There’s an opportunity right now for companies to be as groundbreaking and trendsetting as Wired.com when it switched to CSS for layout, or The Boston Globe when it launched its responsive site.

It makes for a great story. Just look at the Pinterest progressive web app for an example (skip to the end to get to the numbers):

Weekly active users on mobile web have increased 103 percent year-over-year overall, with a 156 percent increase in Brazil and 312 percent increase in India. On the engagement side, session length increased by 296 percent, the number of Pins seen increased by 401 percent and people were 295 percent more likely to save a Pin to a board. Those are amazing in and of themselves, but the growth front is where things really shined. Logins increased by 370 percent and new signups increased by 843 percent year-over-year. Since we shipped the new experience, mobile web has become the top platform for new signups. And for fun, in less than 6 months since fully shipping, we already have 800 thousand weekly users using our PWA like a native app (from their homescreen).

Now admittedly their previous mobile web experience was a dreadful doorslam, but still, those are some amazing statistics!

Maybe we’re underestimating the malleability of people’s expectations when it comes to the web on mobile. Perhaps the inertia we think we’re battling against isn’t such a problem as long as we give people a fast, reliable, engaging experience.

If you build that, they will come.

CSS grid in Internet Explorer 11

When I was in Boston, speaking on a lunchtime panel with Rachel at An Event Apart, we took some questions from the audience about CSS grid. Inevitably, a question about browser support came up—specifically about support in Internet Explorer 11.

(Technically, you can use CSS grid in IE11—in fact it was the first browser to ship a version of grid—but the prefixed syntax is different to the standard and certain features are missing.)

Rachel gave a great balanced response, saying that you need to look at your site’s stats to determine whether it’s worth the investment of your time trying to make a grid work in IE11.

My response was blunter. I said I just don’t consider IE11 as a browser that supports grid.

Now, that might sound harsh, but what I meant was: you’re already dividing your visitors into browsers that support grid, and browsers that don’t …and you’re giving something to those browsers that don’t support grid. So I’m suggesting that IE11 falls into that category and should receive the layout you’re giving to browsers that don’t support grid …because really, IE11 doesn’t support grid: that’s the whole reason why the syntax is namespaced by -ms.

You could jump through hoops to try to get your grid layout working in IE11, as detailed in a three-part series on CSS Tricks, but at that point, the amount of effort you’re putting in negates the time-saving benefits of using CSS grid in the first place.

Frankly, the whole point of prefixed CSS is that it is not used after a reasonable amount of time (originally, the idea was that it would not be used in production, but that didn’t last long). As we’ve moved away from prefixes to flags in browsers, I’m seeing the number of prefixed properties dropping, and that’s very, very good. I’ve stopped using autoprefixer on new projects, and I’ve been able to remove it from some existing ones—please consider doing the same.

And when it comes to IE11, I’ll continue to categorise it as a browser that doesn’t support CSS grid. That doesn’t mean I’m abandoning users of IE11—far from it. It means I’m giving them the layout that’s appropriate for the browser they’re using.

Remember, websites do not need to look exactly the same in every browser.

Twitter and Instagram progressive web apps

Since support for service workers landed in Mobile Safari on iOS, I’ve been trying a little experiment. Can I replace some of the native apps I use with progressive web apps?

The two major candidates are Twitter and Instagram. I added them to my home screen, and banished the native apps off to a separate screen. I’ve been using both progressive web apps for a few months now, and I have to say, they’re pretty darn great.

There are a few limitations compared to the native apps. On Twitter, if you follow a link from a tweet, it pops open in Safari, which is fine, but when you return to Twitter, it loads anew. This isn’t any fault of Twitter—this is the way that web apps have worked on iOS ever since they introduced their weird web-app-capable meta element. I hope this behaviour will be fixed in a future update.

Also, until we get web notifications on iOS, I need to keep the Twitter native app around if I want to be notified of a direct message (the only notification I allow).

Apart from those two little issues though, Twitter Lite is on par with the native app.

Instagram is also pretty great. It too suffers from some navigation issues: if I click through to someone’s profile and then return to the main feed, the feed loads anew, losing my place. It would be great if this could be fixed.

For some reason, the Instagram web app doesn’t allow uploading multiple photos …which is weird, because I can upload multiple photos on my own site by adding the multiple attribute to the input type="file" in my posting interface.

Apart from that, though, it works great. And as I never wanted notifications from Instagram anyway, the lack of web notifications doesn’t bother me at all. In fact, because the progressive web app doesn’t keep nagging me about enabling notifications, it’s a more pleasant experience overall.

Something else that was really annoying with the native app was the preponderance of advertisements. It was really getting out of hand.

Well …(looks around to make sure no one is listening)… don’t tell anyone, but the Instagram progressive web app—i.e. the website—doesn’t have any ads at all!

Here’s hoping it stays that way.

The trimCache function in Going Offline

Paul Yabsley wrote to let me know about an error in Going Offline. It’s rather embarrassing because it’s code that I’m using in the service worker for adactio.com but for some reason I messed it up in the book.

It’s the trimCache function in Chapter 7: Tidying Up. That’s the reusable piece of code that recursively reduces the number of items in a specified cache (cacheName) to a specified amount (maxItems). On pages 95 and 96 I describe the process of creating the function which, in the book, ends up like this:

 function trimCache(cacheName, maxItems) {
   cacheName.open( cache => {
     cache.keys()
     .then( items => {
       if (items.length > maxItems) {
         cache.delete(items[0])
         .then(
           trimCache(cacheName, maxItems)
         ); // end delete then
       } // end if
     }); // end keys then
   }); // end open
 } // end function

See the problem? It’s right there at the start when I try to open the cache like this:

cacheName.open( cache => {

That won’t work. The open method only works on the caches object—I should be passing the name of the cache into the caches.open method. So the code should look like this:

caches.open( cacheName )
.then( cache => {

Everything else remains the same. The corrected trimCache function is here:

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then(items => {
      if (items.length > maxItems) {
        cache.delete(items[0])
        .then(
          trimCache(cacheName, maxItems)
        ); // end delete then
      } // end if
    }); // end keys then
  }); // end open then
} // end function
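
As for when to call it, here’s one hypothetical way to invoke the function: in response to a message posted from the page (the message value, cache names, and limits are placeholders, not examples from the book):

// Trim the caches whenever the page posts a "cleanup" message
// to the service worker.
addEventListener('message', messageEvent => {
  if (messageEvent.data === 'cleanup') {
    trimCache('pages', 35);  // hypothetical cache name and limit
    trimCache('images', 20); // hypothetical cache name and limit
  }
});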

Sorry about that! I must’ve had some kind of brainfart when I was writing (and describing) that one line of code.

You may want to deface your copy of Going Offline by taking a pen to that code example. Normally I consider the practice of writing in books to be barbarism, but in this case …go for it.

New tools for art direction on the web

I’m in Boston right now, getting ready to speak at An Event Apart. This will be my second (and last) Event Apart of the year—the other time was in Seattle back in April. After that event, I wrote about how inspired I was:

It was interesting to see repeating, overlapping themes. From a purely technical perspective, three technologies that were front and centre were:

  • CSS grid,
  • variable fonts, and
  • service workers.

From listening to other attendees, the overwhelming message received was “These technologies are here—they’ve arrived.”

I was itching to combine those technologies on a project. Coincidentally, it was around that time that I started planning to publish The Gęsiówka Story. I figured I could use that as an opportunity to tinker with those front-end technologies that I was so excited about.

But I was cautious. I didn’t want to use the latest exciting technology just for the sake of it. I was very aware of the gravity of the material I was dealing with. Documenting the story of Gęsiówka was what mattered. Any front-end technologies I used had to be in support of that.

First of all, there was the typesetting. I don’t know about you, but I find choosing the right typefaces to be overwhelming. Despite all the great tips and techniques out there for choosing and pairing typefaces, I still find myself agonising over the choice—what if there’s a better choice that I’m missing?

In this case, because I wanted to use a variable font, I had a constraint that helped reduce the possibility space. I started to comb through v-fonts.com to find a suitable typeface—I was fairly sure I wanted a serious serif.

I had one other constraint. The font file had to include English, Polish, and German glyphs. That pretty much sealed the deal for Source Serif. That only has one variable axis—weight—but I decided that this could also be an interesting constraint: how much could I wrangle out of a single typeface just using various weights?

I ended up using font weights of 75, 250, 315, 325, 340, 350, 400, and 525. Most of them were for headings or one-off uses, with a font-weight of 315 for the body copy.

(And can I just say once again how impressed I am that the founding fathers of CSS were far-sighted enough to keep those font weight ranges free for future use?)

Getting the typography right posed an interesting challenge. This was a fairly long piece of writing, so it really needed to be readable without getting tiring. But at the same time, I didn’t want it to be exactly pleasant to read—that wouldn’t do the subject matter justice. I wanted the reader to feel the seriousness of the story they were reading, without being fatigued by its weight.

Colour and type went a long way to communicating that feeling. The grid sealed the deal.

The Gęsiówka Story is mostly one single column of text, so on the face of it, there isn’t much opportunity to go crazy with CSS Grid. But I realised I could use a grid to create a winding effect for the text. I had to be careful though: I didn’t want it to become uncomfortable to read. I wanted to create a slightly unsettling effect.

Every section element is turned into a seven-column grid container:

section {
    display: grid;
    grid-column-gap: 2em;
    grid-template-columns: 2em repeat(5, 1fr) 2em;
}

The first and last columns are the same width as the gutters (2em), effectively creating “outer” gutters for the grid. Each paragraph within the section takes up six of the seven columns. I use nth-of-type to alternate which six columns are used (the first six or the last six). That creates the staggered indentation:

section > p {
    grid-column: 1/7;
}
section > p:nth-of-type(even) {
    grid-column: 2/8;
}

Staggered grid.

That might seem like overkill just to indent every second paragraph by 4em, but I then used the same grid dimensions to lay out figure elements with images and captions.

section > figure {
    display: grid;
    grid-column-gap: 2em;
    grid-template-columns: 2em repeat(5, 1fr) 2em;
}

Then I can lay out differently proportioned images across different ranges of the grid:

section > figure.landscape > img {
    grid-column: 1/5;
}
section > figure.landscape > figcaption {
    grid-column: 5/8;
}
section > figure.portrait > img {
    grid-column: 1/4;
}
section > figure.portrait > figcaption {
    grid-column: 4/8;
}

Because they’re positioned on the same grid as the paragraphs, everything lines up nicely (and yes, if subgrid existed, I wouldn’t have to redeclare the grid dimensions for the figures).

Finally, I wanted to make sure that the whole thing could be read offline. After all, once you’ve visited the URL once, there’s really no reason to make any more requests to the server. Static documents—and books—are the perfect candidates for an “offline first” approach: always look in the cache, and only go to the network as a last resort.

In this case I used a variation of my minimal viable service worker script, and the result is a very short set of instructions. There’s a little bit of pre-caching going on: I grab the variable font and the HTML page itself (which includes the CSS inlined).
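
To give a flavour of that approach, here’s a minimal sketch. The cache name and file paths are placeholders rather than the actual files for The Gęsiówka Story:

// Pre-cache the page (with its inlined CSS) and the variable font.
const cacheName = 'story-v1'; // hypothetical cache name

addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      return cache.addAll([
        '/index.html', // hypothetical page URL
        '/fonts/SourceSerifVariable.woff2' // hypothetical font URL
      ]);
    })
  );
});

// Offline first: always look in the cache, and only go to the network
// as a last resort.
addEventListener('fetch', fetchEvent => {
  fetchEvent.respondWith(
    caches.match(fetchEvent.request)
    .then( responseFromCache => {
      return responseFromCache || fetch(fetchEvent.request);
    })
  );
});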

So there you have it: variable fonts, CSS grid, and service workers: three exciting front-end technologies, all of which can be applied as progressive enhancements on top of the core content.

Once again, I find that it’s personal projects that offer the most opportunities to try out new or interesting techniques. And The Gęsiówka Story is a very personal project indeed.

Praise for Going Offline

I’m very, very happy to see that my new book Going Offline is proving to be accessible and unintimidating to a wide audience—that was very much my goal when writing it.

People have been saying nice things on their blogs, which is very gratifying. It’s even more gratifying to see people use the knowledge gained from reading the book to turn those blogs into progressive web apps!

Sara Soueidan:

It doesn’t matter if you’re a designer, a junior developer or an experienced engineer — this book is perfect for anyone who wants to learn about Service Workers and take their Web application to a whole new level.

I highly recommend it. I read the book over the course of two days, but it can easily be read in half a day. And as someone who rarely ever reads a book cover to cover (I tend to quit halfway through most books), this says a lot about how good it is.

Eric Lawrence:

I was delighted to discover a straightforward, very approachable reference on designing a ServiceWorker-backed application: Going Offline by Jeremy Keith. The book is short (I’m busy), direct (“Here’s a problem, here’s how to solve it”), opinionated in the best way (landmine-avoiding “Do this”), and humorous without being confusing. As anyone who has received unsolicited (or solicited) feedback from me about their book knows, I’m an extremely picky reader, and I have no significant complaints on this one. Highly recommended.

Ben Nadel:

If you’re interested in the “offline first” movement or want to learn more about Service Workers, Going Offline by Jeremy Keith is a really gentle and highly accessible introduction to the topic.

Daniel Koskine:

Jeremy nails it again with this beginner-friendly introduction to Service Workers and Progressive Web Apps.

Donny Truong:

Jeremy’s technical writing is as superb as always. Similar to his first book for A Book Apart, which cleared up all my confusions about HTML5, Going Offline helps me put the pieces of the service workers’ puzzle together.

People have been saying nice things on Twitter too…

Aaron Gustafson:

It’s a fantastic read and a simple primer for getting Service Workers up and running on your site.

Ethan Marcotte:

Of course, if you’re looking to take your website offline, you should read @adactio’s wonderful book

Lívia De Paula Labate:

Ok, I’m done reading @adactio’s Going Offline book and as my wife would say, it’s the bomb dot com.

If that all sounds good to you, get yourself a copy of Going Offline in paperback, or ebook (or both).

Detecting image requests in service workers

In Going Offline, I dive into the many different ways you can use a service worker to handle requests. You can filter by the URL, for example; treating requests for pages under /blog or /articles differently from other requests. Or you can filter by file type. That way, you can treat requests for, say, images very differently to requests for HTML pages.
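
The URL-based filtering might look something like this (a sketch; the paths are just examples):

addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  let url = new URL(request.url);
  if (url.pathname.startsWith('/blog') || url.pathname.startsWith('/articles')) {
    // Handle requests for pages under /blog or /articles here.
  } else {
    // Handle everything else here.
  }
});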

One of the ways to check what kind of request you’re dealing with is to see what’s in the accept header. Here’s how I show the test for HTML pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Handle your page requests here.
}

So, logically enough, I show the same technique for detecting image requests:

if (request.headers.get('Accept').includes('image')) {
    // Handle your image requests here.
}

That should catch any files that have image in the request’s accept header, like image/png or image/jpeg or image/svg+xml and so on.

But there’s a problem. Both Safari and Firefox now use a much broader accept header: */*

My if statement evaluates to false in those browsers. Sebastian Eberlein wrote about his workaround for this issue, which involves looking at file extensions instead:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/)) {
    // Handle your image requests here.
}

So consider this post a patch for chapter five of Going Offline (page 68 specifically). Wherever you see:

if (request.headers.get('Accept').includes('image'))

Swap it out for:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/))

And feel free to add any other image file extensions (like webp) in there too.