Tags: development

Amber

I really enjoyed teaching in Porto last week. It was like having a week-long series of CodeBar sessions.

Whenever I’m teaching at CodeBar, I like to be paired up with people who are just starting out. There’s something about explaining the web and HTML from first principles that I really like. And people often have lots and lots of questions that I enjoy answering (if I can). At CodeBar—and at The New Digital School—I found myself saying “Great question!” multiple times. The really great questions are the ones that I respond to with “I don’t know …let’s find out!”

CodeBar is always a very rewarding experience for me. It has given me the opportunity to try teaching. And having tried it, I can now safely say that I like it. It’s also a great chance to meet people from all walks of life. It gets me out of my bubble.

I can’t remember when I was first paired up with Amber at CodeBar. It must have been sometime last year. I do remember that she had lots of great questions—at some point I found myself explaining how hexadecimal colours work.

I was impressed with Amber’s eagerness to learn. I also liked that she was making her own website. I told her about Homebrew Website Club and she started coming along to that (along with other CodeBar people like Cassie and Alice).

I’ve mentioned to multiple CodeBar students that there’s pretty much an open-door policy at Clearleft when it comes to shadowing: feel free to come along and sit with a front-end developer while they’re working on client projects. A few people have taken up the offer and enjoyed observing myself or Charlotte at work. Amber was one of those people. Again, I was very impressed with her drive. She’s got a full-time job (with sometimes-crazy hours) but she’s so determined to get into the world of web design and development that she’s willing to spend her free time visiting Clearleft to soak up the atmosphere of a design studio.

We’ve decided to turn this into something more structured. Amber and I will get together for a couple of hours once a week. She’s given me a list of some of the areas she wants to explore, and I think it’s a fine-looking list:

  • I want to gather base, structural knowledge about the web and all related aspects. Things seem to float around in a big cloud at the moment.
  • I want to adhere to best practices.
  • I want to learn more about what direction I want to go in, find a niche.
  • I’d love the opportunity to chat with the brilliant people who work at Clearleft and gain a broad range of knowledge from them.

My plan right now is to take a two-track approach: one track about the theory, and another track about the practicalities. The practicalities will be HTML, CSS, JavaScript, and related technologies. The theory will be about understanding the history of the web and its strengths and weaknesses as a medium. And I want to make sure there’s plenty of UX, research, information architecture and content strategy covered too.

Seeing as we’ll only have a couple of hours every week, this won’t be quite like the masterclass I just finished up in Porto. Instead I imagine I’ll be laying some groundwork and then pointing to topics to research. I guess it’s a kind of homework. For example, after we talked today, I set Amber this little bit of research for the next time we meet: “What is the difference between the internet and the World Wide Web?”

I’m excited to see where this will lead. I find Amber’s drive and enthusiasm very inspiring. I also feel a certain weight of responsibility—I don’t want to enter into this lightly.

I’m not really sure what to call this though. Is it mentorship? Or is it coaching? Or training? All of the above?

Whatever it is, I’m looking forward to documenting the journey. Amber will be writing about it too. She is already demonstrating a way with words.

Teaching in Porto, day four

Day one covered HTML (amongst other things), day two covered CSS, and day three covered JavaScript. Each one of those days involved a certain amount of hands-on coding, with the students getting their hands dirty with angle brackets, curly braces, and semi-colons.

Day four was a deliberate step away from all that. No more laptops, just paper. Whereas the previous days had focused on collaboratively working on a single document, today I wanted everyone to work on a separate site.

The sites were generated randomly. I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

For a bit of fun, the first brainstorming exercise (run as a 6-up) was to come up with potential names for this service—4 minutes for 6 ideas. Then we went around the table, shared the ideas, got feedback, and settled on the names.

Now I asked everyone to come up with a one-sentence mission statement for their newly-named service. This was a good way of teasing out the most important verbs and nouns, which led nicely into the next task: answering the question “what is the core functionality?”

If that sounds familiar, it’s because it’s the first part of the three-step process I outlined in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

We did some URL design, figuring out what structures would make sense for straightforward GET requests, like:

  • /things
  • /things/ID

Then, once it was clear what the primary “thing” was (a car, a book, etc.), I asked them to write down all the pieces that might appear on such a page; one post-it note per item e.g. “title”, “description”, “img”, “rating”, etc.

The next step involved prioritisation. They took those post-it notes and put them on the wall, but they had to put them in a vertical line from top to bottom in decreasing order of importance. This can be a challenge, but it’s better to solve these problems now rather than later.

Okay. Now I asked them to “mark up” those vertical lists of post-it notes: writing HTML tag names by each one. By doing this before doing any visual design, it meant they were thinking about the meaning of the content first.

After that, we did a good ol’ fashioned classic 6-up sketching exercise, followed by critique (including a “designated dissenter” for each round). At this point, I was encouraging them to go crazy with ideas—they already had the core functionality figured out (with plain ol’ client/server requests and responses) so they could add all the bells and whistles they wanted on top of that.

We finished up with a discussion of some of those bells and whistles, and how they could be used to improve the user experience: Ajax, geolocation, service workers, notifications, background sync …the sky’s the limit.

It was a whirlwind tour for just one day but I think it helped emphasise the importance of thinking about the fundamentals before adding enhancements.

This marked the end of the structured masterclass lessons. Tomorrow I’m around to answer any miscellaneous questions (if I can) and chat to the students individually while they work on their term projects.

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service worker is supported across the board), I’m using both. I wouldn’t recommend that for most sites though—for most sites, use a service worker to enhance it, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.
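The install handler itself isn’t reproduced in this post, but a minimal sketch of that pattern might look something like this (the cache name and the file list here are illustrative placeholders, not the actual contents of the book’s service worker):

const staticCacheName = 'static-v1';
const staticAssets = [
  '/',
  '/css/styles.css',
  '/js/scripts.js'
  // ...plus every page and image in the book
];

// When the service worker is installed, cache every static asset up front
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open(staticCacheName)
    .then( cache => cache.addAll(staticAssets) )
  );
});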

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where the freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite of the order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. The code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })
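The third step in my order of preference, the offline fallbacks, isn’t shown in the snippets above. A sketch of that last-resort catch might look something like this (the offline page path and the placeholder image are illustrative assumptions, not the actual code):

// OFFLINE
// If the network request fails, fall back to offline responses
fetch(request)
.catch( () => {
    // If the request is for an image, show an offline placeholder
    if (request.headers.get('Accept').includes('image')) {
        return new Response('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 300"><rect width="400" height="300" fill="#ccc"/></svg>', {
            headers: {'Content-Type': 'image/svg+xml'}
        });
    }
    // If the request is for a page, show an offline message
    return caches.match('/offline.html');
});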

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

Vertical limit

When I was first styling Resilient Web Design, I made heavy use of vh units. The vertical spacing between elements—headings, paragraphs, images—was all proportional to the overall viewport height. It looked great!

Then I tested it on real devices.

Here’s the problem: when a page loads up in a mobile browser—like, say, Chrome on an Android device—the URL bar is at the top of the screen. The height of that piece of the browser interface isn’t taken into account for the viewport height. That makes sense: the viewport height is the amount of screen real estate available for the content. The content doesn’t extend into the URL bar, therefore the height of the URL bar shouldn’t be part of the viewport height.

But then if you start scrolling down, the URL bar scrolls away off the top of the screen. So now it’s behaving as though it is part of the content rather than part of the browser interface. At this point, the value of the viewport height changes: now it’s the previous value plus the height of the URL bar that was previously there but which has now disappeared.

I totally understand why the URL bar is squirrelled away once the user starts scrolling—it frees up some valuable vertical space. But because that necessarily means recalculating the viewport height, it effectively makes the vh unit in CSS very, very limited in scope.

In my initial implementation of Resilient Web Design, the one where I was styling almost everything with vh, the site was unusable. Every time you started scrolling, things would jump around. I had to go back to the drawing board and remove almost all instances of vh from the styles.

I’ve left it in for one use case and I think it’s the most common use of vh: making an element take up exactly the height of the viewport. The front page of the web book uses min-height: 100vh for the title.

Scrolling.

But as soon as you scroll down from there, that element changes height. The content below it suddenly moves.

Let’s say the overall height of the browser window is 600 pixels, of which 50 pixels are taken up by the URL bar. When the page loads, 100vh is 550 pixels. But as soon as you scroll down and the URL bar floats away, the value of 100vh becomes 600 pixels.

(This also causes problems if you’re using vertical media queries. If you choose the wrong vertical breakpoint, then the media query won’t kick in when the page loads but will kick in once the user starts scrolling …or vice-versa.)

There’s a mixed message here. On the one hand, the browser is declaring that the URL bar is part of its interface; that the space is off-limits for content. But then, once scrolling starts, that is invalidated. Now the URL bar is behaving as though it is part of the content scrolling off the top of the viewport.

The result of this messiness is that the vh unit is practically useless for real-world situations with real-world devices. It works great for desktop browsers if you’re grabbing the browser window and resizing, but that’s not exactly a common scenario for anyone other than web developers.

I’m sure there’s a way of solving it with JavaScript but that feels like using an atomic bomb to crack a walnut—the whole point of having this in CSS is that we don’t need to use JavaScript for something related to styling.
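For what it’s worth, the kind of JavaScript I mean would measure the window and hand the result over to CSS as a custom property. Here’s a rough sketch (the --vh property name is just an illustrative convention; in the stylesheet you’d reference it with something like min-height: var(--vh, 100vh)):

// Measure the viewport once, on load, and expose it to CSS as a custom property
const setViewportHeight = () => {
    document.documentElement.style.setProperty('--vh', window.innerHeight + 'px');
};
setViewportHeight();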

It’s such a shame. A piece of CSS that’s great in theory, and is really well supported, just falls apart where it matters most.

Update: There’s a two-year old bug report on this for Chrome, and it looks like it might actually get fixed in February.

Fractal ways

24 Ways is back! That’s how we web nerds know that the Christmas season is here. It kicked off this year with a most excellent bit of hardware hacking from Seb: Internet of Stranger Things.

The site is looking lovely as always. There’s also a component library to accompany it: Bits, the front-end component library for 24 ways. Nice work, courtesy of Paul. (I particularly like the comment component example.)

The component library is built with Fractal, the magnificent tool that Mark has open-sourced. We’ve been using it at Clearleft for a while now, but we haven’t had a chance to make any of the component libraries public, so it’s really great to be able to point to the 24 Ways example. The code is all on GitHub too.

There’s a really good buzz around Fractal right now. Lots of people in the design systems Slack channel are talking about it. There’s also a dedicated Fractal Slack channel for people getting into the nitty-gritty of using the tool.

If you’re currently wrestling with the challenges of putting a front-end component library together, be sure to give Fractal a whirl.

Between the braces

In a post called Side Effects in CSS that he wrote a while back, Philip Walton talks about different kinds of challenges in writing CSS:

There are two types of problems in CSS: cosmetic problems and architectural problems.

The cosmetic problems are solved by making something look the way you want it to. The architectural problems are trickier because they have more long-term effects—maintainability, modularity, encapsulation; all that tricky stuff. Philip goes on to say:

If I had to choose between hiring an amazing designer who could replicate even the most complicated visual challenges easily in code and someone who understood the nuances of predictable and maintainable CSS, I’d choose the latter in a heartbeat.

This resonates with something I noticed a while back while I was doing some code reviews. Most of the time when I’m analysing CSS and trying to figure out how “good” it is—and I know that’s very subjective—I’m concerned with what’s on the outside of the curly braces.

selector {
    property: value;
}

The stuff inside the curly braces—the properties and values—that’s where the cosmetic problems get solved. It’s also the stuff that you can look up; I certainly don’t try to store all possible CSS properties and values in my head. It’s also easy to evaluate: Does it make the thing look like you want it to look? Yes? Good. It works.

The stuff outside the curly braces—the selectors—that’s harder to judge. It needs to be evaluated with lots of “what ifs”: What if this selects something you didn’t intend to? What if the markup changes? What if someone else writes some CSS that negates this?

I find it fascinating that most of the innovation in CSS from the browser makers and standards bodies arrives in the form of new properties and values—flexbox, grid, shapes, viewport units, and so on. Meanwhile there’s a whole other world of problems to be solved outside the curly braces. There’s not much that the browser makers or standards bodies can do to help us there. I think that’s why most of the really interesting ideas and thoughts around CSS in recent years have focused on that challenge.

Less JavaScript

Every front-end developer at Clearleft went to FFConf last Friday: me, Mark, Graham, Charlotte, and Danielle. We weren’t about to pass up the opportunity to attend a world-class dev conference right here in our home base of Brighton.

The day was unsurprisingly excellent. All the speakers brought their A-game on a wide range of topics. Of course JavaScript was covered, but there was also plenty of mindfood on CSS, accessibility, progressive enhancement, dev tools, creative coding, and even emoji.

Normally FFConf would be a good opportunity to catch up with some Pauls from the Google devrel team, but because of an unfortunate scheduling clash this year, all the Pauls were at Chrome Dev Summit 2016 on the other side of the Atlantic.

I’ve been catching up on the videos from the event. There’s plenty of tech-related stuff: dev tools, web components, and plenty of talk about progressive web apps. But there was also a very, very heavy focus on performance. I don’t just mean performance at the shallow scale of file size and optimisation, but a genuine questioning of the impact of our developer workflows and tools.

In his talk on service workers (what else?), Jake makes the point that not everything needs to be a single page app, echoing Ada’s talk at FFConf.

He makes the point that if you really want fast rendering, nothing on the client side quite beats a server render.

They’ve written a lot of JavaScript to make this quite slow.

Unfortunately, all too often, I hear people say that a progressive web app must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There’s a lot of cargo-culting around single page apps.

Alex followed up his barnstorming talk from the Polymer Summit with some more uncomfortable truths about how mobile phones work.

Cell networks are basically kryptonite to the protocols and assumptions that the web was built on.

And JavaScript frameworks aren’t helping. Quite the opposite.

But make no mistake: if you’re using one of today’s more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this.

Today’s frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance.

Assumptions

Last year Benedict Evans wrote about the worldwide proliferation and growth of smartphones. Nolan referenced that post when he extrapolated the kind of experience people will be having:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

This is the same argument that Tom made in his presentation at Responsive Field Day. The main point is that network conditions are unreliable, and I absolutely agree that we need to be very, very mindful of that. But I’m not so sure about the other conditions either. They smell like assumptions:

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

It’s not necessarily true that all those new web users will be using a WebView browser like Chrome—there are millions of Opera Mini users, and I would expect that number to rise, given all the speed and cost benefits that proxy browsing brings.

I also don’t think that just because a device is a smartphone it necessarily means that it’s a pocket supercomputer. It might seem like a reasonable assumption to make, given the specs of even a low-end smartphone, but the specs don’t tell the whole story.

Alex gave a great presentation at the recent Polymer Summit. He dives deep into exactly how smartphones at the lower end of the market deal with websites.

I don’t normally enjoy listening to talk of hardware and specs, but Alex makes the topic very compelling by tying it directly to how we build websites. In short, we’re using waaaaay too much JavaScript. The message here is not “don’t use JavaScript” but rather “use JavaScript wisely.” Alas, many of the current crop of monolithic frameworks aren’t well suited to this.

Alex’s talk prompted Michael Scharnagl to take a look back at past assumptions and lessons learned on the web, from responsive design to progressive web apps.

We are consistently improving and we often have to realize that our assumptions are wrong.

This is particularly true when we’re making assumptions about how people will access the web.

It’s not enough to talk about the “next billion” in abstract, like an opportunity to reach teeming masses of people ripe for monetization. We need to understand their lives and their priorities with the sort of detail that can build empathy for other people living under vastly different circumstances.

That’s from an article Ethan linked to, noting:

Choice

Laurie Voss has written a thoughtful article called Web development has two flavors of graceful degradation in response to Nolan Lawson’s recent article. But I’m afraid I don’t agree with Laurie’s central premise:

…web app development and web site development are so different now that they probably shouldn’t be called the same thing anymore.

This is an idea I keep returning to, and each time I do, I find that it just isn’t that simple. There are very few web thangs that are purely interactive without any content, and there are also very few web thangs that are purely passive without any interaction. Instead, it’s a spectrum. Quite often, the position on that spectrum changes according to the needs of the user at any particular time—are Twitter and Flickr web sites while I’m viewing text and images, but then transmogrify into web apps the moment I want to add, update, or delete a piece of text or an image?

In any case, the more interesting question than “is something a web site or a web app?” is the question “why?” Why does it matter? In my experience, the answer to that question generally comes down to the kind of architectural approach that a developer will take.

That’s exactly what Laurie dives into in his post. For web apps, use one architectural approach—for web sites, use a different architectural approach. To summarise:

  • in a web app, front-load everything and rely on client-side JavaScript for all subsequent interaction,
  • in a web site, optimise for many page loads, and make sure you don’t rely on client-side JavaScript.

I’m oversimplifying here, but the general idea is:

  • build web apps with the single page app architecture,
  • build web sites with progressive enhancement.

That’s sensible advice, but I’m worried that it could lead to a tautological definition of what constitutes a web app:

  1. This is a web app so it’s built as a single page app.
  2. Why do you define it as a web app?
  3. Because it’s built as a single page app.

The underlying question of what makes something a web app is bypassed by the architectural considerations …but the architectural considerations should be based on that underlying question. Laurie says:

If you are developing an app, the user ideally loads the app exactly once — whether it’s over a slow connection or not.

And similarly:

But if you are developing a web site consisting of many discrete pages, the act of loading goes from a single event to the most common event.

I completely agree that the architectural approach of single page apps is better suited to some kinds of web thangs more than others. It’s a poor architectural choice for a content-based site like nasa.gov, for example. Progressive enhancement would make more sense there.

But I don’t think that the architectural choices need to be in opposition. It’s entirely possible to reconcile the two. It’s not always easy—and the further along that spectrum you are, the tougher it gets—but it’s doable. You can begin with progressive enhancement, and then build up to a single page app architecture for more capable browsers.

I think that’s going to get easier as frameworks adopt a more mixed approach. Almost all the major libraries are working on server-side rendering as a default. Ember is leading the way with FastBoot, and Angular Universal is following. Neither of them are doing it for reasons of progressive enhancement—they’re doing it for performance and SEO—but the upshot is that you can more easily build a web app that simultaneously uses progressive enhancement and a single-page app model.

I guess my point is that I don’t think we should get too locked into the idea of web apps and web sites requiring fundamentally different approaches, especially with the changes in the technologies we use to build them.

We’ve made the mistake in the past of framing problems as “either/or”, when in fact, the correct solution was “both!”:

  • you can either have a desktop site or a mobile site,
  • you can either have rich interactivity or accessibility,
  • you can either have a single page app or progressive enhancement.

We don’t have to choose. It might take more work, but we can have our web cake and eat it.

The false dichotomy that I’m most concerned about is the pernicious idea that offline functionality is somehow in opposition to progressive enhancement. Given the design of service workers, I find this proposition baffling.

This remark by Tom is the very definition of a false dichotomy:

People who say your site should work without JavaScript are actually hurting the people they think they’re helping.

He was also linking to Nolan’s article, which could indeed be read as saying that you should build for offline instead of building with progressive enhancement. But I don’t think that’s what Nolan is saying (at least, I sincerely hope not). I think that Nolan is saying that we should prioritise the offline scenario over scenarios where JavaScript fails or isn’t available. That’s a completely reasonable thing to say. But the idea that we should build for the offline scenario instead of scenarios where JavaScript fails is absurdly reductionist. We don’t have to choose!

But I can certainly understand how developers might come to believe that building a progressive web app is at odds with progressive enhancement. Having made a bunch of progressive web apps—Huffduffer, The Session, this site—I can testify that service workers work superbly as a layer on top of an existing site, but all the messaging around progressive web apps seems to be fixated on the idea of the app-shell model (a small tweak to the single page app model, where a little bit of interface is available on the initial page load instead of requiring JavaScript for absolutely everything). Again, it’s entirely possible to reconcile the app-shell approach with server rendering and progressive enhancement, but nobody seems to be talking about that. Instead, all of the examples and demos are built with an assumption about JavaScript availability.

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

Tom’s tweet included a screenshot of this part of Nolan’s article:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

Those seem like a reasonable set of assumptions. But even there, things aren’t so simple. Will people really be using “an evergreen browser and WebView”? Millions of people use proxy browsers like Opera Mini, which means you can’t guarantee JavaScript availability beyond the initial page load. UC Browser—which can also run in proxy mode—is now the second most popular mobile browser in the world.

That’s just one nit-picky example, but what I’m getting at here is that it really isn’t safe to make any assumptions. When we must make assumptions, let’s try to make them a last resort.

And just to be clear here: I’m not saying that, because we can’t make assumptions about devices or browsers, we can’t build rich interactive web apps that work offline. I’m saying that we can build rich interactive web apps that work offline and also work when JavaScript fails or isn’t supported.

You don’t have to choose between progressive enhancement and a single page app/progressive web app/app shell/other things with the word “app”.

Progressive enhancement is an architectural approach to building on the web. You don’t have to use it, but please try to remember that it is your choice to make. You can choose to build a web app using progressive enhancement or not—there is nothing inherent in the nature of the thing you’re building that precludes progressive enhancement.

Personally, I find progressive enhancement a sensible way to counteract any assumptions I might inadvertently make. Progressive enhancement increases the chances that the web site (or web app) I’m building is resilient to the kind of scenarios that I never would’ve predicted or anticipated.

That’s why I choose to use progressive enhancement …and build progressive web apps.

Someday

In the latest issue of Justin’s excellent Responsive Web Design weekly newsletter, he includes a segment called “The Snippet Show”:

This is what tells all our browsers on all our devices to set the viewport to be the same width of the current device, and to also set the initial scale to 1 (not scaled at all). This essentially allows us to have responsive design consistently.

<meta name="viewport" content="width=device-width, initial-scale=1">

The viewport value for the meta element was invented by Apple when the iPhone was released. Back then, it was a safe bet that most websites were wider than the iPhone’s 320 pixel wide display—most of them were 960 pixels wide …because reasons. So mobile Safari would automatically shrink those sites down to fit within the display. If you wanted to over-ride that behaviour, you had to use the meta viewport gubbins that they made up.

That was nine years ago. These days, if you’re building a responsive website, you still need to include that meta element.

That seems like a shame to me. I’m not suggesting that the default behaviour should switch to assuming a fluid layout, but maybe the browser could just figure it out. After all, the CSS will already be parsed by the time the HTML is rendering. Perhaps a quick test for the presence of a horizontal scrollbar could be used to trigger the shrinking behaviour. No scrollbar, no shrinking.

Maybe someday the assumption behind the current behaviour could be flipped—assume a website is responsive unless the author explicitly requests the shrinking behaviour. I’d like to think that could happen soon, but I suspect that a depressingly large number of sites are still fixed-width (I don’t even want to know—don’t tell me).

There are other browser default behaviours that might someday change. Right now, if I type example.com into a browser, it will first attempt to contact http://example.com rather than https://example.com. That means the example.com server has to do a redirect, costing the user valuable time.

You can mitigate this by putting your site on the HSTS preload list but wouldn’t it be nice if browsers first checked for HTTPS instead of HTTP? I don’t think that will happen anytime soon, but someday …someday.

The imitation game

Jason shared some thoughts on designing progressive web apps. One of the things he’s pondering is how much you should try to make your web-based offering look and feel like a native app.

This was prompted by an article by Owen Campbell-Moore over on Ev’s blog called Designing Great UIs for Progressive Web Apps. He begins with this advice:

Start by forgetting everything you know about conventional web design, and instead imagine you’re actually designing a native app.

This makes me squirm. I mean, I’m all for borrowing good ideas from other media—native apps, TV, print—but I don’t think that inspiration should mean imitation. For me, that always results in an interface that sits in a kind of uncanny valley of being almost—but not quite—like the thing it’s imitating.

With that out of the way, most of the recommendations in Owen’s article are sensible ideas about animation, input, and feedback. But then there’s recommendation number eight: Provide an easy way to share content:

PWAs are often shown in a context where the current URL isn’t easily accessible, so it is important to ensure the user can easily share what they’re currently looking at. Implement a share button that allows users to copy the URL to the clipboard, or share it with popular social networks.

See, when a developer has to implement a feature that the browser should be providing, that seems like a bad code smell to me. This is a problem that Opera is solving (and that Google says it is solving, while simultaneously penalising developers who expose the URL to end users).

Anyway, I think my squeamishness about all the advice to imitate native apps is because it feels like a cargo cult. There seems to be an inherent assumption that native is intrinsically “better” than the web, and that the only way that the web can “win” is to match native apps note for note. But that misses out on all the things that only the web can do—instant distribution, low-friction sharing, and the ability to link to any other resource on the web (and be linked to in turn). Turning our beautifully-networked nodes into standalone silos just because that’s the way that native apps have to work feels like the cure that kills the patient.

If anything, my advice for building a progressive web app would be the exact opposite of Owen’s: don’t forget everything you’ve learned about web design. In my opinion, the term “progressive web app” can be read in order of priority:

  1. Progressive—build in a layered way so that anyone can access your content, regardless of what device or browser they’re using, rewarding the more capable browsers with more features.
  2. Web—you’re building for the web. Don’t lose sight of that. URLs matter. Accessibility matters. Performance matters.
  3. App—sure, borrow what works from native apps if it makes sense for your situation.

Jason asks questions about how your progressive web app will behave when it’s added to the home screen. How much do you match the platform? How do you manage going chromeless? And the big one: what do users expect?

Will people expect an experience that maps to native conventions? Or will they be more accepting of deviation because they came to the app via the web and have already seen it before installing it?

These are good questions and I share Jason’s hunch:

My gut says that we can build great experiences without having to make it feel exactly like an iOS or Android app because people will have already experienced the Progressive Web App multiple times in the browser before they are asked to install it.

In all the messaging from Google about progressive web apps, there’s a real feeling that the ability to install to—and launch from—the home screen is a real game changer. I’m not so sure that we should be betting the farm on that feature (the offline possibilities opened up by service workers feel like more of a game-changer to me).

People have been gleefully passing around the statistic that the average number of native apps installed per month is zero. So how exactly will we measure the success of progressive web apps against native apps …when the average number of progressive web apps installed per month is zero?

I like Android’s add-to-home-screen algorithm (although it needs tweaking). It’s a really nice carrot to reward the best websites with. But let’s not get carried away. I think that most people are not going to click that “add to home screen” prompt. Let’s face it, we’ve trained people to ignore prompts like that. When someone is trying to find some information or complete a task, a prompt that pops up saying “sign up to our newsletter” or “download our native app” or “add to home screen” is a distraction to be dismissed. The fact that only the third example is initiated by the operating system, rather than the website, is irrelevant to the person using the website.

Getting the “add to home screen” prompt for https://huffduffer.com/ on Android Chrome.

My hunch is that the majority of people will still interact with your progressive web app via a regular web browser view. If, then, only a minority of people are going to experience your site launched from the home screen in a native-like way, I don’t think it makes sense to prioritise that use case.

The great thing about progressive web apps is that they are first and foremost websites. Literally everyone who interacts with your progressive web app is first going to do so the old-fashioned way, by following a link or typing in a URL. They may later add it to their home screen, but that’s just a bonus. I think it’s important to build progressive web apps accordingly—don’t pretend that it’s just like building a native app just because some people will be visiting via the home screen.

I’m worried that developers are going to think that progressive web apps are something that need to be built from scratch; that you have to start with a blank slate and build something new in a completely new way. Now, there are some good examples of these kinds of one-off progressive web apps—The Guardian’s RioRun is nicely done. But I don’t think that the majority of progressive web apps should fall into that category. There’s nothing to stop you taking an existing website and transforming it step-by-step into a progressive web app:

  1. Switch over to HTTPS if you aren’t already.
  2. Use a service worker, even if it’s just to provide a custom offline page and cache some static assets.
  3. Make a manifest file to point to an icon and specify some colours.

See? Not exactly a paradigm shift in how you approach building for the web …but those deceptively straightforward steps will really turbo-boost your site.
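To give a flavour of just how small that second step can be, here’s a sketch of a bare-bones service worker that does nothing more than cache an offline page and a couple of static assets (the cache name and file paths are illustrative):

const cacheName = 'offline-v1';

// Cache a custom offline page and a couple of static assets at install time
addEventListener('install', installEvent => {
    installEvent.waitUntil(
        caches.open(cacheName)
        .then( cache => cache.addAll(['/offline.html', '/css/styles.css']) )
    );
});

// If a page navigation fails, respond with the offline page instead
addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.mode === 'navigate') {
        fetchEvent.respondWith(
            fetch(request)
            .catch( () => caches.match('/offline.html') )
        );
    }
});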

I’m really excited about progressive web apps …but mostly for the “progressive” and “web” parts. Maybe I’ll start calling them progressive web sites. Or progressive web thangs.

Extensible web components

Adam Onishi has written up his thoughts on web components and progressive enhancement, following on from a discussion we were having on Slack. He shares a lot of the same frustrations as I do.

Two years ago, I said:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

I still feel that way. In theory, web components are very exciting. In practice, web components are very worrying. The worrying aspect comes from the treatment of backwards compatibility.

It all comes down to the way custom elements work. When you make up a custom element, it’s basically a span.

<fancy-select></fancy-select>

Then, using JavaScript with shadow DOM, templates, and the other specs that together make up the web components ecosystem, you turn that inert span-like element into something all-singing, all-dancing. That’s great if the browser supports those technologies, and the JavaScript executes successfully. But if either of those conditions aren’t met, what you’re left with is basically a span.

One of the proposed ways around this was to allow custom elements to extend existing elements (not just spans). The proposed syntax for this was an is attribute.

<select is="fancy-select">...</select>

Browser makers responded to this by saying “Nah, that’s too hard.”

To be honest, I had pretty much given up on the is functionality ever seeing the light of day, but Monica has rekindled my hope:

Still, I’m not holding my breath for this kind of declarative extensibility landing in browsers any time soon. Instead, a JavaScript-based way of extending existing elements is currently the only way of piggybacking on all the accessible behavioural goodies you get with native elements.

class FancySelect extends HTMLSelectElement

But this imperative approach fails completely if custom elements aren’t supported, or if the JavaScript fails to execute. Now you’re back to having spans.
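For reference, registering that kind of customised built-in element looks something like this sketch (the element name and the enhancement here are made up for illustration):

// Register a customised built-in element that extends the native select
class FancySelect extends HTMLSelectElement {
    connectedCallback() {
        // Enhance the native select: extra behaviour, styling hooks, etc.
        this.classList.add('is-fancy');
    }
}
customElements.define('fancy-select', FancySelect, { extends: 'select' });

// Used in markup as: <select is="fancy-select">...</select>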

The presentation on web components at the Progressive Web Apps Dev Summit referred to this JavaScript-based extensibility as “progressively enhancing what’s already available”, which is a bit of a stretch, given how completely it falls apart in older browsers. It was kind of a weird talk, to be honest. After fifteen minutes of talking about creating elements entirely from scratch, there was a minute or two devoted to the is attribute and extending existing elements …before carrying on as though those two minutes never happened.

But even without any means of extending existing elements, it should still be possible to define custom elements that have some kind of fallback in non-supporting browsers:

<fancy-select>
 <select>...</select>
</fancy-select>

In that situation, you at least get a regular ol’ select element in older browsers (or in modern browsers before the JavaScript kicks in and uplifts the custom element).
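Here’s a sketch of what that JavaScript uplift might look like (the element name and the enhancement are just for illustration):

// Upgrade the wrapper element, using the native select inside it as the fallback
class FancySelectWrapper extends HTMLElement {
    connectedCallback() {
        const nativeSelect = this.querySelector('select');
        if (!nativeSelect) {
            return;
        }
        // Build the fancy interface here, keeping the native select as the source of truth
        nativeSelect.addEventListener('change', () => {
            this.setAttribute('data-value', nativeSelect.value);
        });
    }
}
if ('customElements' in window) {
    customElements.define('fancy-select', FancySelectWrapper);
}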

Adam has a great example of this in his post:

I’ve been thinking of a gallery component lately, where you’d have a custom element, say <o-gallery> for want of a better example, and simply populate it with images you want to display, with custom elements and shadow DOM you can add all the rest, controls/layout etc. Markup would be something like:

<o-gallery>
 <img src="">
 <img src="">
 <img src="">
</o-gallery>

If none of the extra stuff loads, what do we get? Well you get 3 images on the page. You still get the content, but just none of the fancy interactivity.

Yes! This, in my opinion, is how we should be approaching the design of web components. This is what gets me excited about web components.

Then I look at pretty much all the examples of web components out there and my nervousness kicks in. Hardly any of them spare a thought for backwards-compatibility. Take a look, for example, at the entire contents of the body element for the Polymer Shop demo site:

<shop-app unresolved="">SHOP</shop-app>

This seems really odd to me, because I don’t think it’s a good way to “sell” a technology.

Compare service workers to web components.

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Service workers work well and fail well. If a browser supports service workers, the user gets all the benefits. If a browser doesn’t support service workers, the user gets the same experience they would have always had.

Web components (will) work well, but fail badly. If a browser supports web components, the user gets the experience that the developer has crafted using these new technologies. If a browser doesn’t support web components, the user gets …probably nothing. It depends on how the web components have been designed.

It’s so much easier to get excited about implementing service workers. You’ve literally got nothing to lose and everything to gain. That’s not the case with web components. Or at least not with the way they are currently being sold.

See, this is why I think it’s so important to put some effort into designing web components that have some kind of fallback. Those web components will work well and fail well.

Look at the way new elements are designed for HTML. Think of complex additions like canvas, audio, video, and picture. Each one has been designed with backwards-compatibility in mind—there’s always a way to provide fallback content.

Web components give us developers the same power that, up until now, only belonged to browser makers. Web components also give us developers the same responsibilities as browser makers. We should take that responsibility seriously.

Web components are supposed to be the poster child for The Extensible Web Manifesto. I’m all for an extensible web. But the way that web components are currently being built looks more like an endorsement of The Replaceable Web Manifesto. I’m not okay with a replaceable web.

Here’s hoping that my concerns won’t be dismissed as “piffle and tosh” again by the very people who should be thinking about these issues.

Class teacher

ES6 introduced a whole bunch of new features to JavaScript. One of those features is the class keyword. This introduction has been accompanied by a fair amount of concern and criticism.

Here’s the issue: classes in JavaScript aren’t quite the same as classes in other programming languages. In fact, technically, JavaScript doesn’t really have classes at all. But some say that technically isn’t important. If it looks like a duck, and quacks like a duck, shouldn’t we call it a duck even if technically it’s somewhat similar—but not quite the same—species of waterfowl?

The argument for doing this is that classes are so familiar from other programming languages, that having some way of using classes in JavaScript—even if it isn’t technically the same as in other languages—brings a lot of benefit for people moving over to JavaScript from other programming languages.

But that comes with a side-effect. Anyone learning about classes in JavaScript will basically be told “here’s how classes work …but don’t look too closely.”

Now if you believe that outcomes matter more than understanding, then this is a perfectly acceptable trade-off. After all, we use computers every day without needing to understand the inner workings of every single piece of code under the hood.

It doesn’t sit well with me, though. I think that understanding how something works is important (in most cases). That’s why I favour learning underlying technologies first—HTML, CSS, JavaScript—before reaching for abstractions like frameworks and libraries. If you understand the way things work first, then your choice of framework, library, or any other abstraction is an informed choice.

The most common way that people refer to the new class syntax in JavaScript is to describe it as syntactic sugar. In other words, it doesn’t fundamentally introduce anything new under the hood, but it gives you a shorter, cleaner, nicer way of dealing with objects. It’s an abstraction. But because it’s an abstraction taken from other programming languages that work differently to JavaScript, it’s a bit of a fudge. It’s a little white lie. The class keyword in JavaScript will work just fine as long as you don’t try to understand it.
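To make the comparison concrete, here’s the same (made-up) object written both ways:

// With the class keyword...
class Duck {
    constructor(name) {
        this.name = name;
    }
    quack() {
        return this.name + ' says quack';
    }
}

// ...and with the prototypal model that sits underneath
function ProtoDuck(name) {
    this.name = name;
}
ProtoDuck.prototype.quack = function () {
    return this.name + ' says quack';
};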

My personal opinion is that this isn’t healthy.

I’ve come across two fantastic orators who cemented this view in my mind. At Render Conf in Oxford earlier this year, I had the great pleasure of hearing Ashley Williams talk about the challenges of teaching JavaScript. Skip to the 15 minute mark to hear her introduce the issues thrown up by classes in JavaScript.

More recently, the mighty Kyle Simpson was on an episode of the JavaScript Jabber podcast. Skip to the 17 minute mark to hear him talk about classes in JavaScript.

(Full disclosure: Kyle also said some very kind things about some of my blog posts at the end of that episode, but you can switch it off before it gets to that bit.)

Both Ashley and Kyle bring a much-needed perspective to the discussion of language design. That perspective is the perspective of a teacher.

In his essay on W3C’s design principles, Bert Bos lists learnability among the fundamental driving forces (closely tied to readability). Learnability and teachability are two sides of the same coin, and I find it valuable to examine any language decisions through that lens. With that in mind, introducing a new feature into a language that comes with such low teachability value as to warrant a teacher actively telling a student not to learn how things really work …well, that just doesn’t seem right.

Backdoor Service Workers

When I was moderating that panel at the Progressive Web App Dev Summit, I brought up this point about twenty minutes in:

Alex, in your talk yesterday you were showing the AMP demo there with the Washington Post. You click through and there’s the Washington Post AMP thing, and it was able to install the Service Worker with that custom element. But I was looking at the URL bar …and that wasn’t the Washington Post. It was on the CDN from AMP. So I talked to Paul Backaus from the AMP team, and he explained that it’s an iframe, and using an iframe you can install a Service Worker from somewhere else.

Alex and Emily explained that, duh, that’s the way iframes work. It makes sense when you think about it—an iframe is pretty much the same as any other browser window. Still, it feels like it might violate the principle of least surprise.

Let’s say you followed my tongue-in-cheek advice to build a progressive web app store. Your homepage might have the latest 10 or 20 progressive web apps. You could also include 10 or 20 iframes so that those sites are “pre-installed” for the person viewing your page.

Enough theory. Here’s a practical example…

Suppose you’ve never visited the website for my book, html5forwebdesigners.com (if you have visited it, and you want to play along with this experiment, go to your browser settings and delete anything stored by that domain).

You happen to visit my website adactio.com. There’s a little blurb buried down on the home page that says “Read my book” with a link through to html5forwebdesigners.com. I’ve added this markup after the link:

<iframe src="https://html5forwebdesigners.com/iframe.html" style="width: 0; height: 0; border: 0">
</iframe>

That hidden iframe pulls in an empty page with a script element:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>HTML5 For Web Designers</title>
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>
</head>
</html>

That registers the Service Worker on my book’s site which then proceeds to install all the assets it needs to render the entire site offline.

There you have it. Without ever visiting the domain html5forwebdesigners.com, the site has been pre-loaded onto your device because you visited the domain adactio.com.

A few caveats:

  1. I had to relax the Content Security Policy for html5forwebdesigners.com to allow the iframe to be embedded on adactio.com:

    Header always set Access-Control-Allow-Origin "https://adactio.com"
    
  2. If your browser has “Block third-party cookies and site data” selected in its settings, the iframe-invoked Service Worker won’t install:

    Uncaught (in promise) DOMException: Failed to register a ServiceWorker: The user denied permission to use Service Worker.
    

The example I’ve put together here is relatively harmless. But it’s possible to imagine more extreme scenarios. Imagine there’s a publishing company that has 50 websites for 50 different publications. Each one of them could have an empty page waiting to be embedded via iframe from the other 49 sites. You only need to visit one page on one of those 50 sites to have 50 Service Workers spun up and caching assets in the background.

There’s the potential here for a tragedy of the commons. I hope we’ll be sensible about how we use this power.

Just don’t tell the advertising industry about this.

Unlabelled search fields

Adam Silver is writing a book on forms—you may be familiar with his previous book on maintainable CSS. In a recent article (that for some reason isn’t on his blog), he looks at markup patterns for search forms and advocates that we should always use a label. I agree. But we keep getting handed designs that show unlabelled search forms. And no, a placeholder is not a label.

I had a discussion with Mark about this the other day. The form he was marking up didn’t have a label, but it did have a button with some text that would work as a label:

<input type="search" placeholder="…">
<button type="submit">
Search
</button>

He was wondering if there was a way of using the button’s text as the label. I think there is. Using aria-labelledby like this, the button’s text should be read out before the input field:

<input aria-labelledby="searchtext" type="search" placeholder="…">
<button type="submit" id="searchtext">
Search
</button>

Notice that I say “think” and “should.” It’s one thing to figure out a theoretical solution, but only testing will show whether it actually works.

The W3C’s WAI tutorial on labelling content gives an example that uses aria-label instead:

<input type="text" name="search" aria-label="Search">
<button type="submit">Search</button>

It seems a bit of a shame to me that the label text is duplicated in the button and in the aria-label attribute (and being squirrelled away in an attribute, it runs the risk of metacrap rot). But they know what they’re talking about so there may well be very good reasons to prefer duplicating the value with aria-label rather than pointing to the value with aria-labelledby.

I thought it would be interesting to see how other sites are approaching this pattern—unlabelled search forms are all too common. All the markup examples here have been simplified a bit, removing class attributes and the like…

The BBC’s search form does actually have a label:

<label for="orb-search-q">
Search the BBC
</label>
<input id="orb-search-q" placeholder="Search" type="text">
<button>Search the BBC</button>

But that label is then hidden using CSS:

position: absolute;
height: 1px;
width: 1px;
overflow: hidden;
clip: rect(1px, 1px, 1px, 1px);

That CSS—as pioneered by Snook—ensures that the label is visually hidden but remains accessible to assistive technology. Using something like display: none would hide the label for everyone.

Medium wraps the input (and icon) in a label and then gives the label a title attribute. Like aria-label, a title attribute should be read out by screen readers, but it has the added advantage of also being visible as a tooltip on hover:

<label title="Search Medium">
  <span class="svgIcon"><svg></svg></span>
  <input type="search">
</label>

This is also what Google does on what must be the most visited search form on the web. But the W3C’s WAI tutorial warns against using the title attribute like this:

This approach is generally less reliable and not recommended because some screen readers and assistive technologies do not interpret the title attribute as a replacement for the label element, possibly because the title attribute is often used to provide non-essential information.

Twitter follows the BBC’s pattern of having a label but visually hiding it. They also have some descriptive text for the icon, and that text gets visually hidden too:

<label class="visuallyhidden" for="search-query">Search query</label>
<input id="search-query" placeholder="Search Twitter" type="text">
<span class="search-icon>
  <button type="submit" class="Icon" tabindex="-1">
    <span class="visuallyhidden">Search Twitter</span>
  </button>
</span>

Here’s their CSS for hiding those bits of text—it’s very similar to the BBC’s:

.visuallyhidden {
  border: 0;
  clip: rect(0 0 0 0);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  width: 1px;
}

That’s exactly the CSS recommended in the W3C’s WAI tutorial.

Flickr have gone with the aria-label pattern as recommended in that W3C WAI tutorial:

<input placeholder="Photos, people, or groups" aria-label="Search" type="text">
<input type="submit" value="Search">

Interestingly, neither Twitter nor Flickr is using type="search" on the input elements. I’m guessing this is because of frustrations with trying to undo the default styles that some browsers apply to input type="search" fields. Seems a shame though.
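
For what it’s worth, the kind of reset that’s usually needed is only a few lines of CSS. This is a general sketch, not anything taken from Twitter’s or Flickr’s stylesheets:

/* a sketch of undoing the browser's default styling of search fields */
input[type="search"] {
  -webkit-appearance: none;
  appearance: none;
  border-radius: 0;
}
input[type="search"]::-webkit-search-decoration,
input[type="search"]::-webkit-search-cancel-button {
  -webkit-appearance: none;
}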

Instagram also doesn’t use type="search" and makes no attempt to expose any kind of accessible label:

<input type="text" placeholder="Search">
<span class="coreSpriteSearchIcon"></span>

Same with Tumblr:

<input tabindex="1" type="text" name="q" id="search_query" placeholder="Search Tumblr" autocomplete="off" required="required">

…although the search form itself does have role="search" applied to it. Perhaps that helps to mitigate the lack of a clear label?
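
For reference, a search landmark looks something like this. It’s a generic sketch rather than Tumblr’s actual markup:

<!-- a generic sketch of a search landmark, not Tumblr's markup -->
<form role="search" action="/search">
  <label for="q">Search</label>
  <input type="search" id="q" name="q">
  <button type="submit">Search</button>
</form>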

After that whistle-stop tour of a few of the web’s unlabelled search forms, it looks like the options are:

  • a visually-hidden label element,
  • an aria-label attribute,
  • a title attribute, or
  • some associated text referenced with aria-labelledby.

But that last one needs some testing.

Update: Emil did some testing. Looks like all screen-reader/browser combinations will read the associated text.

On the side

My role at Clearleft is something along the lines of being a technical director. I’m not entirely sure what that means, but it seems to be a way of being involved in front-end development, without necessarily writing much actual code. That’s probably for the best. My colleagues Mark, Graham, and Charlotte are far more efficient at doing that. In return, I do my best to support them and make sure that they’ve got whatever they need (in terms of resources, time, and space) to get on with their work.

I’m continuously impressed not only by the quality of their output on client projects, but also by their output on the side.

Mark is working on a project called Fractal. It’s a tool for creating component libraries, something he has written about before. The next steps involve getting the code to version 1.0 and completing the documentation. Then you’ll be hearing a lot more about this. The tricky thing right now is fitting it in around client work. It’s going to be very exciting though—everyone who has been beta-testing Fractal has had very kind words to say. It’s quite an impressive piece of work, especially considering that it’s the work of one person.

Graham is continuing with his crazily ambitious project to recreate the classic NES game Legend Of Zelda using web technology. His documentation of his process is practically a book:

  1. Introduction,
  2. The Game Loop,
  3. Drawing to the Screen,
  4. Handling User Input,
  5. Scaling the Canvas,
  6. Animation — Part 1,
  7. Levels & Collision — part 1, and most recently
  8. Levels — part 2.

It’s simultaneously a project that involves the past—retro gaming—and the future—playing with the latest additions to JavaScript in modern browsers (something that feeds directly back into client work).

Charlotte has been speaking up a storm. She spoke at the Up Front conference in Manchester about component libraries:

The process of building a pattern library or any kind of modular design system requires a different approach to delivering a set of finished pages. Even when the final deliverable is a pattern library, we often still have to design pages for approval. When everyone is so used to working with pages, it can be difficult to adopt a new way of thinking—particularly for those who are not designers and developers.

This talk will look at how we can help everyone in the team adopt pattern thinking. This includes anyone with a decision to make—not just designers and developers. Everyone in the team can start building a shared vocabulary and attempt to make the challenge of naming things a little easier.

Then she spoke at Dot York about her learning process:

As a web developer, I’m learning all the time. I need to know how to make my code work, but more importantly, I want to understand why my code works. I’ve learnt most of what I know from people sharing what they know and I love that I can now do the same. In my talk I want to share my highlights and frustrations of continuous learning, my experiences of working with a mentor and fitting it into my first year at Clearleft.

She’ll also be speaking at Beyond Tellerrand in Berlin later this year. Oh, and she’s also now a co-organiser of the brilliant Codebar events that happen every Tuesday here in Brighton.

Altogether that’s an impressive amount of output from Clearleft’s developers. And all of that doesn’t include the client work that Mark, Graham, and Charlotte are doing. They inspire me!

Sticky headers

I made a little tweak to The Session today. The navigation bar across the top is “sticky” now—it doesn’t scroll with the rest of the content.

I made sure that the stickiness only kicks in if the screen is both wide and tall enough to warrant it. Vertical media queries are your friend!

But it’s not enough to just put some position: fixed CSS inside a media query. There are some knock-on effects that I needed to mitigate.
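
The basic idea is something along these lines. The selector and breakpoint values here are invented for illustration rather than copied from The Session:

/* a sketch: only make the header sticky on big-enough screens */
@media all and (min-width: 40em) and (min-height: 35em) {
  .site-navigation {
    position: fixed;
    top: 0;
    left: 0;
    width: 100%;
  }
  /* stop content from disappearing underneath the fixed header */
  body {
    padding-top: 3em;
  }
}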

I use the space bar to paginate through long pages. It drives me nuts when sites with sticky headers don’t accommodate this. I made use of Tim Murtaugh’s sticky pagination fixer. It makes sure that page-jumping with the keyboard (using the space bar or page down) still works. I remember when I linked to this script two years ago, thinking “I bet this will come in handy one day.” Past me was right!

The other “gotcha!” with having a sticky header is making sure that in-page anchors still work. Nicolas Gallagher covers the options for this in a post called Jump links and viewport positioning. Here’s the CSS I ended up using:

:target:before {
    content: '';
    display: block;
    height: 3em;
    margin: -3em 0 0;
}

I also needed to check any of my existing JavaScript to see if I was using scrollTo anywhere, and adjust the calculations to account for the newly-sticky header.
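
The gist of that adjustment is to subtract the header’s height from the destination, something like this sketch (the selectors here are hypothetical):

// a sketch of the adjustment; the selectors here are hypothetical
var header = document.querySelector('.site-navigation');
var destination = document.querySelector('#comments');
if (header && destination) {
  var headerHeight = header.getBoundingClientRect().height;
  window.scrollTo(0, destination.offsetTop - headerHeight);
}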

Anyway, just a few things to consider if you’re going to make a navigational element “sticky”:

  1. Use min-height in your media query,
  2. Take care of keyboard-initiated page scrolling,
  3. Adjust the positioning of in-page links.

The Progressive Web App Dev Summit

I was in Amsterdam again at the start of last week for the Progressive Web App Dev Summit, organised by Google. Most of the talks were given by Google employees, but not all—this wasn’t just a European version of Google I/O. Representatives from Opera, Mozilla, Samsung, and Microsoft were also there, and there were quite a few case studies from independent companies. That was very gratifying to see.

Almost all the talks were related to progressive web apps. I say, “almost all” because there were occasional outliers. There was a talk on web components, which don’t have anything directly to do with progressive web apps (and I hope there won’t be any attempts to suggest otherwise), and another on rendering performance that had good advice for anyone building any kind of website. Most of the talks were about the building blocks of progressive web apps: HTTPS, Service Workers, push notifications, and all that jazz.

I was very pleased to see that there was a move away from suggesting that single-page apps with the app-shell architecture were the only way of building progressive web apps.

There were lots of great examples of progressively enhancing existing sites into progressive web apps. Jeff Posnick’s talk was a step-by-step walkthrough of doing exactly that. Reading through the agenda, I was really happy to see this message repeated again and again:

In this session we’ll take an online-only site and turn it into a fully network-resilient, offline-first installable progressive web app. We’ll also break out of the app shell and look at approaches that better suit traditional server-driven sites.

Progressive Web Apps should work everywhere for every user. But what happens when the technology and APIs are not available in your users’ browsers? In this talk we will show you how you can think about and build sites that work everywhere.

Progressive Web Apps should load fast, work great offline, and progressively enhance to a better experience in modern browsers.

How do you put the “progressive” into your current web app?

You can (and should!) build for the latest and greatest browsers, but through a collection of fallbacks and progressive enhancements you can bring a lot of tomorrow’s web to yesterday’s browsers.

I think this is a really smart move. It’s a lot easier to sell people on incremental changes than it is to convince them to rip everything out and start from scratch (another reason why I’m dubious about any association between web components and progressive web apps—but I’ll save that for another post).

The other angle that I really liked was the emphasis on emerging markets, not just wealthy westerners. Tal Oppenheimer’s talk Building for Billions was superb, and Alex kicked the whole thing off with some great facts and figures on mobile usage.

In my mind, these two threads are very much related. Progressive enhancement allows us to have our progressive web app cake and eat it too: we can make websites that can be accessed on devices with limited storage and slow networks, while at the same time ensuring those same sites take advantage of all the newest features in the latest and greatest browsers. I talked to a lot of Google devs about ways to measure the quality of a progressive web app, and I’m coming to the conclusion that a truly high-quality site is one that can still be accessed by a proxy browser like Opera Mini, while providing a turbo-charged experience in the latest version of Chrome. If you think that sounds naive or unrealistic, then I think you might want to dive deeper into all the technologies that make progressive web apps so powerful—responsive design, Service Workers, a manifest file, HTTPS, push notifications; all of those features can and should be used in a layered fashion.

Speaking of Opera, Andreas kind of stole the show, demoing the latest interface experiments in Opera Mobile.

That ambient badging that Alex was talking about? Opera is doing it. The importance of being able to access URLs that I’ve been ranting about? Opera is doing it.

Then we had the idea to somehow connect it to the “pull-to-refresh” spinner, as a secondary gesture to the left or right.

Nice! I’m looking forward to seeing what other browsers come up with. It’s genuinely exciting to see all these different browser makers in complete agreement on which standards they want to support, while at the same time differentiating their products by competing on user experience. Microsoft recently announced that progressive web apps will be indexed in their app store just like native apps—a really interesting move.

The Progressive Web App Dev Summit wrapped up with a closing panel that I had the honour of hosting. I thought it was very brave of Paul to ask me to host this, considering my strident criticism of Google’s missteps.

Initially there were going to be six people on the panel. Then it became eight. Then I blinked and it suddenly became twelve. Less of a panel, more of a jury. Half the panelists were from Google and the other half were from Opera, Microsoft, Mozilla, and Samsung. Some of those representatives were a bit too media-trained for my liking: Ali from Microsoft tried to just give a spiel, and Alex Komoroske from Google wouldn’t give me a straight answer about whether he wants Android Instant Apps to succeed—Jake was a bit more honest. I should have channelled my inner Paxman a bit more.

Needless to say, nobody from Apple was at the event. No surprise there. They have, however, promised to come along to the next one. There won’t be an Apple representative on stage, obviously—that would be asking too much, wouldn’t it? But at least it looks like they’re finally making an effort to engage with the wider developer community.

All in all, the Progressive Web App Dev Summit was good fun. I found the event quite inspiring, although the sausage festiness of the attendees was depressing. It would be good if the marketing for these events reached a wider audience—I met a lot of developers who only found out about it a week or two before the event.

I really hope that people will come away with the message that they can get started with progressive web apps right now without having to re-architect their whole site. Right now the barrier to entry is having your site running on HTTPS. Once you’ve got that up and running, it’s pretty much a no-brainer to add a manifest file and a basic Service Worker—to boost performance if nothing else. From there, you’re in a great position to incrementally add more and more features—an offline-first approach with your Service Worker, perhaps? Or maybe start dabbling in push notifications. The great thing about all of these technologies (with the glaring exception of web components in their current state) is that you don’t need to bet the farm on any of them. Try them out. Use them as enhancements. You’ve literally got nothing to lose …and your users have everything to gain.

Thank you, jQuery

I turned Huffduffer into a progressive web app recently. It was already running on HTTPS so I didn’t have much to do. One manifest file and one basic Service Worker did the trick.

Getting the 'add to home screen' prompt for https://huffduffer.com/ on Android Chrome.
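
If you’ve never written a manifest file, it really is just a small amount of JSON. Something like this sketch would do; the values here are illustrative rather than Huffduffer’s actual manifest:

{
  "name": "Huffduffer",
  "short_name": "Huffduffer",
  "start_url": "/",
  "display": "browser",
  "background_color": "#ffffff",
  "theme_color": "#ffffff",
  "icons": [
    {
      "src": "/images/icon512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}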

I also did a bit of spring cleaning, refactoring some CSS. The site dates to 2008 so there’s plenty in there that I would do very differently today. Still, considering the age of the code, I wasn’t cursing my past self too much.

After that, I decided to refactor the JavaScript too. There, I had a clear goal: could I remove the dependency on jQuery?

It turned out to be pretty straightforward. I was able to bring my total JavaScript file size down to 3K (gzipped). Pretty much everything I was doing in jQuery could be just as easily accomplished with DOM methods like addEventListener and querySelectorAll, and objects like XMLHttpRequest and classList.
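
To give a flavour of the kind of swap involved, here’s a generic illustration rather than Huffduffer’s actual code:

// a generic illustration of the refactoring, not Huffduffer's actual code

// with jQuery:
// $('.fave').on('click', function () {
//   $(this).toggleClass('active');
// });

// with plain DOM methods:
var buttons = document.querySelectorAll('.fave');
Array.prototype.forEach.call(buttons, function (button) {
  button.addEventListener('click', function () {
    button.classList.toggle('active');
  });
});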

Of course, the reason why half of those handy helpers exist is because of jQuery. Certainly in the case of querySelector and querySelectorAll, jQuery blazed a trail for browsers and standards bodies to pave. In some cases, like animation, the jQuery-led solutions ended up in CSS instead of JavaScript, but the story was the same: developers used the heck out of jQuery and browser makers paid attention to that. This is something that Jack spoke about at Render Conf a little while back.

Brian once said of PhoneGap that its ultimate purpose is to cease to exist. I think of jQuery in a similar way.

jQuery turned ten years old this year, and jQuery version 3.0 was just released. Congratulations, jQuery! You have served the web well.

A wager on the web

Jason has written a great post about progressive web apps. It’s also a post about whether fears of the death of the web are justified.

Lately, I vacillate on whether the web is endangered or poised for a massive growth due to the web’s new capabilities. Frankly, I think there are indicators both ways.

So he applies Pascal’s wager. The hypothesis is that the web is under threat and that progressive web apps are a way of fighting that threat.

  • If the hypothesis is incorrect and we don’t build progressive web apps, things continue as they are on the web (which is not great for users—they have to continue to put up with fragile, frustratingly slow sites).
  • If the hypothesis is incorrect and we do build progressive web apps, users get better websites.
  • If the hypothesis is correct and we do build progressive web apps, users get better websites and we save the web.
  • If the hypothesis is correct and we don’t build progressive web apps, the web ends up pining for the fjords.

Whether you see the web as threatened or see Chicken Little in people’s fears and whether you like progressive web apps or feel it is a stupid Google marketing thing, we can all agree that putting energy into improving the experience for the people using our sites is always a good thing.

Jason is absolutely correct. There are literally no downsides to us creating progressive web apps. Everybody wins.

But that isn’t the question that people have been tackling lately. None of the (excellent) recent blog posts on the subject disagree with the conclusion that building progressive web apps as originally defined would be a great move forward for the web.

The real question that comes out of those posts is whether it’s good or bad for the future of progressive web apps—and by extension, the web—to build stop-gap solutions that use some progressive web app technologies (Service Workers, for example) while failing to be progressive in other ways (only working on mobile devices, for example).

In this case, there are two competing hypotheses:

  1. In the short term, it’s okay to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because over time they’ll get improved and we’ll end up with proper progressive web apps in the long term.
  2. In the short term, we should build proper progressive web apps, and it’s a really bad idea to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because that encourages more people to build sub-par websites and progressive web apps become synonymous with door-slamming single-page apps in the long term.

The second hypothesis sounds pessimistic, and the first sounds optimistic. But the people arguing for the first hypothesis aren’t coming from a position of optimism. Take Christian’s post, for example, which I fundamentally disagree with:

End users deserve to have an amazing, form-factor specific experience. Let’s build those.

I think end users deserve to have an amazing experience regardless of the form-factor of their devices. Christian’s viewpoint—like Alex’s tweetstorm—is rooted in the hypothesis that the web is under threat and in danger. The conclusion that comes out of that—building mobile-only JavaScript-reliant progressive web apps is okay—is a conclusion reached through fear.

Never make any decision based on fear.