A workshop on evaluating technology

After hacking away at Indie Web Camp Düsseldorf, I stuck around for Beyond Tellerrand. I ended up giving a talk, stepping in for Ellen. I was a poor substitute, but I hope I entertained the lovely audience for 45 minutes.

After Beyond Tellerrand, I got on a train to Nuremberg …along with a dozen of my peers who were also at the event.

All aboard the Indie Web Train from Düsseldorf to Nürnberg.

I arrived right in the middle of Web Week Nürnberg. Among the many events going on was a workshop that Joschi arranged for me to run called Evaluating Technology. The workshop version of my Beyond Tellerrand talk, basically.

This was an evolution of a workshop I ran a while back. I have to admit, I was a bit nervous going into this. I had no tangible material prepared; no slides, no handouts, nothing. Instead the workshop is a collaborative affair. In order for it to work, the attendees needed to jump in and co-create it with me. Luckily for me, I had a fantastic and enthusiastic group of people at my workshop.

Evaluating Technology

We began with a complete braindump. “Name some tools and technologies,” I said. “Just shout ‘em out.” Shout ‘em out, they did. I struggled to keep up just writing down everything they said. This was great!

Evaluating Technology

The next step was supposed to be dot-voting on which technologies to cover, but there were so many of them, we introduced an intermediate step: grouping the technologies together.

Evaluating Technology

Once the technologies were grouped into categories like build tools, browser APIs, methodologies etc., we voted on which categories to cover, only then diving deeper into specific technologies.

I proposed a number of questions to ask of each technology we covered. First of all, who benefits from the technology? Is it a tool for designers and developers, or is it a tool for the end user? Build tools, task runners, version control systems, text editors, transpilers, and pattern libraries all fall into the first category—they make life easier for the people making websites. Browser features generally fall into the second category—they improve the experience for the end user.

Looking at user-facing technologies, we asked: how well do they fail? In other words, can you add this technology as an extra layer of enhancement on top of what you’re building or do you have to make it a foundational layer that’s potentially a single point of failure?

For both classes of technologies, we asked the question: what are the assumptions? What fundamental philosophy has been baked into the technology?

Evaluating Technology

Now, the point of this workshop is not for me to answer those questions. I have a limited range of experience with the huge amount of web technologies out there. But collectively all of us attending the workshop will have a good range of experience and knowledge.

Interesting, then, that the technologies people voted for were:

  • service workers,
  • progressive web apps,
  • AMP,
  • web components,
  • pattern libraries and design systems.

Those are topics I actually do have some experience with. Lots of the attendees had heard of these things, they were really interested in finding out more about them, but they hadn’t necessarily used them yet.

And so I ended up doing a lot of the talking …which wasn’t the plan at all! That was just the way things worked out. I was more than happy to share my opinions on those topics, but it was a shame that I ended up monopolising the discussion. I felt for everyone having to listen to me ramble on.

Still, by the end of the day we had covered quite a few topics. Better yet, we had a good framework for categorising and evaluating web technologies. The specific technologies we covered were interesting enough, but the general approach provided the lasting value.

All in all, a great day with a great group of people.

Evaluating Technology

I’m already looking forward to running this workshop again. If you think it would be valuable for your company, get in touch.

Going offline at Indie Web Camp Düsseldorf

I’ve just come back from a ten-day trip to Germany. The trip kicked off with Indie Web Camp Düsseldorf over the course of a weekend.

IndieWebCamp Düsseldorf 2017

Once again the wonderful people at Sipgate hosted us in their beautiful building, and once again myself and Aaron helped facilitate the two days.

IndieWebCamp Düsseldorf 2017

Saturday was the BarCamp-like discussion day. Plenty of interesting topics were covered. I led a session on service workers, and that’s also what I decided to work on for the second day—that’s when the talking is done and we get down to making.

IndieWebCamp Düsseldorf 2017

I like what Ethan is doing on his offline page. He shows a list of pages that have been cached, but instead of just listing URLs, he shows a title and description for each page.

I’ve already got a separate cache for pages that gets added to as the user browses around my site. I needed to figure out a way to store the metadata for those pages so that I could then display it on the offline page. I came up with a workable solution, and interestingly, it involved no changes to the service worker script at all.

When you visit any blog post, I put metadata about the page into localStorage (after first checking that there’s an active service worker):

if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  window.addEventListener('load', function() {
    var data = {
      "title": "A minority report on artificial intelligence",
      "description": "Revisiting Spielberg’s films after a decade and a half.",
      "published": "May 7th, 2017",
      "timestamp": "1494171049"
    };
    localStorage.setItem(
      window.location.href,
      JSON.stringify(data)
    );
  });
}

In my case, I’m outputting the metadata from the server, but you could just as easily grab some from the DOM like this:

var data = {
  "title": document.querySelector("title").innerText,
  "description": document.querySelector("meta[name='description']").getAttribute("contents")
}

Meanwhile in my service worker, when you visit that same page, it gets added to a cache called “pages”. Both localStorage and the cache API are using URLs as keys. I take advantage of that on my offline page.
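For illustration, the page-caching part of a fetch handler could look something like this—a simplified sketch rather than my actual service worker script, which handles a few more cases:

self.addEventListener('fetch', event => {
  const request = event.request;
  // Only bother with page navigations, not assets
  if (request.mode === 'navigate') {
    event.respondWith(
      fetch(request)
      .then( responseFromFetch => {
        // Stash a copy of the page in the "pages" cache
        const copy = responseFromFetch.clone();
        caches.open('pages')
        .then( cache => {
          cache.put(request, copy);
        });
        return responseFromFetch;
      })
      .catch( () => caches.match(request) )
    );
  }
});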

The nice thing about writing JavaScript on my offline page is that I know the page will only be seen by modern browsers that support service workers, so I can use all sorts of fancy features from ES6, or whatever we’re calling it now.

I start by looping through the keys of the “pages” cache (that’s right—the cache API isn’t just for service workers; you can access it from any script). Then I check to see if there is a corresponding localStorage key with the same string (a URL). If there is, I pull the metadata out of local storage and add it to an array called browsingHistory:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then(keys => {
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });

Then I sort the list of pages in reverse chronological order:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I loop through each page in the browsing history list and construct a link to each URL, complete with title and description:

let markup = '';
browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

Finally I dump the constructed markup into a waiting div in the page with an ID of “history”:

let container = document.getElementById('history');
container.insertAdjacentHTML('beforeend', markup);

All those steps need to be wrapped inside the then clause attached to caches.open("pages") because the cache API is asynchronous.
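Putting all of those pieces together inside that then clause, the script on my offline page ends up looking more or less like this:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then( keys => {
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });
    // Sort the pages in reverse chronological order
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    // Construct a link to each page, complete with title and description
    let markup = '';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    // Dump the constructed markup into the waiting div
    let container = document.getElementById('history');
    container.insertAdjacentHTML('beforeend', markup);
  });
});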

There you have it. Now if you’re browsing adactio.com and your network connection drops (or my server goes offline), you can choose from a list of pages you’ve previously visited.

The current situation isn’t ideal though. I’ve got a clean-up operation in my service worker to limit the number of items stored in my “pages” cache. The cache never gets bigger than 35 items. But there’s no corresponding clean-up of metadata stored in localStorage. So there could be a lot more bits of metadata in local storage than there are pages in the cache. It’s not harmful, but it’s a bit wasteful.

I can’t do a clean-up of localStorage from my service worker because service workers can’t access localStorage. There’s a very good reason for that: the localStorage API is synchronous, and everything that happens in a service worker needs to be asynchronous.

Service workers can access indexedDB: it’s asynchronous. I could use indexedDB instead of localStorage, but I’m not a masochist. My best bet would be to use the localForage library, which wraps indexedDB in the simple syntax of localStorage.
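If I do go down that route, the clean-up could look something like this—a rough sketch rather than working code, and it assumes the metadata has been moved out of localStorage and into localForage (still keyed by URL); the script path is just a placeholder:

// Hypothetical: make localForage available inside the service worker
importScripts('/js/localforage.min.js');

function trimPages(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then( keys => {
      if (keys.length > maxItems) {
        // Delete the oldest page from the cache…
        cache.delete(keys[0])
        .then( () => {
          // …and the corresponding metadata, then check again
          return localforage.removeItem(keys[0].url);
        })
        .then( () => {
          trimPages(cacheName, maxItems);
        });
      }
    });
  });
}

Calling something like trimPages('pages', 35) from the service worker would then keep the cache and the metadata in sync.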

Maybe I’ll do that at the next Homebrew Website Club here in Brighton.

Patterns Day speakers

Ticket sales for Patterns Day are going quite, quite briskly. If you’d like to come along, but you don’t yet have a ticket, you might want to remedy that. Especially when you hear about who else is going to be speaking…

Sareh Heidari works at the BBC building websites for a global audience, in as many as twenty different languages. If you want to know about strategies for using CSS at scale, you definitely want to hear this talk. She just stepped off stage at the excellent CSSconf EU in Berlin, and I’m so happy that Sareh’s coming to Brighton!

Patterns Day isn’t the first conference about design systems and pattern libraries on the web. That honour goes to the Clarity conference, organised by the brilliant Jina Anne. I was gutted I couldn’t make it to Clarity last year. By all accounts, it was excellent. When I started to form the vague idea of putting on an event here in the UK, I immediately contacted Jina to make sure she was okay with it—I didn’t want to step on her toes. Not only was she okay with it, but she really wanted to come along to attend. Well, never mind attending, I said, how about speaking?

I couldn’t be happier that Jina agreed to speak. She has had such a huge impact on the world of pattern libraries through her work with the Lightning design system, Clarity, and the Design Systems Slack channel.

The line-up is now complete. Looking at the speakers, I find myself grinning from ear to ear—it’s going to be an honour to introduce each and every one of them.

This is going to be such an excellent day of fun and knowledge. I can’t wait for June 30th!

Styling the Patterns Day site

Once I had a design direction for the Patterns Day site, I started combining my marked-up content with some CSS. Ironically for an event that’s all about maintainability and reusability, I wrote the styles for this one-page site with no mind for future use. I treated the page as a one-shot document. I even used ID selectors—gasp! (the IDs were in the HTML anyway as fragment identifiers).

The truth is I didn’t have much of a plan. I just started hacking away in a style element in the head of the document, playing around with colour, typography, and layout.

I started with the small-screen styles. That wasn’t a conscious decision so much as just the way I do things automatically now. When it came time to add some layout for wider viewports, I used a sprinkling of old-fashioned display: inline-block so that things looked so-so. I knew I wanted to play around with Grid layout so the inline-block styles were there as fallback for non-supporting browsers. Once things looked good enough, the fun really started.

I was building the site while I was in Seattle for An Event Apart. CSS Grid layout was definitely a hot topic there. Best of all, I was surrounded by experts: Jen, Rachel, and Eric. It was the perfect environment for me to dip my toes into the waters of grid.

Jen was very patient in talking me through the concepts, syntax, and tools for using CSS grids. Top tip: open Firefox’s inspector, select the element with the display:grid declaration, and click the “waffle” icon—instant grid overlay!

For the header of the Patterns Day site, I started by using named areas. That’s the ASCII-art approach. I got my head around it and it worked okay, but it didn’t give me quite the precision I wanted. I switched over to using explicit grid-row and grid-column declarations.

It’s definitely a new way of thinking about layout: first you define the grid, then you place the items on it (rather than previous CSS layout systems where each element interacted with the elements before and after). It was fun to move things around and not have to worry about the source order of the elements …as long as they were direct children of the element with display:grid applied.

Without any support for sub-grids, I ended up having to nest two separate grids within one another. The logo is a grid parent, which is inside the header, also a grid parent. I managed to get things to line up okay, but I think this might be a good use case for sub-grids.

The logo grid threw up some interesting challenges. I wanted each letter of the words “Patterns Day” to be styleable, but CSS doesn’t give us any way to target individual letters other than :first-letter. I wrapped each letter in a b element, made sure that they were all wrapped in an element with an aria-hidden attribute (so that the letters wouldn’t be spelled out), and then wrapped that in an element with an aria-label of “Patterns Day.” Now I could target those b elements.

For a while, I also had a br element (between “Patterns” and “Day”). That created some interesting side effects. If a br element becomes a grid item, it starts to behave very oddly: you can apply certain styles but not others. Jen and Eric then started to test other interesting elements, like hr. There was much funkiness and gnashing of specs.

It was a total nerdfest, and I loved every minute of it. This is definitely the most excitement I’ve felt around CSS for a while. It feels like a renaissance of zen gardens and layout reservoirs (kids, ask your parents).

After a couple of days playing around with grid, I had the Patterns Day site looking decent enough to launch. I dabbled with some other fun CSS stuff in there too, like gratuitous clip paths and filters when hovering over the speaker images, and applying shape-outside with an image mask.

Go ahead and view source on the Patterns Day page if you want—I ended up keeping all the CSS in the head of the document. That turned out to be pretty good for performance …for first-time visits anyway. But after launching the site, I couldn’t resist applying some more performance tweaks.

Announcing Patterns Day: June 30th

Gather ‘round, my friends. I’ve got a big announcement.

You should come to Brighton on Friday, June 30th. Why? Well, apart from the fact that you can have a lovely Summer weekend by the sea, that’s when a brand new one-day event will be happening:

Patterns Day!

That’s right—a one-day event dedicated to all things patterny: design systems, pattern libraries, style guides, and all that good stuff. I’m putting together a world-class line-up of speakers. So far I’ve already got:

It’s going to be a brain-bendingly good day of ideas, case studies, processes, and techniques with something for everyone, whether you’re a designer, developer, product owner, content strategist, or project manager.

Best of all, it’s taking place in the splendid Duke Of York’s Picture House. If you’ve been to Remy’s FFconf then you’ll know what a great venue it is—such comfy, comfy seats! Well, Patterns Day will be like a cross between FFconf and Responsive Day Out.

Tickets are £150+VAT. Grab yours now. Heck, bring the whole team. Let’s face it, this is a topic that everyone is struggling with, so we’re all going to benefit from getting together for a day with our peers to hammer out the challenges of pattern libraries and design systems.

I’m really excited about this! I would love to see you in Brighton on the 30th of June for Patterns Day. It’s going to be fun!

Getting griddy with it

I had the great pleasure of attending An Event Apart Seattle last week. It was, as always, excellent.

It’s always interesting to see themes emerge during an event, especially when those thematic overlaps haven’t been planned in advance. Jen noticed this one:

I remember that being a theme at An Event Apart San Francisco too, when it seemed like every speaker had words to say about ill-judged use of Bootstrap. That theme was certainly in my presentation when I talked about “the fallacy of assumed competency”:

  1. large company X uses technology Y,
  2. company X must know what they are doing because they are large,
  3. therefore technology Y must be good.

Perhaps “the fallacy of assumed suitability” would be a better term. Heydon calls it “the ‘made at Facebook’ fallacy.” But I also made sure to contrast it with the opposite extreme: “Not Invented Here syndrome”.

As well as over-arching themes, it was also interesting to see which technologies were hot topics at An Event Apart. There was one clear winner here—CSS Grid Layout.

Microsoft—a sponsor of the event—used An Event Apart as the place to announce that Grid is officially moving into development for Edge. Jen talked about Grid (of course). Rachel talked about Grid (of course). And while Eric and Una didn’t talk about it on stage, they’ve both been writing about the fun they’ve been having with Grid. Una wrote about 3 CSS Grid Features That Make My Heart Flutter. Eric is documenting the overhaul of his site with Grid. So when we were all gathered together, that’s what we were nerding out about.

The CSS Squad.

There are some great resources out there for levelling up in Grid-fu:

With Jen’s help, I’ve been playing with CSS Grid on a little site that I’m planning to launch tomorrow (he said, foreshadowingly). It took me a while to get my head around it, but once it clicked I started to have a lot of fun. “Fun” seems to be the overall feeling around this technology. There’s something infectious about the excitement and enthusiasm that’s returning to the world of layout on the web. And now that browser support is great pretty much across the board, we can start putting that fun into production.

Progressive Web App questions

I got a nice email recently from Colin van Eenige. He wrote:

For my graduation project I’m researching the development of Progressive Web Apps and found your offline book called resilient web design. I was very impressed by the implementation of the website and it really was a nice experience.

I’m very interested in your vision on progressive web apps and what capabilities are waiting for us regarding offline content. Would it be fine if I’d send you some questions?

I said that would be fine, although I couldn’t promise a swift response. He sent me four questions. I finally got ‘round to sending my answers…

1. https://resilientwebdesign.com/ is an offline web book (progressive web app). What was the primary reason to make it available like this (besides the other formats)?

Well, given the subject matter, it felt right that the canonical version of the book should be not just online, but made with the building blocks of the web. The other formats are all nice to have, but the HTML version feels (to me) like the “real” book.

Interestingly, it wasn’t too much trouble for people to generate other formats from the HTML (ePub, MOBI, PDF), whereas I think trying to go in the other direction would be trickier.

As for the offline part, that felt like a natural fit. I had already done that with a previous book of mine, HTML5 For Web Designers, which I put online a year or two after its print publication. In that case, I used AppCache for the offline functionality. AppCache is horrible, but this use case might be one of the few where it works well: a static book that’s never going to change. Cache invalidation is one of the worst parts of using AppCache so by not having any kinds of updates at all, I dodged that bullet.

But when it came time for Resilient Web Design, a service worker was definitely the right technology. Still, I’ve got AppCache in there as well for the browsers that don’t yet support service workers.

2. What effect do you think Progressive Web Apps will have on content consumption, and do you think these will take over the purpose of some Native Apps?

The biggest effect that service workers could have is to change the expectations that people have about using the web, especially on mobile devices. Right now, people associate the web on mobile with long waits and horrible spammy overlays. Service workers can help solve that first part.

If people then start adding sites to their home screen, that will be a great sign that the web is really holding its own. But I don’t think we should get too optimistic about that: for a user, there’s no difference between a prompt on their screen saying “add to home screen” and a prompt on their screen saying “download our app”—they’re equally likely to be dismissed because we’ve trained people to dismiss anything that covers up the content they actually came for.

It’s entirely possible that websites could start taking over much of the functionality that previously was only possible in a native app. But I think that inertia and habit will keep people using native apps for quite some time.

The big exception is in markets where storage space on devices is in short supply. That’s where the decision to install a native app isn’t taken lightly (given the choice between your family photos and an app, most people will reject the app). The web can truly shine here if we build lightweight, performant services.

Even in that situation, I’m still not sure how many people will end up adding those sites to their home screen (it might feel so similar to installing a native app that there may be some residual worry about storage space) but I don’t think that’s too much of a problem: if people get to a site via search or typing, that’s fine.

I worry that the messaging around “progressive web apps” is perhaps over-fetishising the home screen. I don’t think that’s the real battleground. The real battleground is in people’s heads; how they perceive the web and how they perceive native.

After all, if the average number of native apps installed in a month is zero, then that’s not exactly a hard target to match. :-)

3. What is your vision regarding Progressive Web Apps?

For me, progressive web apps don’t feel like a separate thing from making websites. I worry that the marketing of them might inflate expectations or confuse people. I like the idea that they’re simply websites that have taken their vitamins.

So my vision for progressive web apps is the same as my vision for the web: something that people use every day for all sorts of tasks.

I find it really discouraging that progressive web apps are becoming conflated with single page apps and the app shell model. Those architectural decisions have nothing to do with service workers, HTTPS, and manifest files. Yet I keep seeing the concepts used interchangeably. It would be a real shame if people chose not to use these great technologies just because they don’t classify what they’re building as an “app.”

If anything, it’s good ol’ fashioned content sites (newspapers, wikipedia, blogs, and yes, books) that can really benefit from the turbo boost of service worker+HTTPS+manifest.

I was at a conference recently where someone was giving a talk encouraging people to build progressive web apps but discouraging people from doing it for their own personal sites. That’s a horrible, elitist attitude. I worry that this attitude is being codified in the term “progressive web app”.

4. What is the biggest learning you’ve had since working on Progressive Web Apps?

Well, like I said, I think that some people are focusing a bit too much on the home screen and not enough on the benefits that service workers can provide to just about any website.

My biggest learning is that these technologies aren’t for a specific subset of services, but can benefit just about anything that’s on the web. I mean, just using a service worker to explicitly cache static assets like CSS, JS, and some images is a no-brainer for almost any project.

So there you go—I’m very excited about the capabilities of these technologies, but very worried about how they’re being “sold”. I’m particularly nervous that in the rush to emulate native apps, we end up losing the very thing that makes the web so powerful: URLs.

Small steps

The new Clearleft website is live! Huzzah!

Many people have been working very hard on it and it’s all looking rather nice. But, as I said before, the site launch isn’t the end—it’s just the beginning.

There are some obvious next steps: fixing bugs, adding content, tweaking copy, and, oh yeah, that whole “testing with real users” thing. But there’s also an opportunity to have some fun on the front end. Now that the site is out there in the wild, there’s a real incentive to improve its performance.

Off the top of my head, these are some areas where I think we can play around:

  • Font loading. Right now the site is just using @font-face. A smart font-loading strategy—at least for the body copy—could really help improve the perceived performance (there’s a sketch of one possible approach after this list).
  • Responsive images. A long-term solution will require some wrangling on the back end, but I reckon we can come up with some way of generating different sized images to reference in srcset.
  • Service worker. It’s a no-brainer. Now that the Clearleft site is (finally!) running on HTTPS, having a simple service worker to cache static assets like CSS, JavaScript and some images seems like the obvious next step. The question is: what other offline shenanigans could we get up to?
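On that first point, a minimal sketch of one possible font-loading strategy using the CSS Font Loading API might look like this (the typeface name is just a placeholder—nothing has been decided yet):

if ('fonts' in document) {
  // Load the body copy typeface, then flag the document
  document.fonts.load('1em "Body Typeface"')
  .then( () => {
    document.documentElement.classList.add('fonts-loaded');
  });
}

The CSS would then scope the @font-face usage to the .fonts-loaded class, so text renders in a fallback font until the webfont is ready.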

I’m looking forward to tinkering with some of those technologies. Each one should make an incremental improvement to the site’s performance. There are already some steps on the back-end that are making a big difference: upgrading to PHP7 and using HTTP2.

Now the real fun begins.

Amber

I really enjoyed teaching in Porto last week. It was like having a week-long series of CodeBar sessions.

Whenever I’m teaching at CodeBar, I like to be paired up with people who are just starting out. There’s something about explaining the web and HTML from first principles that I really like. And people often have lots and lots of questions that I enjoy answering (if I can). At CodeBar—and at The New Digital School—I found myself saying “Great question!” multiple times. The really great questions are the ones that I respond to with “I don’t know …let’s find out!”

CodeBar is always a very rewarding experience for me. It has given me the opportunity to try teaching. And having tried it, I can now safely say that I like it. It’s also a great chance to meet people from all walks of life. It gets me out of my bubble.

I can’t remember when I was first paired up with Amber at CodeBar. It must have been sometime last year. I do remember that she had lots of great questions—at some point I found myself explaining how hexadecimal colours work.

I was impressed with Amber’s eagerness to learn. I also liked that she was making her own website. I told her about Homebrew Website Club and she started coming along to that (along with other CodeBar people like Cassie and Alice).

I’ve mentioned to multiple CodeBar students that there’s pretty much an open-door policy at Clearleft when it comes to shadowing: feel free to come along and sit with a front-end developer while they’re working on client projects. A few people have taken up the offer and enjoyed observing myself or Charlotte at work. Amber was one of those people. Again, I was very impressed with her drive. She’s got a full-time job (with sometimes-crazy hours) but she’s so determined to get into the world of web design and development that she’s willing to spend her free time visiting Clearleft to soak up the atmosphere of a design studio.

We’ve decided to turn this into something more structured. Amber and I will get together for a couple of hours once a week. She’s given me a list of some of the areas she wants to explore, and I think it’s a fine-looking list:

  • I want to gather base, structural knowledge about the web and all related aspects. Things seem to float around in a big cloud at the moment.
  • I want to adhere to best practices.
  • I want to learn more about what direction I want to go in, find a niche.
  • I’d love the opportunity to chat with the brilliant people who work at Clearleft and gain a broad range of knowledge from them.

My plan right now is to take a two-track approach: one track about the theory, and another track about the practicalities. The practicalities will be HTML, CSS, JavaScript, and related technologies. The theory will be about understanding the history of the web and its strengths and weaknesses as a medium. And I want to make sure there’s plenty of UX, research, information architecture and content strategy covered too.

Seeing as we’ll only have a couple of hours every week, this won’t be quite like the masterclass I just finished up in Porto. Instead I imagine I’ll be laying some groundwork and then pointing to topics to research. I guess it’s a kind of homework. For example, after we talked today, I set Amber this little bit of research for the next time we meet: “What is the difference between the internet and the World Wide Web?”

I’m excited to see where this will lead. I find Amber’s drive and enthusiasm very inspiring. I also feel a certain weight of responsibility—I don’t want to enter into this lightly.

I’m not really sure what to call this though. Is it mentorship? Or is it coaching? Or training? All of the above?

Whatever it is, I’m looking forward to documenting the journey. Amber will be writing about it too. She is already demonstrating a way with words.

Teaching in Porto, day four

Day one covered HTML (amongst other things), day two covered CSS, and day three covered JavaScript. Each one of those days involved a certain amount of hands-on coding, with the students getting their hands dirty with angle brackets, curly braces, and semi-colons.

Day four was a deliberate step away from all that. No more laptops, just paper. Whereas the previous days had focused on collaboratively working on a single document, today I wanted everyone to work on a separate site.

The sites were generated randomly. I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

For a bit of fun, the first brainstorming exercise (run as a 6-up) was to come up with potential names for this service—4 minutes for 6 ideas. Then we went around the table, shared the ideas, got feedback, and settled on the names.

Now I asked everyone to come up with a one-sentence mission statement for their newly-named service. This was a good way of teasing out the most important verbs and nouns, which led nicely into the next task: answering the question “what is the core functionality?”

If that sounds familiar, it’s because it’s the first part of the three-step process I outlined in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

We did some URL design, figuring out what structures would make sense for straightforward GET requests, like:

  • /things
  • /things/ID

Then, once it was clear what the primary “thing” was (a car, a book, etc.), I asked them to write down all the pieces that might appear on such a page; one post-it note per item e.g. “title”, “description”, “img”, “rating”, etc.

The next step involved prioritisation. They took those post-it notes and put them on the wall, but they had to put them in a vertical line from top to bottom in decreasing order of importance. This can be a challenge, but it’s better to solve these problems now rather than later.

Okay. Now I asked them to “mark up” those vertical lists of post-it notes: writing HTML tag names by each one. By doing this before doing any visual design, it meant they were thinking about the meaning of the content first.

After that, we did a good ol’ fashioned classic 6-up sketching exercise, followed by critique (including a “designated dissenter” for each round). At this point, I was encouraging them to go crazy with ideas—they already had the core functionality figured out (with plain ol’ client/server requests and responses) so they could add all the bells and whistles they wanted on top of that.

We finished up with a discussion of some of those bells and whistles, and how they could be used to improve the user experience: Ajax, geolocation, service workers, notifications, background sync …the sky’s the limit.

It was a whirlwind tour for just one day but I think it helped emphasise the importance of thinking about the fundamentals before adding enhancements.

This marked the end of the structured masterclass lessons. Tomorrow I’m around to answer any miscellaneous questions (if I can) and chat to the students individually while they work on their term projects.

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service worker is supported across the board), I’m using both. I wouldn’t recommend that for most sites though—for most sites, use a service worker to enhance it, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fallback to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where the freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. The code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

Vertical limit

When I was first styling Resilient Web Design, I made heavy use of vh units. The vertical spacing between elements—headings, paragraphs, images—was all proportional to the overall viewport height. It looked great!

Then I tested it on real devices.

Here’s the problem: when a page loads up in a mobile browser—like, say, Chrome on an Android device—the URL bar is at the top of the screen. The height of that piece of the browser interface isn’t taken into account for the viewport height. That makes sense: the viewport height is the amount of screen real estate available for the content. The content doesn’t extend into the URL bar, therefore the height of the URL bar shouldn’t be part of the viewport height.

But then if you start scrolling down, the URL bar scrolls away off the top of the screen. So now it’s behaving as though it is part of the content rather than part of the browser interface. At this point, the value of the viewport height changes: now it’s the previous value plus the height of the URL bar that was previously there but which has now disappeared.

I totally understand why the URL bar is squirrelled away once the user starts scrolling—it frees up some valuable vertical space. But because that necessarily means recalculating the viewport height, it effectively makes the vh unit in CSS very, very limited in scope.

In my initial implementation of Resilient Web Design, the one where I was styling almost everything with vh, the site was unusable. Every time you started scrolling, things would jump around. I had to go back to the drawing board and remove almost all instances of vh from the styles.

I’ve left it in for one use case and I think it’s the most common use of vh: making an element take up exactly the height of the viewport. The front page of the web book uses min-height: 100vh for the title.

Scrolling.

But as soon as you scroll down from there, that element changes height. The content below it suddenly moves.

Let’s say the overall height of the browser window is 600 pixels, of which 50 pixels are taken up by the URL bar. When the page loads, 100vh is 550 pixels. But as soon as you scroll down and the URL bar floats away, the value of 100vh becomes 600 pixels.

(This also causes problems if you’re using vertical media queries. If you choose the wrong vertical breakpoint, then the media query won’t kick in when the page loads but will kick in once the user starts scrolling …or vice-versa.)

There’s a mixed message here. On the one hand, the browser is declaring that the URL bar is part of its interface; that the space is off-limits for content. But then, once scrolling starts, that is invalidated. Now the URL bar is behaving as though it is part of the content scrolling off the top of the viewport.

The result of this messiness is that the vh unit is practically useless for real-world situations with real-world devices. It works great for desktop browsers if you’re grabbing the browser window and resizing, but that’s not exactly a common scenario for anyone other than web developers.

I’m sure there’s a way of solving it with JavaScript but that feels like using an atomic bomb to crack a walnut—the whole point of having this in CSS is that we don’t need to use JavaScript for something related to styling.

It’s such a shame. A piece of CSS that’s great in theory, and is really well supported, just falls apart where it matters most.

Update: There’s a two-year old bug report on this for Chrome, and it looks like it might actually get fixed in February.

Fractal ways

24 Ways is back! That’s how we web nerds know that the Christmas season is here. It kicked off this year with a most excellent bit of hardware hacking from Seb: Internet of Stranger Things.

The site is looking lovely as always. There’s also a component library to accompany it: Bits, the front-end component library for 24 ways. Nice work, courtesy of Paul. (I particularly like the comment component example).

The component library is built with Fractal, the magnificent tool that Mark has open-sourced. We’ve been using it at Clearleft for a while now, but we haven’t had a chance to make any of the component libraries public, so it’s really great to be able to point to the 24 Ways example. The code is all on GitHub too.

There’s a really good buzz around Fractal right now. Lots of people in the design systems Slack channel are talking about it. There’s also a dedicated Fractal Slack channel for people getting into the nitty-gritty of using the tool.

If you’re currently wrestling with the challenges of putting a front-end component library together, be sure to give Fractal a whirl.

Between the braces

In a post called Side Effects in CSS that he wrote a while back, Philip Walton talks about different kinds of challenges in writing CSS:

There are two types of problems in CSS: cosmetic problems and architectural problems.

The cosmetic problems are solved by making something look the way you want it to. The architectural problems are trickier because they have more long-term effects—maintainability, modularity, encapsulation; all that tricky stuff. Philip goes on to say:

If I had to choose between hiring an amazing designer who could replicate even the most complicated visual challenges easily in code and someone who understood the nuances of predictable and maintainable CSS, I’d choose the latter in a heartbeat.

This resonates with something I noticed a while back while I was doing some code reviews. Most of the time when I’m analysing CSS and trying to figure out how “good” it is—and I know that’s very subjective—I’m concerned with what’s on the outside of the curly braces.

selector {
    property: value;
}

The stuff inside the curly braces—the properties and values—that’s where the cosmetic problems get solved. It’s also the stuff that you can look up; I certainly don’t try to store all possible CSS properties and values in my head. It’s also easy to evaluate: Does it make the thing look like you want it to look? Yes? Good. It works.

The stuff outside the curly braces—the selectors—that’s harder to judge. It needs to be evaluated with lots of “what ifs”: What if this selects something you didn’t intend to? What if the markup changes? What if someone else writes some CSS that negates this?

I find it fascinating that most of the innovation in CSS from the browser makers and standards bodies arrives in the form of new properties and values—flexbox, grid, shapes, viewport units, and so on. Meanwhile there’s a whole other world of problems to be solved outside the curly braces. There’s not much that the browser makers or standards bodies can do to help us there. I think that’s why most of the really interesting ideas and thoughts around CSS in recent years have focused on that challenge.

Less JavaScript

Every front-end developer at Clearleft went to FFConf last Friday: me, Mark, Graham, Charlotte, and Danielle. We weren’t about to pass up the opportunity to attend a world-class dev conference right here in our home base of Brighton.

The day was unsurprisingly excellent. All the speakers brought their A-game on a wide range of topics. Of course JavaScript was covered, but there was also plenty of mindfood on CSS, accessibility, progressive enhancement, dev tools, creative coding, and even emoji.

Normally FFConf would be a good opportunity to catch up with some Pauls from the Google devrel team, but because of an unfortunate scheduling clash this year, all the Pauls were at Chrome Dev Summit 2016 on the other side of the Atlantic.

I’ve been catching up on the videos from the event. There’s plenty of tech-related stuff: dev tools, web components, and plenty of talk about progressive web apps. But there was also a very, very heavy focus on performance. I don’t just mean performance at the shallow scale of file size and optimisation, but a genuine questioning of the impact of our developer workflows and tools.

In his talk on service workers (what else?), Jake makes the point that not everything needs to be a single page app, echoing Ada’s talk at FFConf.

He makes the point that if you really want fast rendering, nothing on the client side quite beats a server render.

They’ve written a lot of JavaScript to make this quite slow.

Unfortunately, all too often, I hear people say that a progressive web app must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There’s a lot of cargo-culting around single page apps.

Alex followed up his barnstorming talk from the Polymer Summit with some more uncomfortable truths about how mobile phones work.

Cell networks are basically kryptonite to the protocols and assumptions that the web was built on.

And JavaScript frameworks aren’t helping. Quite the opposite.

But make no mistake: if you’re using one of today’s more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this.

Today’s frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance.

Assumptions

Last year Benedict Evans wrote about the worldwide proliferation and growth of smartphones. Nolan referenced that post when he extrapolated the kind of experience people will be having:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

This is the same argument that Tom made in his presentation at Responsive Field Day. The main point is that network conditions are unreliable, and I absolutely agree that we need to be very, very mindful of that. But I’m not so sure about the other conditions either. They smell like assumptions:

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

It’s not necessarily true that all those new web users will be running a WebView browser like Chrome—there are millions of Opera Mini users, and I would expect that number to rise, given all the speed and cost benefits that proxy browsing brings.

I also don’t think that just because a device is a smartphone it necessarily means that it’s a pocket supercomputer. It might seem like a reasonable assumption to make, given the specs of even a low-end smartphone, but the specs don’t tell the whole story.

Alex gave a great presentation at the recent Polymer Summit. He dives deep into exactly how smartphones at the lower end of the market deal with websites.

I don’t normally enjoy listening to talk of hardware and specs, but Alex makes the topic very compelling by tying it directly to how we build websites. In short, we’re using waaaaay too much JavaScript. The message here is not “don’t use JavaScript” but rather “use JavaScript wisely.” Alas, many of the current crop of monolithic frameworks aren’t well suited to this.

Alex’s talk prompted Michael Scharnagl to take a look back at past assumptions and lessons learned on the web, from responsive design to progressive web apps.

We are consistently improving and we often have to realize that our assumptions are wrong.

This is particularly true when we’re making assumptions about how people will access the web.

It’s not enough to talk about the “next billion” in abstract, like an opportunity to reach teeming masses of people ripe for monetization. We need to understand their lives and their priorities with the sort of detail that can build empathy for other people living under vastly different circumstances.

That’s from an article Ethan linked to, noting:

Choice

Laurie Voss has written a thoughtful article called Web development has two flavors of graceful degradation in response to Nolan Lawson’s recent article. But I’m afraid I don’t agree with Laurie’s central premise:

…web app development and web site development are so different now that they probably shouldn’t be called the same thing anymore.

This is an idea I keep returning to, and each time I do, I find that it just isn’t that simple. There are very few web thangs that are purely interactive without any content, and there are also very few web thangs that are purely passive without any interaction. Instead, it’s a spectrum. Quite often, the position on that spectrum changes according to the needs of the user at any particular time—are Twitter and Flickr web sites while I’m viewing text and images, but do they then transmogrify into web apps the moment I want to add, update, or delete a piece of text or an image?

In any case, the more interesting question than “is something a web site or a web app?” is the question “why?” Why does it matter? In my experience, the answer to that question generally comes down to the kind of architectural approach that a developer will take.

That’s exactly what Laurie dives into in his post. For web apps, use one architectural approach—for web sites, use a different architectural approach. To summarise:

  • in a web app, front-load everything and rely on client-side JavaScript for all subsequent interaction,
  • in a web site, optimise for many page loads, and make sure you don’t rely on client-side JavaScript.

I’m oversimplifying here, but the general idea is:

  • build web apps with the single page app architecture,
  • build web sites with progressive enhancement.

That’s sensible advice, but I’m worried that it could lead to a tautological definition of what constitutes a web app:

  1. This is a web app so it’s built as a single page app.
  2. Why do you define it as a web app?
  3. Because it’s built as a single page app.

The underlying question of what makes something a web app is bypassed by the architectural considerations …but the architectural considerations should be based on that underlying question. Laurie says:

If you are developing an app, the user ideally loads the app exactly once — whether it’s over a slow connection or not.

And similarly:

But if you are developing a web site consisting of many discrete pages, the act of loading goes from a single event to the most common event.

I completely agree that the architectural approach of single page apps is better suited to some kinds of web thangs more than others. It’s a poor architectural choice for a content-based site like nasa.gov, for example. Progressive enhancement would make more sense there.

But I don’t think that the architectural choices need to be in opposition. It’s entirely possible to reconcile the two. It’s not always easy—and the further along that spectrum you are, the tougher it gets—but it’s doable. You can begin with progressive enhancement, and then build up to a single page app architecture for more capable browsers.

I think that’s going to get easier as frameworks adopt a more mixed approach. Almost all the major libraries are working on server-side rendering as a default. Ember is leading the way with FastBoot, and Angular Universal is following. Neither of them are doing it for reasons of progressive enhancement—they’re doing it for performance and SEO—but the upshot is that you can more easily build a web app that simultaneously uses progressive enhancement and a single-page app model.

I guess my point is that I don’t think we should get too locked into the idea of web apps and web sites requiring fundamentally different approaches, especially with the changes in the technologies we used to build them.

We’ve made the mistake in the past of framing problems as “either/or”, when in fact, the correct solution was “both!”:

  • you can either have a desktop site or a mobile site,
  • you can either have rich interactivity or accessibility,
  • you can either have a single page app or progressive enhancement.

We don’t have to choose. It might take more work, but we can have our web cake and eat it.

The false dichotomy that I’m most concerned about is the pernicious idea that offline functionality is somehow in opposition to progressive enhancement. Given the design of service workers, I find this proposition baffling.

This remark by Tom is the very definition of a false dichotomy:

People who say your site should work without JavaScript are actually hurting the people they think they’re helping.

He was also linking to Nolan’s article, which could indeed be read as saying that you should build for offline instead of building with progressive enhancement. But I don’t think that’s what Nolan is saying (at least, I sincerely hope not). I think that Nolan is saying that we should prioritise the offline scenario over scenarios where JavaScript fails or isn’t available. That’s a completely reasonable thing to say. But the idea that we should build for the offline scenario instead of scenarios where JavaScript fails is absurdly reductionist. We don’t have to choose!

But I can certainly understand how developers might come to believe that building a progressive web app is at odds with progressive enhancement. Having made a bunch of progressive web apps (Huffduffer, The Session, this site), I can testify that service workers work superbly as a layer on top of an existing site, but all the messaging around progressive web apps seems fixated on the idea of the app-shell model (a small tweak to the single page app model, where a little bit of interface is available on the initial page load instead of requiring JavaScript for absolutely everything). Again, it’s entirely possible to reconcile the app-shell approach with server rendering and progressive enhancement, but nobody seems to be talking about that. Instead, all of the examples and demos are built with an assumption about JavaScript availability.

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

Tom’s tweet included a screenshot of this part of Nolan’s article:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

Those seem like a reasonable set of assumptions. But even there, things aren’t so simple. Will people really be using “an evergreen browser and WebView”? Millions of people use proxy browsers like Opera Mini, which means you can’t guarantee JavaScript availability beyond the initial page load. UC Browser—which can also run in proxy mode—is now the second most popular mobile browser in the world.

That’s just one nit-picky example, but what I’m getting at here is that it really isn’t safe to make any assumptions. When we must make assumptions, let’s try to make them a last resort.

And just to be clear here: I’m not saying that because we can’t make assumptions about devices or browsers, we therefore can’t build rich interactive web apps that work offline. I’m saying that we can build rich interactive web apps that work offline and also work when JavaScript fails or isn’t supported.

You don’t have to choose between progressive enhancement and a single page app/progressive web app/app shell/other things with the word “app”.

Progressive enhancement is an architectural approach to building on the web. You don’t have to use it, but please try to remember that it is your choice to make. You can choose to build a web app using progressive enhancement or not—there is nothing inherent in the nature of the thing you’re building that precludes progressive enhancement.

Personally, I find progressive enhancement a sensible way to counteract any assumptions I might inadvertently make. Progressive enhancement increases the chances that the web site (or web app) I’m building is resilient to the kind of scenarios that I never would’ve predicted or anticipated.

That’s why I choose to use progressive enhancement …and build progressive web apps.

Someday

In the latest issue of Justin’s excellent Responsive Web Design weekly newsletter, he includes a segment called “The Snippet Show”:

This is what tells all our browsers on all our devices to set the viewport to be the same width of the current device, and to also set the initial scale to 1 (not scaled at all). This essentially allows us to have responsive design consistently.

<meta name="viewport" content="width=device-width, initial-scale=1">

The viewport value for the meta element was invented by Apple when the iPhone was released. Back then, it was a safe bet that most websites were wider than the iPhone’s 320 pixel wide display—most of them were 960 pixels wide …because reasons. So mobile Safari would automatically shrink those sites down to fit within the display. If you wanted to over-ride that behaviour, you had to use the meta viewport gubbins that they made up.

That was nine years ago. These days, if you’re building a responsive website, you still need to include that meta element.

That seems like a shame to me. I’m not suggesting that the default behaviour should switch to assuming a fluid layout, but maybe the browser could just figure it out. After all, the CSS will already be parsed by the time the HTML is rendering. Perhaps a quick test for the presence of a horizontal scrollbar could be used to trigger the shrinking behaviour. No scrollbar, no shrinking.

Maybe someday the assumption behind the current behaviour could be flipped—assume a website is responsive unless the author explicitly requests the shrinking behaviour. I’d like to think that could happen soon, but I suspect that a depressingly large number of sites are still fixed-width (I don’t even want to know—don’t tell me).

There are other browser default behaviours that might someday change. Right now, if I type example.com into a browser, it will first attempt to contact http://example.com rather than https://example.com. That means the example.com server has to do a redirect, costing the user valuable time.

You can mitigate this by putting your site on the HSTS preload list (the header involved is sketched below), but wouldn’t it be nice if browsers first checked for HTTPS instead of HTTP? I don’t think that will happen anytime soon, but someday …someday.
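For reference, opting in to that preload list means serving a Strict-Transport-Security header along these lines (the max-age value here is illustrative; check the current submission requirements):

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload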

The imitation game

Jason shared some thoughts on designing progressive web apps. One of the things he’s pondering is how much you should try to make your web-based offering look and feel like a native app.

This was prompted by an article by Owen Campbell-Moore over on Ev’s blog called Designing Great UIs for Progressive Web Apps. He begins with this advice:

Start by forgetting everything you know about conventional web design, and instead imagine you’re actually designing a native app.

This makes me squirm. I mean, I’m all for borrowing good ideas from other media—native apps, TV, print—but I don’t think that inspiration should mean imitation. For me, that always results in an interface that sits in a kind of uncanny valley of being almost—but not quite—like the thing it’s imitating.

With that out of the way, most of the recommendations in Owen’s article are sensible ideas about animation, input, and feedback. But then there’s recommendation number eight: Provide an easy way to share content:

PWAs are often shown in a context where the current URL isn’t easily accessible, so it is important to ensure the user can easily share what they’re currently looking at. Implement a share button that allows users to copy the URL to the clipboard, or share it with popular social networks.

See, when a developer has to implement a feature that the browser should be providing, that seems like a bad code smell to me. This is a problem that Opera is solving (and Google says it is solving, while simultaneously penalising developers who expose the URL to end users).

Anyway, I think my squeamishness about all the advice to imitate native apps is because it feels like a cargo cult. There seems to be an inherent assumption that native is intrinsically “better” than the web, and that the only way that the web can “win” is to match native apps note for note. But that misses out on all the things that only the web can do—instant distribution, low-friction sharing, and the ability to link to any other resource on the web (and be linked to in turn). Turning our beautifully-networked nodes into standalone silos just because that’s the way that native apps have to work feels like the cure that kills the patient.

If anything, my advice for building a progressive web app would be the exact opposite of Owen’s: don’t forget everything you’ve learned about web design. In my opinion, the term “progressive web app” can be read in order of priority:

  1. Progressive—build in a layered way so that anyone can access your content, regardless of what device or browser they’re using, rewarding the more capable browsers with more features.
  2. Web—you’re building for the web. Don’t lose sight of that. URLs matter. Accessibility matters. Performance matters.
  3. App—sure, borrow what works from native apps if it makes sense for your situation.

Jason asks questions about how your progressive web app will behave when it’s added to the home screen. How much do you match the platform? How do you manage going chromeless? And the big one: what do users expect?

Will people expect an experience that maps to native conventions? Or will they be more accepting of deviation because they came to the app via the web and have already seen it before installing it?

These are good questions and I share Jason’s hunch:

My gut says that we can build great experiences without having to make it feel exactly like an iOS or Android app because people will have already experienced the Progressive Web App multiple times in the browser before they are asked to install it.

In all the messaging from Google about progressive web apps, there’s a real feeling that the ability to install to—and launch from—the home screen is a game changer. I’m not so sure that we should be betting the farm on that feature (the offline possibilities opened up by service workers feel like more of a game changer to me).

People have been gleefully passing around the statistic that the average number of native apps installed per month is zero. So how exactly will we measure the success of progressive web apps against native apps …when the average number of progressive web apps installed per month is zero?

I like Android’s add-to-home-screen algorithm (although it needs tweaking). It’s a really nice carrot to reward the best websites with. But let’s not get carried away. I think that most people are not going to click that “add to home screen” prompt. Let’s face it, we’ve trained people to ignore prompts like that. When someone is trying to find some information or complete a task, a prompt that pops up saying “sign up to our newsletter” or “download our native app” or “add to home screen” is a distraction to be dismissed. The fact that only the third example is initiated by the operating system, rather than the website, is irrelevant to the person using the website.

Getting the “add to home screen” prompt for https://huffduffer.com/ on Android Chrome.

My hunch is that the majority of people will still interact with your progressive web app via a regular web browser view. If, then, only a minority of people are going to experience your site launched from the home screen in a native-like way, I don’t think it makes sense to prioritise that use case.

The great thing about progressive web apps is that they are first and foremost websites. Literally everyone who interacts with your progressive web app is first going to do so the old-fashioned way, by following a link or typing in a URL. They may later add it to their home screen, but that’s just a bonus. I think it’s important to build progressive web apps accordingly—don’t pretend that it’s just like building a native app just because some people will be visiting via the home screen.

I’m worried that developers are going to think that progressive web apps are something that needs to be built from scratch; that you have to start with a blank slate and build something new in a completely new way. Now, there are some good examples of this kind of one-off progressive web app—The Guardian’s RioRun is nicely done. But I don’t think that the majority of progressive web apps should fall into that category. There’s nothing to stop you taking an existing website and transforming it step-by-step into a progressive web app:

  1. Switch over to HTTPS if you aren’t already.
  2. Use a service worker, even if it’s just to provide a custom offline page and cache some static assets.
  3. Make a manifest file to point to an icon and specify some colours.

See? Not exactly a paradigm shift in how you approach building for the web …but those deceptively straightforward steps will really turbo-boost your site.
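To give a rough idea of the second step, here’s a minimal sketch of a service worker that provides a custom offline page and caches a few static assets (the cache name and file paths are placeholders):

// serviceworker.js: a minimal sketch. The cache name and file paths are placeholders.
const cacheName = 'static-v1';
const staticAssets = [
  '/offline.html',
  '/styles.css',
  '/icon.png'
];

self.addEventListener('install', (event) => {
  // Cache the offline page and a few static assets up front
  event.waitUntil(
    caches.open(cacheName).then((cache) => cache.addAll(staticAssets))
  );
});

self.addEventListener('fetch', (event) => {
  // A deliberately naive, network-first strategy:
  // try the network, fall back to the cache, and finally to the offline page
  event.respondWith(
    fetch(event.request)
      .catch(() => caches.match(event.request))
      .then((response) => response || caches.match('/offline.html'))
  );
});

Registering it from your pages is a one-line, feature-detected affair, so browsers without support carry on exactly as before.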

I’m really excited about progressive web apps …but mostly for the “progressive” and “web” parts. Maybe I’ll start calling them progressive web sites. Or progressive web thangs.

Extensible web components

Adam Onishi has written up his thoughts on web components and progressive enhancement, following on from a discussion we were having on Slack. He shares a lot of the same frustrations as I do.

Two years ago, I said:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

I still feel that way. In theory, web components are very exciting. In practice, web components are very worrying. The worrying aspect comes from the treatment of backwards compatibility.

It all comes down to the way custom elements work. When you make up a custom element, it’s basically a span.

<fancy-select></fancy-select>

Then, using JavaScript with Shadow DOM, templates, and the other specs that together make up the web components ecosystem, you turn that inert span-like element into something all-singing, all-dancing. That’s great if the browser supports those technologies, and the JavaScript executes successfully. But if either of those conditions aren’t met, what you’re left with is basically a span.

One of the proposed ways around this was to allow custom elements to extend existing elements (not just spans). The proposed syntax for this was an is attribute.

<select is="fancy-select">...</select>

Browser makers responded to this by saying “Nah, that’s too hard.”

To be honest, I had pretty much given up on the is functionality ever seeing the light of day, but Monica has rekindled my hope:

Still, I’m not holding my breath for this kind of declarative extensibility landing in browsers any time soon. Instead, a JavaScript-based way of extending existing elements is currently the only way of piggybacking on all the accessible behavioural goodies you get with native elements.

class FancySelect extends HTMLSelectElement

But this imperative approach fails completely if custom elements aren’t supported, or if the JavaScript fails to execute. Now you’re back to having spans.
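For illustration, the full imperative version might look something like this; a sketch assuming the custom elements v1 API, with a purely hypothetical enhancement in connectedCallback:

class FancySelect extends HTMLSelectElement {
  connectedCallback() {
    // Hypothetical enhancement: flag the upgraded control so CSS and scripts can target it
    this.classList.add('fancy');
  }
}

// In browsers that don't support customised built-in elements, none of this runs
// and no enhancement happens.
if ('customElements' in window) {
  customElements.define('fancy-select', FancySelect, { extends: 'select' });
}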

The presentation on web components at the Progressive Web Apps Dev Summit referred to this JavaScript-based extensibility as “progressively enhancing what’s already available”, which is a bit of a stretch, given how completely it falls apart in older browsers. It was kind of a weird talk, to be honest. After fifteen minutes of talking about creating elements entirely from scratch, there was a minute or two devoted to the is attribute and extending existing elements …before carrying on as though those two minutes never happened.

But even without any means of extending existing elements, it should still be possible to define custom elements that have some kind of fallback in non-supporting browsers:

<fancy-select>
 <select>...</select>
</fancy-select>

In that situation, you at least get a regular ol’ select element in older browsers (or in modern browsers before the JavaScript kicks in and uplifts the custom element).
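A sketch of how that uplift might work; the element name matches the markup above, the enhancement is hypothetical, and it assumes the defining script runs after the markup has been parsed (loaded with defer, say) so the nested select already exists:

class FancySelect extends HTMLElement {
  connectedCallback() {
    // The select served in the markup is the fallback content: find it and enhance it
    const select = this.querySelector('select');
    if (!select) {
      return;
    }
    // Hypothetical enhancement: mirror the chosen value onto the wrapper as a styling hook
    select.addEventListener('change', () => {
      this.dataset.value = select.value;
    });
  }
}

if ('customElements' in window) {
  customElements.define('fancy-select', FancySelect);
}

Older browsers, and any browser where the JavaScript fails, simply render the select inside.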

Adam has a great example of this in his post:

I’ve been thinking of a gallery component lately, where you’d have a custom element, say <o-gallery> for want of a better example, and simply populate it with images you want to display, with custom elements and shadow DOM you can add all the rest, controls/layout etc. Markup would be something like:

<o-gallery>
 <img src="">
 <img src="">
 <img src="">
</o-gallery>

If none of the extra stuff loads, what do we get? Well you get 3 images on the page. You still get the content, but just none of the fancy interactivity.

Yes! This, in my opinion, is how we should be approaching the design of web components. This is what gets me excited about web components.

Then I look at pretty much all the examples of web components out there and my nervousness kicks in. Hardly any of them spare a thought for backwards-compatibility. Take a look, for example, at the entire contents of the body element for the Polymer Shop demo site:

<shop-app unresolved="">SHOP</shop-app>

This seems really odd to me, because I don’t think it’s a good way to “sell” a technology.

Compare service workers to web components.

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Service workers work well and fail well. If a browser supports service workers, the user gets all the benefits. If a browser doesn’t support service workers, the user gets the same experience they would have always had.
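That graceful failure is baked into the registration itself (the script path here is a placeholder):

// Browsers without service worker support skip this entirely; nothing is lost
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}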

Web components (will) work well, but fail badly. If a browser supports web components, the user gets the experience that the developer has crafted using these new technologies. If a browser doesn’t support web components, the user gets …probably nothing. It depends on how the web components have been designed.

It’s so much easier to get excited about implementing service workers. You’ve literally got nothing to lose and everything to gain. That’s not the case with web components. Or at least not with the way they are currently being sold.

See, this is why I think it’s so important to put some effort into designing web components that have some kind of fallback. Those web components will work well and fail well.

Look at the way new elements are designed for HTML. Think of complex additions like canvas, audio, video, and picture. Each one has been designed with backwards-compatibility in mind—there’s always a way to provide fallback content.

Web components give us developers the same power that, up until now, only belonged to browser makers. Web components also give us developers the same responsibilities as browser makers. We should take that responsibility seriously.

Web components are supposed to be the poster child for The Extensible Web Manifesto. I’m all for an extensible web. But the way that web components are currently being built looks more like an endorsement of The Replaceable Web Manifesto. I’m not okay with a replaceable web.

Here’s hoping that my concerns won’t be dismissed as “piffle and tosh” again by the very people who should be thinking about these issues.