Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archaeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally, one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see how there was an explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.
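The actual code is a little more involved, but the gist is something like this sketch (the loading class here is my invention, added to the body and then removed once the page is ready): start every event at the centre, then let left and right transition out to their calculated positions.

.h-event {
  transition: all 0.25s ease-out;
}
.loading .h-event {
  /* before the class is removed, every event sits at the centre point */
  left: calc(50% - 2.5em);
  right: calc(50% - 2.5em);
}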

Et voilà!

I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book which was on our desk at all times

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS, in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

WorldWideWeb

Nine people came together at CERN for five days and made something amazing. I still can’t quite believe it.

Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.

Want to see the final result? Here you go:

worldwideweb.cern.ch

That’s the website we built. The call to action is hard to miss:

Launch WorldWideWeb

Behold! A simulation of using the first ever web browser, recreated inside your web browser.

Now you could try clicking around on the links on the opening document—remembering that you need to double-click on links to activate them—but you’ll quickly find that most of them don’t work. They’re long gone. So it’s probably going to be more fun to open a new page to use as your starting point. Here’s how you do that:

  1. Select Document from the menu options on the left.
  2. A new menu will pop open. Select Open from full document reference.
  3. Type a URL, like, say, https://adactio.com
  4. Press that lovely chunky Open button.

You are now surfing the web through a decades-old interface. Double click on a link to open it. You’ll notice that it opens in a new window. You’ll also notice that there’s no way of seeing the current URL. Back then, the idea was that you would navigate primarily by clicking on links, creating your own “associative trails”, as first envisioned by Vannevar Bush.

But the WorldWideWeb application wasn’t just a browser. It was a Hypermedia Browser/Editor.

  1. From that Document menu you opened, select New file…
  2. Type the name of your file; something like test.html
  3. Start editing the heading and the text.
  4. In the main WorldWideWeb menu, select Links.
  5. Now focus the window with the document you opened earlier (adactio.com).
  6. With that window’s title bar in focus, choose Mark all from the Links menu.
  7. Go back to your test.html document, and highlight a piece of text.
  8. With that text highlighted, click on Link to marked from the Links menu.

If you want, you can even save the hypertext document you created. Under the Document menu there’s an option to Save a copy offline (this is the one place where the wording of the menu item isn’t exactly what was in the original WorldWideWeb application). Save the file so you can open it up in a text editor and see what the markup would’ve looked like.

I don’t know about you, but I find this utterly immersive and fascinating. Imagine what it must’ve been like to browse, create, and edit like this. Hypertext existed before the web, but it was confined to your local hard drive. Here, for the first time, you could create links across networks!

After five days time-travelling back thirty years, I have a new-found appreciation for what Tim Berners-Lee created. But equally, I’m in awe of what my friends created thirty years later.

Remy did all the JavaScript for the recreated browser …in just five days!

Kimberly was absolutely amazing, diving deep into the original source code of the application on the NeXT machine we borrowed. She uncovered some real gems.

Of course Mark wanted to make sure the font was as accurate as possible. He and Brian went down quite a rabbit hole, and with remote help from David Jonathan Ross, they ended up recreating entire families of fonts.

John exhaustively documented UI patterns that Angela turned into marvellous HTML and CSS.

Through it all, Craig and Martin put together the accompanying website. Personally, I think the website is freaking awesome—it’s packed with fascinating information! Check out the family tree of browsers that Craig made.

What a team!

Back at CERN

We got the band back together.

In September of 2013, I had the great pleasure and privilege of going to CERN with a bunch of very smart people. I’m not sure how I managed to slip by. We were there to recreate the experience of using the line-mode browser. As I wrote at the time:

Just to be clear, the line-mode browser wasn’t the world’s first web browser. That honour goes to Tim Berners-Lee’s WorldWideWeb programme. But whereas WorldWideWeb only ran on NeXT machines, the line-mode browser worked cross-platform and was, therefore, instrumental in demonstrating the power of the web as a universally-accessible medium.

In the run-up to the 30th anniversary of the original (vague but exciting) proposal for what would become the World Wide Web, we’ve been invited back to try to recreate the experience of using that first web browser, the one that only ever ran on NeXT machines.

I missed the first day due to travel madness—flying back from Interaction 19 in Seattle during snowmageddon to Heathrow and then to Geneva—but by the time I arrived, my hackmates had already made a great start in identifying the objectives:

  1. Give people an understanding of the user experience of the WorldWideWeb browser.
  2. Demonstrate that a read/write philosophy was there from the beginning.
  3. Give context—what was going on at the time?

That second point is crucial. WorldWideWeb wasn’t just a web browser; it was a browser/editor. That’s by far the biggest change in terms of the original vision of the web and what we ended up getting from Mosaic onwards.

Remy is working hard on the first point. He documented the first day and now on the second day, he’s made enormous progress already.

I’m focusing on point number three. I want to show the historical context for the World Wide Web. Here’s my plan…

Seeing as we’re coming up on the thirtieth anniversary, I thought it would be interesting to take the year of the proposal (1989) and look back in a time cone of thirty years previous to that at the influences on Tim Berners-Lee. I also want to look at what has happened with the web in the thirty years since the proposal. So the date of the proposal will be a centre point, with the timespan of 1959-1989 converging on it from the past, and the timespan of 1989-2019 diverging from it into the future. I hope it could make for a nice visualisation. Maybe I could try to get it to look like data from a particle collision.

We’re here till the weekend and everyone else has already made tremendous progress. Kimberly has been hacking the Gibson …well, that’s what it looked like when she was deep in the code of the NeXT machine we’ve borrowed from Musée Bolo (merci beaucoup!).

We took a little time out for a tour of the data centre. Oh, and at lunch time, we sat with Robert Cailliau and grilled him with questions about the birth of the web. Quite a day!

Now it’s time for me to hit the hay and prepare for another day of hacking in this extraordinary place.

Ch-ch-ch-changes

It’s browser updatin’ time! Firefox 65 just dropped. So did Chrome 72. Safari 12.1 is shipping with iOS 12.2.

It’s interesting to compare the release notes for each browser and see the different priorities reflected in them (this is another reason why browser diversity is A Good Thing).

A lot of the Firefox changes are updates to dev tools; they just keep getting better and better. In fact, I’m not sure “dev tools” is the right word for them. With their focus on layout, typography, and accessibility, “design tools” might be a better term.

Oh, and Firefox is shipping support for some CSS properties that really help with print style sheets, so I’m disproportionately pleased about that.

In Safari’s changes, I’m pleased to see that the datalist element is finally getting implemented. I’ve been a fan of that element for many years now. (Am I a dork for having favourite HTML elements? Or am I a dork for even having to ask that question?)

And, of course, it wouldn’t be a Safari release without a new made up meta tag. From the people who brought you such hits as viewport and apple-mobile-web-app-capable, comes …supported-color-schemes (Apple likes to make up meta tags almost as much as Google likes to make up rel values).

There’ll be a whole bunch of improvements in how progressive web apps will behave once they’ve been added to the home screen. We’ll finally get some state persistence if you navigate away from the window!

Updated the behavior of websites saved to the home screen on iOS to pause in the background instead of relaunching each time.

Maximiliano Firtman has a detailed list of the good, the bad, and the “not sure yet if good” for progressive web apps on iOS 12.2 beta. Thomas Steiner has also written up the progress of progressive web apps in iOS 12.2 beta. Both are published on Ev’s blog.

At first glance, the release notes for Chrome 72 are somewhat paltry. The big news doesn’t even seem to be listed there. Maximiliano Firtman again:

Chrome 72 for Android shipped the long-awaited Trusted Web Activity feature, which means we can now distribute PWAs in the Google Play Store!

Very interesting indeed! I’m not sure if I’m ready to face the Kafkaesque process of trying to add something to the Google Play Store just yet, but it’s great to know that I can. Combined with the improvements coming in iOS 12.2, these are exciting times for progressive web apps!

2018 in numbers

I posted to adactio.com 1,387 times in 2018.

In my blog posts, the top tags were:

  1. frontend and development (42 posts)
  2. serviceworkers (27 posts)
  3. design (20 posts)
  4. writing and publishing (19 posts)
  5. javascript (18 posts)

In my links, the top tags were:

  1. development (305 links)
  2. frontend (289 links)
  3. design (178 links)
  4. css (110 links)
  5. javascript (106 links)

But these are just numbers. To get some real end-of-year thoughts, read posts by Remy, Andy, Ana, or Bill Gates.

Words I wrote in 2018

I wrote just shy of a hundred blog posts in 2018. That’s an increase from 2017. I’m happy about that.

Here are some posts that turned out okay…

A lot of my writing in 2018 was on technical topics—front-end development, service workers, and so on—but I should really make more of an effort to write about a wider range of topics. I always like when Zeldman writes about his glamorous life. Maybe in 2019 I’ll spend more time letting you know what I had for lunch.

I really enjoy writing words on this website. If I go too long between blog posts, I start to feel antsy. The only relief is to move my fingers up and down on the keyboard and publish something. Sounds like a bit of an addiction, doesn’t it? Well, as habits go, this is probably one of my healthier ones.

Thanks for reading my words in 2018. I didn’t write them for you—I wrote them for me—but it’s always nice when they resonate with others. I’ll keep on writing my brains out in 2019.

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.
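In service worker terms, that final step is pleasingly small. Here’s a rough sketch (assuming the push payload is JSON with a url property; the cache name is illustrative):

addEventListener('push', event => {
  const data = event.data.json(); // e.g. {"url": "https://example.com/new-article"}
  event.waitUntil(
    caches.open('pages')
      .then(cache => cache.add(data.url)) // fetch and cache the new URL; no notification
  );
});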

It worked.

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.
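You can see that bundling in the Push API itself. When a site subscribes, it has to promise that messages will be user-visible (vapidPublicKey here stands in for a site’s real application server key):

navigator.serviceWorker.ready.then(registration => {
  return registration.pushManager.subscribe({
    userVisibleOnly: true, // Chrome requires this to be true; there's no "silent" option
    applicationServerKey: vapidPublicKey
  });
});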

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with content you never asked for. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.
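Going by the shape of that proposal (which could well change before anything ships), registering for periodic synchronisation would look something like this sketch:

navigator.serviceWorker.ready.then(registration => {
  return registration.periodicSync.register('check-for-new-content', {
    minInterval: 24 * 60 * 60 * 1000 // poll at most once a day
  });
});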

Webmentions at Indie Web Camp Berlin

I was in Berlin for most of last week, and every day was packed with activity:

By the time I got back to Brighton, my brain was full …just in time for FF Conf.

All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.

Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.

What I like to do on the second day is try to complete two tasks; one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.

In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).

Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.

The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
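Boiled down to a sketch (this isn’t my production code, and removeStoredWebmentions is a stand-in for the actual storage layer), the re-verification logic goes something like this:

async function reverifyWebmention(source, target) {
  const response = await fetch(source);
  if (response.status === 410) {
    // Gone for good: remove any stored webmentions from this source
    removeStoredWebmentions(source, target);
    return;
  }
  if (!response.ok) {
    return; // a 404 might be temporary, so leave stored data alone
  }
  const html = await response.text();
  if (!html.includes(target)) {
    // the page still exists but no longer links here
    removeStoredWebmentions(source, target);
  }
}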

Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”

For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.

I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.

As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.

Still, I wanted to foil this particular situation so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.

Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.

We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.

Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:

  1. By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
  2. If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.

That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.

Just sayin’.

Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:

  • Does it include a link to a licence?
  • Is there a restrictive robots.txt file?
  • Are there meta declarations that say noindex?

Each one of these could help me infer whether or not I should publish a webmention. I quickly realised that what we’re talking about here is an algorithm.
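As a straw-man sketch (emphatically not what’s running on my site), those signals might combine like this:

async function shouldDisplayWebmention(source, html) {
  // Are there meta declarations that say noindex?
  if (/<meta[^>]+noindex/i.test(html)) {
    return false;
  }
  // Is there a restrictive robots.txt file? (A deliberately crude check.)
  const robots = await fetch(new URL('/robots.txt', source));
  if (robots.ok && /^Disallow: \/$/m.test(await robots.text())) {
    return false;
  }
  // Does it include a link to a licence?
  return /rel=["']?licen[cs]e/i.test(html);
}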

Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.

Several people are writing

Anne Gibson writes:

It sounds easy to make writing a habit, but like every other habit that doesn’t involve addictive substances like nicotine or dopamine it’s hard to start and easy to quit.

Alice Bartlett writes:

Anyway, here we are, on my blog, or in your RSS reader. I think I’ll do weaknotes. Some collections of notes. Sometimes. Not very well written probably. Generally written with the urgency of someone who is waiting for a baby to wake up.

Patrick Rhone writes:

Bottom line; please place any idea worth more than 280 characters and the value Twitter places on them (which is zero) on a blog that you own and/or can easily take your important/valuable/life-changing ideas with you and make them easy for others to read and share.

Sara Soueidan writes:

What you write might help someone understand a concept that you may think has been covered enough before. We each have our own unique perspectives and writing styles. One writing style might be more approachable to some, and can therefore help and benefit a large (or even small) number of people in ways you might not expect.

Just write.

Even if only one person learns something from your article, you’ll feel great, and that you’ve contributed — even if just a little bit — to this amazing community that we’re all constantly learning from. And if no one reads your article, then that’s also okay. That voice telling you that people are just sitting somewhere watching our every step and judging us based on the popularity of our writing is a big fat pathetic attention-needing liar.

Laura Kalbag writes:

The web can be used to find common connections with folks you find interesting, and who don’t make you feel like so much of a weirdo. It’d be nice to be able to do this in a safe space that is not being surveilled.

Owning your own content, and publishing to a space you own can break through some of these barriers. Sharing your own weird scraps on your own site makes you easier to find by like-minded folks.

Brendan Dawes writes:

At times I think “will anyone read this, does anyone care?”, but I always publish it anyway — and that’s for two reasons. First it’s a place for me to find stuff I may have forgotten how to do. Secondly, whilst some of this stuff is seemingly super-niche, if one person finds it helpful out there on the web, then that’s good enough for me. After all I’ve lost count of how many times I’ve read similar posts that have helped me out.

Robin Rendle writes:

My advice after learning from so many helpful people this weekend is this: if you’re thinking of writing something that explains a weird thing you struggled with on the Internet, do it! Don’t worry about the views and likes and Internet hugs. If you’ve struggled with figuring out this thing then be sure to jot it down, even if it’s unedited and it uses too many commas and you don’t like the tone of it.

Khoi Vinh writes:

Maybe you feel more comfortable writing in short, concise bullets than at protracted, grandiose length. Or maybe you feel more at ease with sarcasm and dry wit than with sober, exhaustive argumentation. Or perhaps you prefer to knock out a solitary first draft and never look back rather than polishing and tweaking endlessly. Whatever the approach, if you can do the work to find a genuine passion for writing, what a powerful tool you’ll have.

Amber Wilson writes:

I want to finally begin writing about psychology. A friend of mine shared his opinion that writing about this is probably best left to experts. I tried to tell him I think that people should write about whatever they want. He argued that whatever he could write about psychology has probably already been written about a thousand times. I told him that I’m going to be writer number 1001, and I’m going to write something great that nobody has written before.

Austin Kleon writes:

Maybe I’m weird, but it just feels good. It feels good to reclaim my turf. It feels good to have a spot to think out loud in public where people aren’t spitting and shitting all over the place.

Tim Kadlec writes:

I write to understand and remember. Sometimes that will be interesting to others, often it won’t be.

But it’s going to happen. Here, on my own site.

You write…

The top four web performance challenges

Danielle and I have been doing some front-end consultancy for a local client recently.

We’ve both been enjoying it a lot—it’s exhausting but rewarding work. So if you’d like us to come in and spend a few days with your company’s dev team, please get in touch.

I’ve certainly enjoyed the opportunity to watch Danielle in action, leading a workshop on refactoring React components in a pattern library. She’s incredibly knowledgeable in that area.

I’m clueless when it comes to React, but I really enjoy getting down to the nitty-gritty of browser features—HTML, CSS, and JavaScript APIs. Our skillsets complement one another nicely.

This recent work was what prompted my thoughts around the principles of robustness and least power. We spent a day evaluating a continuum of related front-end concerns: semantics, accessibility, performance, and SEO.

When it came to performance, a lot of the work was around figuring out the most suitable metric to prioritise:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, or
  • time to first meaningful interaction.

And that doesn’t even cover the more easily-measurable numbers like:

  • overall file size,
  • number of requests, or
  • PageSpeed Insights score.

One outcome was to realise that there’s a tendency (in performance, accessibility, or SEO) to focus on what’s easily measurable, not because it’s necessarily what matters, but precisely because it is easy to measure.

Then we got down to some nuts’n’bolts technology decisions. I took a step back and looked at the state of performance across the web. I thought it would be fun to rank the most troublesome technologies in order of tricksiness. I came up with a top four list.

Here we go, counting down from four to the number one spot…

4. Web fonts

Coming in at number four, it’s web fonts. Sometimes it’s the combined weight of multiple font files that’s the problem, but more often than not, it’s the perceived performance that suffers (mostly because of when the web fonts appear).

Fortunately there’s a straightforward question to ask in this situation: WWZD—What Would Zach Do?
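One of the simplest mitigations, and one Zach has written plenty about, is the font-display descriptor: show the fallback text straight away and swap the web font in whenever it arrives. The font name and file here are placeholders:

@font-face {
  font-family: "Example";
  src: url("/fonts/example.woff2") format("woff2");
  font-display: swap; /* fallback text first, web font when ready */
}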

3. Images

At the number three spot, it’s images. There are more of them and they just seem to be getting bigger all the time. And yet, we have more tools at our disposal than ever—better file formats, and excellent browser support for responsive images. Heck, we’re even getting the ability to lazy load images in HTML now.
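Those tools combine quite nicely. A quick illustrative example (file names and breakpoints made up):

<img src="small.jpg"
  srcset="small.jpg 600w, large.jpg 1200w"
  sizes="(min-width: 40em) 50vw, 100vw"
  loading="lazy"
  alt="A description of the image">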

So, as with web fonts, it feels like the impact of images on performance can be handled, as long as you give them some time and attention.

2. Our JavaScript

Just missing out on making the top spot is the JavaScript that we send down the pipe to our long-suffering users. There’s nothing wrong with the code itself—I’m sure it’s very good. There’s just too damn much of it. And that’s a real performance bottleneck, especially on mobile.

So stop sending so much JavaScript—a solution as simple as Monty Python’s instructions for playing the flute.

1. Other people’s JavaScript

At number one with a bullet, it’s all the crap that someone else tells us to put on our websites. Analytics. Ads. Trackers. Beacons. “It’s just one little script”, they say. And then that one little script calls in another, and another, and another.

It’s so disheartening when you’ve devoted your time and energy to your web font loading strategy, and optimising your images, and unbundling your JavaScript …only to have someone else’s JavaScript just shit all over your nice performance budget.

Here’s the really annoying thing: when I go to performance conferences, or participate in performance discussions, you know who’s nowhere to be found? The people making those third-party scripts.

The narrative around front-end performance is that it’s up to us developers to take responsibility for how our websites perform. But by far the biggest performance impact comes from third-party scripts.

There is a solution to this, but it’s not a technical one. We could refuse to add overweight (and in many cases, unethical) third-party scripts to the sites we build.

I have many, many issues with Google’s AMP project, but I completely acknowledge that it solves a political problem:

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web.

But how can we take that lesson from AMP and apply it to all our web pages? If we simply refuse to be the one to add those third-party scripts, we get fired, and somebody else comes in who is willing to poison web pages with third-party scripts. There’s nothing to stop companies doing that.

Unless…

Suppose we were to all make a pact that we would stand in solidarity with any of our fellow developers in that sort of situation. A sort of joining-together. A union, if you will.

There is power in a factory, power in the land, power in the hands of the worker, but it all amounts to nothing if together we don’t stand.

There is power in a union.

Flexibility

Over on A List Apart, you can read the first chapter from Tim’s new book, Flexible Typesetting.

I was lucky enough to get an advance preview copy and this book is ticking all my boxes. I mean, I knew I would love all the type nerdery in the book, but there’s a bigger picture too. In chapter two, Tim makes this provocative statement:

Typography is now optional. That means it’s okay for people to opt out.

That’s an uncomfortable truth for designers and developers, but it gets to the heart of what makes the web so great:

Of course typography is valuable. Typography may now be optional, but that doesn’t mean it’s worthless. Typographic choices contribute to a text’s meaning. But on the web, text itself (including its structural markup) matters most, and presentational instructions like typography take a back seat. Text loads first; typography comes later. Readers are free to ignore typographic suggestions, and often prefer to. Services like Instapaper, Pocket, and Safari’s Reader View are popular partly because readers like their text the way they like it.

What Tim describes there isn’t a cause for frustration or despair—it’s a cause for celebration. When we try to treat the web as a fixed medium where we can dictate the terms that people must abide by, we’re doing them (and the web) a disservice. Instead of treating web design as a pre-made contract drawn up by the designer and presented to the user as a fait accompli, it is more materially honest to treat web design as a conversation between designer and user. Both parties should have a say.

Or as Tim so perfectly puts it in Flexible Typesetting:

Readers are typographers, too.

Greater expectations

I got an intriguing email recently from someone who’s a member of The Session, the community website about Irish traditional music that I run. They said:

When I recently joined, I used my tablet to join. Somewhere I was able to download The Session app onto my tablet.

But there is no native app for The Session. Although, as it’s a site that I built, it is, of course, a progressive web app.

They went on to say:

I wanted to put the app on my phone but I can’t find the app to download it. Can I have the app on more than one device? If so, where is it available?

I replied saying that yes, you can absolutely have it on more than one device:

But you don’t find The Session app in the app store. Instead you go to the website https://thesession.org and then add it to your home screen from your browser.

My guess is that this person had added The Session to the home screen of their Android tablet, probably following the “add to home screen” prompt. I recently added some code to use the window.beforeinstallprompt event so that the “add to home screen” prompt would only be shown to visitors who sign up or log in to The Session—a good indicator of engagement, I reckon, and it should reduce the chance of the prompt being dismissed out of hand.
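The code for that is mercifully short. Something like this sketch (with isLoggedIn standing in for however you detect an engaged visitor):

window.addEventListener('beforeinstallprompt', event => {
  event.preventDefault(); // suppress the automatic prompt
  if (isLoggedIn) {
    event.prompt(); // only visitors who sign up or log in get asked
  }
});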

So this person added The Session to their home screen—probably as a result of being prompted—and then used it just like any other app. At some point, they didn’t even remember how the app got installed:

Success! I did it. Thanks. My problem was I was looking for an app to download.

On the one hand, this is kind of great: here’s an example where, in the user’s mind, there’s literally no difference between the experience of using a progressive web app and using a native app. Win!

But on the other hand, the expectation is still that apps are to be found in an app store, not on the web. This expectation is something I wrote about recently (and Justin wrote a response to that post). I finished by saying:

Perhaps the inertia we think we’re battling against isn’t such a problem as long as we give people a fast, reliable, engaging experience.

When this member of The Session said “My problem was I was looking for an app to download”, I responded by saying:

Well, I take that as a compliment—the fact that once the site is added to your home screen, it feels just like a native app. :-)

And they said:

Yes, it does!

Altering expectations

Luke has written up the selection process he went through when Clearleft was designing the Virgin Holidays app. When it comes to deploying on mobile, there were three options:

  1. Native apps
  2. A progressive web app
  3. A hybrid app

The Virgin Holidays team went with that third option.

Now, it will come as no surprise that I’m a big fan of the second option: building a progressive web app (or turning an existing site into a progressive web app). I think a progressive web app is a great solution for travel apps, and the use-case that Luke describes sounds perfect:

Easy access to resort staff and holiday details that could be viewed offline to help as many customers as possible travel without stress and enjoy a fantastic holiday

Luke explains why they chose not to go with a progressive web app:

The current level of support and leap in understanding meant we’d risk alienating many of our customers.

The issue of support is one that is largely fixed at this point. When Clearleft was working on the Virgin Holidays app, service workers hadn’t landed in iOS. Hence, the risk of alienating a lot of customers. But now that Mobile Safari has offline capabilities, that’s no longer a problem.

But it’s the second reason that’s trickier:

Simply put, customers already expected to find us in the App Store and are familiar with what apps can historically offer over websites.

I think this is the biggest challenge facing progressive web apps: battling expectations.

For over a decade, people have formed ideas about what to expect from the web and what to expect from native. From a technical perspective, native and web have become closer and closer in capabilities. But people’s expectations move slower than technological changes.

First of all, there’s the whole issue of discovery: will people understand that they can “install” a website and expect it to behave exactly like a native app? This is where install prompts and ambient badging come in. I think ambient badging is the way to go, but it’s still a tricky concept to explain to people.

But there’s another way of looking at the current situation. Instead of seeing people’s expectations as a negative factor, maybe it’s an opportunity. There’s an opportunity right now for companies to be as groundbreaking and trendsetting as Wired.com when it switched to CSS for layout, or The Boston Globe when it launched its responsive site.

It makes for a great story. Just look at the Pinterest progressive web app for an example (skip to the end to get to the numbers):

Weekly active users on mobile web have increased 103 percent year-over-year overall, with a 156 percent increase in Brazil and 312 percent increase in India. On the engagement side, session length increased by 296 percent, the number of Pins seen increased by 401 percent and people were 295 percent more likely to save a Pin to a board. Those are amazing in and of themselves, but the growth front is where things really shined. Logins increased by 370 percent and new signups increased by 843 percent year-over-year. Since we shipped the new experience, mobile web has become the top platform for new signups. And for fun, in less than 6 months since fully shipping, we already have 800 thousand weekly users using our PWA like a native app (from their homescreen).

Now admittedly their previous mobile web experience was a dreadful doorslam, but still, those are some amazing statistics!

Maybe we’re underestimating the malleability of people’s expectations when it comes to the web on mobile. Perhaps the inertia we think we’re battling against isn’t such a problem as long as we give people a fast, reliable, engaging experience.

If you build that, they will come.

Links, tags, and feeds

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on certain sites with strict Content Security Policies (and CSPs shouldn’t affect bookmarklets). I might have to switch back to Chrome because of this.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.
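The whole bookmarklet amounts to little more than this (wrapped onto multiple lines for legibility; the posting URL is illustrative):

javascript:(function () {
  // open the posting interface pre-populated with the current page's details
  window.open(
    'https://example.com/links/new' +
    '?url=' + encodeURIComponent(document.location.href) +
    '&title=' + encodeURIComponent(document.title)
  );
})();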

If you’re reading this, then you’re familiar with the “journal” section of adactio.com, but the “links” section is where I post the most. Here, for example, are all the links I posted yesterday. It varies from day to day, but there’s generally a handful.

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. This is because all my links are “notes to future self”, and all my future self has to do is ask “what would past me have tagged that link with?” when I’m trying to find something I previously linked to. I end up using my site’s URLs as an interface:

At the front-end gatherings at Clearleft, I usually wrap up with a quick tour of whatever I’ve added that week to:

Well, each one of those tags also has a corresponding RSS feed:

…and so on.

That means you can subscribe to just the links tagged with something you’re interested in. Here’s the full list of tags if you’re interested in seeing the inside of my head.

This also works for my journal entries. If you’re only interested in my blog posts about frontend development, you might want to subscribe to:

Here are all the tags from my journal.

You can even mix them up. For everything I’ve tagged with “typography”—whether it’s links, journal entries, or articles—the URL is:

The corresponding RSS feed is:

You get the idea. Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see.

Meanwhile, I’ll be linking, linking, linking…

Twitter and Instagram progressive web apps

Since support for service workers landed in Mobile Safari on iOS, I’ve been trying a little experiment. Can I replace some of the native apps I use with progressive web apps?

The two major candidates are Twitter and Instagram. I added them to my home screen, and banished the native apps off to a separate screen. I’ve been using both progressive web apps for a few months now, and I have to say, they’re pretty darn great.

There are a few limitations compared to the native apps. On Twitter, if you follow a link from a tweet, it pops open in Safari, which is fine, but when you return to Twitter, it loads anew. This isn’t any fault of Twitter—this is the way that web apps have worked on iOS ever since they introduced their weird web-app-capable meta element. I hope this behaviour will be fixed in a future update.

Also, until we get web notifications on iOS, I need to keep the Twitter native app around if I want to be notified of a direct message (the only notification I allow).

Apart from those two little issues though, Twitter Lite is on par with the native app.

Instagram is also pretty great. It too suffers from some navigation issues. If I click through to someone’s profile and then return to the main feed, the feed loads anew, losing my place. It would be great if this could be fixed.

For some reason, the Instagram web app doesn’t allow uploading multiple photos …which is weird, because I can upload multiple photos on my own site by adding the multiple attribute to the input type="file" in my posting interface.
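That really is all it takes:

<input type="file" multiple>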

Apart from that, though, it works great. And as I never wanted notifications from Instagram anyway, the lack of web notifications doesn’t bother me at all. In fact, because the progressive web app doesn’t keep nagging me about enabling notifications, it’s a more pleasant experience overall.

Something else that was really annoying with the native app was the preponderance of advertisements. It was really getting out of hand.

Well …(looks around to make sure no one is listening)… don’t tell anyone, but the Instagram progressive web app—i.e. the website—doesn’t have any ads at all!

Here’s hoping it stays that way.

Speaking my brains in Boston

I was in Boston last week to give a talk. I ended up giving four.

I was there for An Event Apart which was, as always, excellent. I opened up day two with my talk, The Way Of The Web.

This was my second time giving this talk at An Event Apart—the first time was in Seattle a few months back. It was also my last time giving this talk at An Event Apart—I shan’t be speaking at any of the other AEAs this year, alas. The talk wasn’t recorded either so I’m afraid you kind of had to be there (unless you know of another conference that might like to have me give that talk, in which case, hit me up).

After giving my talk in the morning, I wasn’t quite done. I was on a panel discussion with Rachel about CSS grid. It turned out to be a pretty good format: have one person who’s a complete authority on a topic (Rachel), and another person who’s barely starting out and knows just enough to be dangerous (me). I really enjoyed it, and the questions from the audience prompted some ideas to form in my head that I should really note down in a blog post before they evaporate.

The next day, I went over to MIT to speak at Design 4 Drupal. So, y’know, technically I’ve lectured at MIT now.

I wasn’t going to do the same talk as I gave at An Event Apart, obviously. Instead, I reprised the talk I gave earlier this year at Webstock: Taking Back The Web. I thought it was fitting given how much Drupal’s glorious leader, Dries, has been thinking about, writing about, and building with the indie web.

I really enjoyed giving this talk. The audience were great, and they had lots of good questions afterwards. There’s a video, which is basically my voice dubbed over the slides, followed by a good half-hour of questions.

When I was done there, after a brief excursion to the MIT bookstore, I went back across the river from Cambridge to Boston just in time for that evening’s Boston CSS meetup.

Lea had been in touch to ask if I would speak at this meet-up, and I was only too happy to oblige. I tried doing something I’ve never done before: a book reading!

No, not reading from Going Offline, my current book which I should be encouraging people to buy. Instead I read from Resilient Web Design, the free online book that people literally couldn’t buy if they wanted to. But I figured reading the philosophical ramblings in Resilient Web Design would go over better than trying to do an oral version of the service worker code in Going Offline.

I read from chapters two (Materials), three (Visions), and five (Layers) and I really, really liked doing it! People seemed to enjoy it too—we had questions throughout.

And with that, my time in Boston was at an end. I was up at the crack of dawn the next morning to get the plane back to England where Ampersand awaited. I wasn’t speaking there though. I thoroughly enjoyed being an attendee and absorbing the knowledge bombs from the brilliant speakers that Rich assembled.

The next place I’m speaking will be much closer to home than Boston: I’ll be giving a short talk at Oxford Geek Nights on Wednesday. Come on by if you’re in the neighbourhood.

Clearleft.com is a progressive web app

What’s that old saying? The cobbler’s children have no shoes that work offline. Or something.

It’s been over a year since the Clearleft site relaunched and I listed some of the next steps I had planned:

Service worker. It’s a no-brainer. Now that the Clearleft site is (finally!) running on HTTPS, having a simple service worker to cache static assets like CSS, JavaScript and some images seems like the obvious next step.

You know how it is. Those no-brainer tasks are exactly the kind of thing that end up on a to-do list without ever quite getting to-done. Meanwhile I’ve been writing and speaking about how any website can be a progressive web app. I think Alanis Morissette used to sing about this sort of situation.

Enough is enough! Clearleft.com is now a progressive web app. It has a manifest file and a service worker script.

The service worker logic is fairly straightforward, and taken almost verbatim from Going Offline. As you navigate around the site, the service worker applies different logic depending on the kind of file you’re requesting:

  • Pages are served fresh from the network, falling back to the cache when there’s a problem.
  • Everything else is served from the cache where possible, resorting to the network only if there’s no match in the cache—quite the performance boost!

In both cases, if a page or a file is retrieved from the network, it gets put into a cache. I’ve got one cache for pages, and another for everything else. And even if a file is retrieved from that cache, I still fire off a fetch request to grab a fresh copy for the cache. So while there’s a chance that a stale file might be served up, it will only ever be slightly stale, and the next time it’s requested, it’ll be fresh.
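Condensed into a sketch (cache names and the offline fallback URL are illustrative), the fetch handler looks something like this:

addEventListener('fetch', event => {
  const request = event.request;
  if (request.headers.get('Accept').includes('text/html')) {
    // Pages: fresh from the network, falling back to the cache, then the offline page
    event.respondWith(
      fetch(request)
        .then(response => stash('pages', request, response))
        .catch(() => caches.match(request)
          .then(cached => cached || caches.match('/offline')))
    );
  } else {
    // Everything else: straight from the cache, with a fresh copy fetched in the background
    event.respondWith(
      caches.match(request).then(cached => {
        const fresh = fetch(request)
          .then(response => stash('files', request, response));
        return cached || fresh;
      })
    );
  }
});

// Put a copy of a successful response into the named cache, then pass the response along
function stash(cacheName, request, response) {
  if (response.ok) {
    const copy = response.clone();
    caches.open(cacheName).then(cache => cache.put(request, copy));
  }
  return response;
}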

In the worst-case scenario, when a page can’t be retrieved from the network or the cache, you end up seeing a custom offline page. There you can see a list of any pages that are cached (meaning you can revisit them even without an internet connection).

A custom offline page showing a list of URLs.
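Generating that list is just a matter of asking the cache what it’s holding. Something like this sketch (assuming a cache named pages and a ul waiting in the page):

caches.open('pages')
  .then(cache => cache.keys())
  .then(requests => {
    const list = document.querySelector('ul');
    requests.forEach(request => {
      const item = document.createElement('li');
      const link = document.createElement('a');
      link.href = request.url;
      link.textContent = request.url; // the cache only knows URLs
      item.appendChild(link);
      list.appendChild(item);
    });
  });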

It’s not ideal—page titles would be friendlier than URLs—but it’s a start. I’m sure I’ll revisit it soon. Honest.

Oh, and after a year of procrastinating about doing this, guess how long it took? About half a day. Admittedly, this isn’t my first progressive web app, and the more you build ‘em, the easier it gets. Still, it’s a classic example of a small investment of time leading to a big improvement in performance and user experience.

If you think your company’s website could benefit from being a progressive web app (and believe me, it definitely could), you have a couple of options:

  1. Arm yourself with a copy of Going Offline and give it a go yourself. Or
  2. Get in touch with Clearleft. We can help you. (See, I can say that with a straight face now that we’re practising what we preach.)

Either way, don’t dilly dally …like I did.

Document

A little while back, I showed Paul what I was working on with The Gęsiówka Story. I value his opinion and I really like the Bradshaw’s Guide project that he’s been working on. We’re both in complete agreement with Russell Davies’ call for an internet of unmonetisable enthusiasms. Call them side projects if you like, but for me, these are the things that the World Wide Web excels at.

These unmonetisable enthusiasms/side projects are what got me hooked on the web in the first place. Fray.com—back when it was a website for personal stories—was what really made the web click for me. I had seen brochure sites, I had seen e-commerce sites, but it was seeing something built purely for the love of it that caused that lightbulb moment for me.

I told Paul about another site I remembered from that time (we’re talking about the mid-to-late nineties here). It was called Private Art. It was the work of one family, the children of Private Art Pranger who served in World War Two and wrote letters from the front. Without any expectations, I did a quick search, and amazingly, the site is still up!

Yes, it’s got tiled background images, and the frameset content opens in a pop-up window, but it works. The site hasn’t been updated for fifteen years but it works perfectly in a web browser today. That’s kind of amazing. We really shouldn’t take the longevity of our materials for granted. Could you imagine trying to open a word-processing document from the late nineties on your computer today? You’d have a bad time.

Working on The Gęsiówka Story helped to remind me of some of the things that made me fall in love with the web in the first place. What I wrote about it is equally true of Private Art:

When we talk about documents on the web, we usually use the word “document” as a noun. But working on The Gęsiówka Story, I came to think of the word “document” as a verb.

The World Wide Web is a medium that works for quick, short-term, lightweight bits of fun, and also for long-term, deeper, slower, thoughtful archives of our collective culture.

The web is a many-splendoured thing.

Frustration

I had some problems with my bouzouki recently. Now, I know my bouzouki pretty well. I can navigate the strings and frets to make music. But this was a problem with the pickup under the saddle of the bouzouki’s bridge. So it wasn’t so much a musical problem as it was an electronics problem. I know nothing about electronics.

I found it incredibly frustrating. Not only did I have no idea how to fix the problem, but I also had no idea of the scope of the problem. Would it take five minutes or five days? Who knows? Not me.

My solution to a problem like this is to pay someone else to fix it. Even then I have to go through the process of having the problem explained to me by someone who understands and cares about electronics much more than me. I nod my head and try my best to look like I’m taking it all in, even though the truth is I have no particular desire to get to grips with the inner workings of pickups—I just want to make some music.

That feeling of frustration I get from having wiring issues with a musical instrument is the same feeling I get whenever something goes awry with my web server. I know just enough about servers to be dangerous. When something goes wrong, I feel very out of my depth, and again, I have no idea how long it will take to fix the problem: minutes, hours, days, or weeks.

I had a very bad day yesterday. I wanted to make a small change to the Clearleft website—one extra line of CSS. But the build process for the website is quite convoluted (and clever), automatically pulling in components from the site’s pattern library. Something somewhere in the pipeline went wrong—I still haven’t figured out what—and for a while there, the Clearleft website was down, thanks to me. (Luckily for me, Danielle saved the day …again. I’d be lost without her.)

I was feeling pretty down after that stressful day. I felt like an idiot for not knowing or understanding the wiring beneath the site.

But, on the other hand, considering I was only trying to edit a little bit of CSS, maybe the problem didn’t lie entirely with me.

There’s a principle underlying the architecture of the World Wide Web called The Rule of Least Power. It somewhat counterintuitively states that you should:

choose the least powerful language suitable for a given purpose.

Perhaps, given the relative simplicity of the task I was trying to accomplish, the plumbing was over-engineered. That complexity wouldn’t matter if I could circumvent it, but without the build process, there’s no way to change the markup, CSS, or JavaScript for the site.

Still, most of the time, the build process isn’t a hindrance, it’s a help: concatenation, minification, linting and all that good stuff. Most of my frustration when something in the wiring goes wrong is because of how it makes me feel …just like with the pickup in my bouzouki, or the server powering my website. It’s not just that I find this stuff hard, but that I also feel like it’s stuff I’m supposed to know, rather than stuff I want to know.

On that note…

Last week, Paul wrote about getting to grips with JavaScript. On the very same day, Brad wrote about his struggle to learn React.

I think it’s really, really, really great when people share their frustrations and struggles like this. It’s very reassuring for anyone else out there who’s feeling similarly frustrated and worried that the problem lies with them. Also, this kind of confessional feedback is absolute gold dust for anyone looking to write explanations or documentation for JavaScript or React while battling the curse of knowledge. As Paul says:

The challenge now is to remember the pain and anguish I endured, and bare that in mind when helping others find their own path through the knotted weeds of JavaScript.

HTTPS + service worker + web app manifest = progressive web app

I gave a quick talk at the Delta V conference in London last week called Any Site can be a Progressive Web App. I had ten minutes, but frankly I only needed enough time to say the title of the talk because, well, that was also the message.

There’s a common misconception that making a Progressive Web App means creating a Single Page App with an app-shell architecture. But the truth is that literally any website can benefit from the performance boost that results from the combination of HTTPS + Service Worker + Web App Manifest.

See how I define a progressive web app as being HTTPS + service worker + web app manifest? I’ve been doing that for a while. Here’s a post from last year called Progressing the web:

Literally any website can be a progressive web app:

  • switch over to HTTPS,
  • add a web app manifest, and
  • add a service worker.

That last step can be tricky if you’re new to service workers, but it’s not insurmountable. It’s certainly a lot easier than completely rearchitecting your existing website to be a JavaScript-driven single page app.

Later I wrote a post called What is a Progressive Web App? where I compared the definition to responsive web design.

Regardless of the specifics of the name, what I like about Progressive Web Apps is that they have a clear definition. It reminds me of Responsive Web Design. Whatever you think of that name, it comes with a clear list of requirements:

  1. A fluid layout,
  2. Fluid images, and
  3. Media queries.
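Boiled down to a toy example (nobody’s actual stylesheet, and the class name is made up), those three ingredients amount to something like this:

    /* 1. a fluid layout: proportions instead of pixels */
    .column {
      width: 50%;
    }
    /* 2. fluid images */
    img {
      max-width: 100%;
    }
    /* 3. media queries */
    @media all and (max-width: 30em) {
      .column {
        width: 100%;
      }
    }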

Likewise, Progressive Web Apps consist of:

  1. HTTPS,
  2. A service worker, and
  3. A Web App Manifest.

There’s more you can do in addition to that (just as there’s plenty more you can do on a responsive site), but the core definition is nice and clear.

But here’s the thing. Outside of the confines of my own website, it’s hard to find that definition anywhere.

On Google’s developer site, their definition uses adjectives like “reliable”, “fast”, and “engaging”. Those are all great adjectives, but more useful to a salesperson than a developer.

Over on the Mozilla Developer Network, their section on progressive web apps states:

Progressive web apps use modern web APIs along with traditional progressive enhancement strategy to create cross-platform web applications. These apps work everywhere and provide several features that give them the same user experience advantages as native apps. 

Hmm …I’m not so sure about that comparison to native apps (and I’m a little disturbed that the URL structure is /Apps/Progressive). So let’s click through to the introduction:

PWAs are web apps developed using a number of specific technologies and standard patterns to allow them to take advantage of both web and native app features.

Okay. Specific technologies. That’s good to hear. But instead of then listing those specific technologies, we’re given another list of adjectives (discoverable, installable, linkable, etc.). Again, like Google’s chosen adjectives, they’re very nice and desirable, but not exactly useful to someone who wants to get started making a progressive web app. That’s why I like to cut to the chase and say:

  • You need to be running on HTTPS,
  • Then you can add a service worker,
  • And don’t forget to add a web app manifest file.

If you do that, you’ve got a progressive web app. Now, to be fair, there’s a lot that I’m leaving out. Your site should be fast. Your site should be responsive (it is, after all, on the web). There’s not much point mucking about with service workers if you haven’t sorted out the basics first. But those three things—HTTPS + service worker + web app manifest—are specifically what distinguishes a progressive web app. You can—and should—have a reliable, fast, engaging website before turning it into a progressive web app.

Jason has been thinking about progressive web apps a lot lately (he should write a book or something), and he said to me:

I agree with you on the three things that comprise a PWA, but as far as I can tell, you’re the first to declare it as such.

I was quite surprised by that. I always assumed that I was repeating the three ingredients of a progressive web app, not defining them. But looking through all the docs out there, Jason might be right. It’s surprising because I assumed it was obvious why those three things comprise a progressive web app—it’s because they’re testable.

Lighthouse, PWA Builder, Sonarwhal and other tools that evaluate your site will measure its progressive web app score based on the three defining factors (HTTPS, service worker, web app manifest). Then there’s Android’s Add to Home Screen prompt. Here finally we get a concrete description of what your site needs to do to pass muster:

  • Includes a web app manifest…
  • Served over HTTPS (required for service workers)
  • Has registered a service worker with a fetch event handler

(Although, as of this month, Chrome will no longer show the prompt automatically—you also have to write some JavaScript to handle the beforeinstallprompt event.)
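The JavaScript itself is short enough. A sketch, assuming you provide an install button of your own somewhere in the page:

    // stash the event so the prompt can be triggered later
    let deferredPrompt;
    window.addEventListener('beforeinstallprompt', event => {
      event.preventDefault(); // suppress the automatic prompt
      deferredPrompt = event;
    });
    // later, when the user presses your own install button
    document.getElementById('installbutton')
    .addEventListener('click', () => {
      if (deferredPrompt) {
        deferredPrompt.prompt();
      }
    });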

Anyway, if you’re looking to turn your website into a progressive web app, here’s what you need to do (assuming it’s already performant and responsive):

  1. Switch over to HTTPS. Certbot can help you here.
  2. Add a web app manifest.
  3. Add a service worker to your site so that it responds even when there’s no network connection.
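Steps two and three start with surprisingly little code. A bare-bones manifest might look something like this (every value here is a placeholder):

    {
      "name": "Your Site Name",
      "short_name": "Site",
      "start_url": "/",
      "display": "minimal-ui",
      "background_color": "#ffffff",
      "theme_color": "#ffffff",
      "icons": [
        { "src": "/icon.png", "sizes": "512x512", "type": "image/png" }
      ]
    }

Link to it from the head of your pages:

    <link rel="manifest" href="/manifest.json">

Then register your service worker script (again, the file name is up to you):

    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/serviceworker.js');
    }

Registering the script is the easy part; it’s writing the logic inside it that takes some thought.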

That last step might sound like an intimidating prospect, but help is at hand: I wrote Going Offline for exactly this situation.