I’ve been immersing myself in musical activities recently.
Two weeks ago I was in the studio with Salter Cane. In three days, we managed to record eleven(!) songs! Not bad. We recorded everything live, treating the vocals as guide vocals. We’ve still got some overdubbing to do but we’re very happy with the productivity.
Being in a recording studio for days is intense. It’s an all-consuming activity that leaves you drained. And it’s not just the playing that’s exhausting—listening can be surprisingly hard work.
For those three days, I was pretty much offline.
Then the week after that, I was in Belfast all week for the trad festival. I’ve written up a report over on The Session. It was excellent! But again, it was all-consuming. Classes in the morning and sessions for the rest of the day.
I didn’t post anything here in my journal for those two weeks. I didn’t read through my RSS subscriptions. I was quite offline.
I say “quite” offline, because the week after next I’m going to be really offline.
I’m really looking forward to it. And I feel like the recent musical immersions have been like training for the main event in the tournament of being completely cut off from the internet.
The slides from Aaron’s workshop at today’s PWA Summit. I really like the idea of checking navigator.connection.downlink and navigator.connection.saveData inside a service worker to serve different or fewer assets!
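Roughly speaking, that kind of check might look like this (a rough sketch rather than anything from Aaron’s slides; the image URL and the downlink threshold are made up for illustration):

```javascript
// Rough sketch: inside a service worker, consult the Network Information API
// (where it's supported) before deciding which assets to serve.
addEventListener('fetch', event => {
  const connection = navigator.connection;
  const saveData = connection && connection.saveData;
  const slowLink = connection && connection.downlink && connection.downlink < 1; // Mbps; arbitrary threshold

  if (event.request.destination === 'image' && (saveData || slowLink)) {
    // Hypothetical low-resolution stand-in image.
    event.respondWith(fetch('/images/low-res.jpg'));
    return;
  }
  event.respondWith(fetch(event.request));
});
```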
It all started when I was checking out the very nice new redesign of WebPageTest. I figured while I was there, I’d run some of my sites through it. I passed in a URL from The Session. When the test finished, I noticed that the “screenshot” tab said that something was being logged to the console. That’s odd! And the file doing the logging was the service worker script.
I fired up Chrome (which isn’t my usual browser), and started navigating around The Session with dev tools open to see what appeared in the console. Sure enough, there was a failed fetch attempt being logged. The only time my service worker script logs anything is in the catch clause of fetching pages from the network. So Chrome was trying to fetch a web page, failing, and logging this error:
The service worker navigation preload request failed with a network error.
But all my pages were loading just fine. So where was the error coming from?
After a lot of spelunking and debugging, I think I’ve figured out what’s happening…
First of all, I’m making use of navigation preloads in my service worker. That’s all fine.
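Enabling them happens in the service worker’s activate handler with something like this (a minimal sketch):

```javascript
// Minimal sketch: turn on navigation preloads when the service worker activates,
// feature-detecting first so non-supporting browsers just skip it.
addEventListener('activate', event => {
  event.waitUntil((async () => {
    if (self.registration.navigationPreload) {
      await self.registration.navigationPreload.enable();
    }
  })());
});
```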
Secondly, the website is a progressive web app. It has a manifest file that specifies some metadata, including start_url. If someone adds the site to their home screen, this is the URL that will open.
So here’s what I think is happening. When I navigate to a page on the site in Chrome, the service worker handles the navigation just fine. It also parses the manifest file I’ve linked to and checks to see if that start URL would load if there were no network connection. And that’s when the error gets logged.
I only noticed this behaviour because I had specified a query string on my start URL in the manifest file. Instead of a start_url value of /, I’ve set a start_url value of /?homescreen. And when the error shows up in the console, the URL being fetched is /?homescreen.
Crucially, I’m not seeing a warning in the console saying “Site cannot be installed: Page does not work offline.” So I think this is all fine. If I were actually offline, there would indeed be an error logged to the console and that start_url request would respond with my custom offline page. It’s just a bit confusing that the error is being logged when I’m online.
I thought I’d share this just in case anyone else is logging errors to the console in the catch clause of fetches and is seeing an error even when everything appears to be working fine. I think there’s nothing to worry about.
Update: Jake confirmed my diagnosis and agreed that the error is a bit confusing. The good news is that it’s changing. In Chrome Canary the error message has already been updated to:
DOMException: The service worker navigation preload request failed due to a network error. This may have been an actual network error, or caused by the browser simulating offline to see if the page works offline: see https://w3c.github.io/manifest/#installability-signals
Oh boy, do I have some obscure browser behaviour for you!
To set the scene…
I’ve been writing here in my online journal for almost twenty years. The official anniversary will be on September 30th. But this website has been online even longer than that, just in a very different form.
Like a tour guide taking you around the ruins of some lost ancient civilisation, let me point out some interesting features:
Observe the .shtml file extension. That means it was once using Apache’s server-side includes, a simple way of repeating chunks of markup across pages. Scientists have been trying to reproduce the wisdom of the ancients using modern technology ever since.
See how the layout is 100vw and 100vh? Well, this was long before viewport units existed. In fact there is no CSS at all on that page. It’s one big table element with 100% width and 100% height.
So if there’s no CSS, where is the border-radius coming from? Let me introduce you to an old friend—the non-animated GIF. It’s got just enough transparency (though not proper alpha transparency) to fake rounded corners between two solid colours.
The management takes no responsibility for any trauma that might befall you if you view source. There you will uncover JavaScript from the dawn of time: ancient runic writing like if (navigator.appName == "Netscape")
Now if your constitution was able to withstand that, brace yourself for what happens when you click on either of the two links, deutsch or english.
You find yourself inside a frameset. You may also experience some disorienting “DHTML”—the marketing term given to any combination of JavaScript and positioning in the late ’90s.
Note that these are not iframes, they are frames. Different thing. You could create single page apps long before Ajax was a twinkle in Jesse James Garrett’s eye.
If you view source, you’ll see a React-like component system. Each frameset component contains frame components that are isolated from one another. They’re like web components. Each frame has its own (non-shadow) DOM. That’s because each frame is actually a separate web page. If you right-click on any of the frames, your browser should give the option to view the framed document in its own tab or window.
Now for the part where modern and ancient technologies collide…
If you’re looking at the frameset URL in Firefox or Safari, everything displays as it should in all its ancient glory. But if you’re looking in Google Chrome and you’ve visited adactio.com before, something very odd happens.
Each frame of the frameset displays my custom offline page. The only way that could be served up is through my service worker script. You can verify this by opening the frameset URL in an incognito window—everything works fine when no service worker has been registered.
I have no idea why this is happening. My service worker logic is saying “if there’s a request for a web page, try fetching it from the network, otherwise look in the cache, otherwise show an offline page.” But if those page requests are initiated by a frame element, it goes straight to showing the offline page.
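For reference, that logic is roughly this shape (a simplified sketch, not my actual service worker script; /offline is a stand-in for wherever the offline page lives):

```javascript
// Simplified sketch of the intended logic for page requests:
// network first, then cache, then a custom offline page.
addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith((async () => {
      try {
        return await fetch(event.request);
      } catch (error) {
        const cachedPage = await caches.match(event.request);
        return cachedPage || caches.match('/offline'); // placeholder URL for the offline page
      }
    })());
  }
});
```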
Is this a bug? Or perhaps this is the correct behaviour for some security reason? I have no idea.
I wonder if anyone has ever come across this before. It’s a very strange combination of factors:
a domain served over HTTPS,
that registers a service worker,
but also uses framesets and frames.
I could submit a bug report about this but I fear I would be laughed out of the bug tracker.
Still …the World Wide Web is remarkable for its backward compatibility. This behaviour is unusual because browser makers are at pains to support existing content and never break the web.
Technically a modern website (one that registers a service worker) shouldn’t be using deprecated technology like frames. But browsers still need to be able to support those old technologies in order to render old websites.
This situation has only arisen because the same domain—adactio.com—is host to a modern website and a really old one.
Maybe Chrome is behaving strangely because I’ve built my online home on ancient burial ground.
Update: Both Remy and Jake did some debugging and found the issue…
It’s all to do with navigation preloads and the value of event.preloadResponse, which I believe is only supported in Chrome, which would explain the differences between browsers.
event.preloadResponse is a promise that resolves with a response, if:
Navigation preload is enabled.
The request is a GET request.
The request is a navigation request (which browsers generate when they’re loading pages, including iframes).
Otherwise event.preloadResponse is still there, but it resolves with undefined.
Notice that iframes are mentioned, but not frames.
My code was assuming that if event.preloadResponse exists in my block of code for responding to page requests, then there’d be a response. But if the request was initiated from a frameset, it is a request for a page and event.preloadResponse does exist …but it’s undefined.
I’ve updated my code now to check this assumption (and fall back to fetch).
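In other words, something along these lines (a simplified sketch of the fix, not the exact code):

```javascript
// Simplified sketch: don't assume event.preloadResponse resolves with a response.
// It can resolve with undefined (for example, for requests coming from a frameset),
// in which case fall back to a regular fetch.
addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith((async () => {
      const preloadResponse = await event.preloadResponse;
      if (preloadResponse) {
        return preloadResponse;
      }
      return fetch(event.request);
    })());
  }
});
```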
This may technically still be a bug though. Shouldn’t a page loaded from a frameset count as a navigation request?
This is an intriguing promise (there’s no code yet):
A PWA typically requires writing a service worker, an app manifest and a ton of custom code. Progressier flattens the learning curve. Just add it to your html template — you’re done.
I worry that this one line of code will pull in many, many, many, many lines of JavaScript.
Chris Ferdinandi blogs every day about the power of vanilla JavaScript. For over a week now, his daily posts have been about service workers. The cumulative result is this excellent collection of resources.
A Chrome-only API for adding offline content to an index that can be exposed in Android’s “downloads” list. It just shipped in the latest version of Chrome.
I’m not a fan of browser-specific non-standards but you can treat this as an enhancement—implementing it doesn’t harm non-supporting browsers and you can use feature detection to test for it.
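Using it as an enhancement might look something like this (a rough sketch; the IDs, URLs, and metadata are made up):

```javascript
// Rough sketch: feature-detect the Content Index API before using it.
// Browsers without support simply skip over this enhancement.
async function addToContentIndex() {
  const registration = await navigator.serviceWorker.ready;
  if ('index' in registration) {
    await registration.index.add({
      id: 'article-123',      // made-up identifier
      url: '/articles/123/',  // made-up URL; it should already be available offline
      title: 'An example article',
      description: 'A piece of content saved for offline reading',
      icons: [{ src: '/icon.png', sizes: '192x192', type: 'image/png' }]
    });
  }
}
```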
How do we tell our visitors our sites work offline? How do we tell our visitors that they don’t need an app because it’s no more capable than the URL they’re on right now?
Remy expands on his call for ideas on branding websites that work offline with a universal symbol, along the lines of what we had with RSS.
What I’d personally like to see as an outcome: some simple iconography that I can use on my own site and other projects that can offer ambient badging to reassure my visitor that the URL they’re visiting will work offline.
The cloud gives us collaboration, but old-fashioned apps give us ownership. Can’t we have the best of both worlds?
We would like both the convenient cross-device access and real-time collaboration provided by cloud apps, and also the personal ownership of your own data embodied by “old-fashioned” software.
This is a very in-depth look at the mindset and the challenges involved in building truly local-first software—something that Tantek has also been thinking about.
Apple aren’t the best at developer relations. But, bad as their communications can be, I’m willing to cut them some slack. After all, they’re not used to talking with the developer community.
John Wilander wrote a blog post that starts with some excellent news: Full Third-Party Cookie Blocking and More. Safari is catching up to Firefox and disabling third-party cookies by default. Wonderful! I’ve had third-party cookies disabled for a few years now, and while something occasionally breaks, it’s honestly a pretty great experience all around. Denying companies the ability to track users across sites is A Good Thing.
In the same blog post, John said that client-side cookies will be capped to a seven-day lifespan, as previously announced. Just to be clear, this only applies to client-side cookies. If you’re setting a cookie on the server, using PHP or some other server-side language, it won’t be affected. So persistent logins are still doable.
Then, in an audacious example of burying the lede, towards the end of the blog post, John announces that a whole bunch of other client-side storage technologies will also be capped to seven days. Most of the technologies are APIs that, like cookies, can be used to store data: Indexed DB, Local Storage, and Session Storage (though there’s no mention of the Cache API). At the bottom of the list is this:
Service Worker registrations
Okay, let’s clear up a few things here (because they have been so poorly communicated in the blog post)…
The seven day timer refers to seven days of Safari usage, not seven calendar days (although, given how often most people use their phones, the two are probably interchangeable). So if someone returns to your site within a seven day period of using Safari, the timer resets to zero, and your service worker gets a stay of execution. Lucky you.
This only applies to Safari. So if your site has been added to the home screen and your web app manifest has a value for the “display” property like “standalone” or “fullscreen”, the seven day timer doesn’t apply.
That piece of information was missing from the initial blog post. Since the blog post was updated to include this clarification, some people have taken this to mean that progressive web apps aren’t affected by the upcoming change. Not true. Only progressive web apps that have been added to the home screen (and that have an appropriate “display” value) will be spared. That’s a vanishingly small percentage of progressive web apps, especially on iOS. To add a site to the home screen on iOS, you need to dig and scroll through the share menu to find the right option. And you need to do this unprompted. There is no ambient badging in Safari to indicate that a site is installable. Chrome’s install banner isn’t perfect, but it’s better than nothing.
Just a reminder: a progressive web app is a website that
runs on HTTPS,
has a service worker,
and a web manifest.
Adding to the home screen is something you can do with a progressive web app (or any other website). It is not what defines progressive web apps.
In any case, this move to delete service workers after seven days of using Safari is very odd, and I’m struggling to find the connection to the rest of the blog post, which is about technologies that can store data.
As I understand it, with the crackdown on setting third-party cookies, trackers are moving to first-party technologies. So whereas in the past, a tracking company could tell its customers “Add this script element to your pages”, now they have to say “Add this script element and this script file to your pages.” That JavaScript file can then store a unique identifier on the client. This could be done with a cookie, with Local Storage, or with Indexed DB, for example. But I’m struggling to understand how a service worker script could be used in this way. I’d really like to see some examples of this actually happening.
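Just to illustrate the mechanism, here’s a crude, hypothetical sketch of that kind of first-party storage (not any particular tracker’s code; the storage key is made up):

```javascript
// Crude, hypothetical sketch: a script served from the site's own domain
// keeps a unique identifier in Local Storage across visits.
let visitorId = localStorage.getItem('visitor-id'); // 'visitor-id' is a made-up key
if (!visitorId) {
  visitorId = Math.random().toString(36).slice(2); // crude identifier, for illustration only
  localStorage.setItem('visitor-id', visitorId);
}
```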
The best explanation I can come up with for this move by Apple is that it feels like the neatest solution. That’s neat as in tidy, not as in nifty. It is definitely not a nifty solution.
If some technologies set by a specific domain are being purged after seven days, then the tidy thing to do is purge all technologies from that domain. Service workers are getting included in that dragnet.
Now, to be fair, browsers and operating systems are free to clean up storage space as they see fit. Caches, Local Storage, Indexed DB—all of those are subject to eventually getting cleaned up.
So I was curious. Wanting to give Apple the benefit of the doubt, I set about trying to find out how long service worker registrations currently last before getting deleted. Maybe this announcement of a seven day time limit would turn out to be not such a big change from current behaviour. Maybe currently service workers last for 90 days, or 60, or just 30.
As far as I can tell, the answer is that, up until now, service worker registrations have had no time limit at all. So this is not a minor change. This is a crippling attack on service workers, a technology specifically designed to improve the user experience for return visits, whether it’s through improved performance or offline access.
I wouldn’t be so stunned had this announcement come with an accompanying feature that would allow Safari users to know when a website is a progressive web app that can be added to the home screen. But Safari continues to ignore the existence of progressive web apps. And now it will actively discourage people from using service workers.
If you’d like to give feedback on this ludicrous development, you can file a bug (down in the cellar in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying “Beware of the Leopard”).
No doubt there will still be plenty of Apple apologists telling us why it’s good that Safari has wished service workers into the cornfield. But make no mistake. This is a terrible move by Apple.
I will say this though: given The Situation we’re all living in right now, some good ol’ fashioned Hot Drama by a browser vendor behaving badly feels almost comforting.
Creating a PWA has saved a lot of kilobytes after the initial load by storing files on the device to reuse on subsequent requests – this in turn lowers the load time and carbon footprint on subsequent page views, making the website better for both people and planet. We’ve also enabled offline access, which significantly improves user experience for people in areas with patchy connections, such as mobile users on their commute.
Guten Morgen. All right. I’m just going to get started because I’ve got a lot to talk about and I’m very, very excited to be here.
I’m excited to talk about the web. I’ve been thinking a lot about the web. You know, I think a lot about the web all the time, but this year, in particular, thinking about where the web came from; asking myself where the web came from, which is kind of a dumb question because it’s pretty obvious where the web came from.
It came from this guy. This is Tim Berners-Lee and he is the creator of the World Wide Web. It was 30 years ago, March 1989, that he wrote a proposal while he was at CERN, a very dull-looking proposal called “Information Management: A Proposal” that had incomprehensible diagrams trying to explain what he had in mind. But a supervisor, Mike Sendall, saw the potential and scrawled across the top, “Vague but exciting.”
Tim Berners-Lee starts working on this idea he has for a global hypertext system and he starts creating the world’s first web browser and the world’s first web server, which is this NeXT machine which is in the Science Museum in London, a lovely machine, the NeXT box.
I have a great affection for it because, earlier this year, I was very honored to be invited to CERN, along with this bunch of hackers, to take part in a project related to the 30th anniversary of that proposal. I will show you a video that explains the project.
So, we came to CERN this week in order to create some sort of modern-day interpretation of the very first web browser.
—Kimberly Blessing
Well, the project is to restore the first browser which was developed by the inventor of the Web, and the idea is to create an experience for the people who could not use the web in its early days to have an idea how it felt to use the web at that time.
—Martin Akolo Chiteri
I think the biggest difficulty was to make the browser work in the NeXT machine that we had.
—Angela Ricci
We really needed to work with an original NeXT box in order to really understand what that experience was like in order to be able to write some code and replicate that experience.
—Kimberly Blessing
My role is code, so generating the code to create the interactive aspect of the World Wide web browser, recreated browser. It’s very much writing JavaScript to kind of create all the NeXT operating system UI, making requests to servers to go and get the HTML and massage the HTML back into a format that looks good in the World Wide Web browser; and making sure we end up with a URL that goes into production that someone can visit and see their own webpages. The tangible software is what I’m responsible for, so I have to make sure it all gets done. Otherwise, we have no browser to look at, basically.
—Remy Sharp
We got together a few years back to do a similar sort of hack project here at CERN which was creating the world’s second-ever web browser, which was the Line Mode browser. We had a lot of fun with it and it’s a great bunch of people from all over the world. It’s been really great to get back together and it’s always amazing to be here at CERN, to be at not just the birthplace of the Web, but the most important place on the planet for science.
Yeah, it’s been a lot of fun. I kind of don’t want it to be over because we are in our element, hacking away, having fun, and just soaking up the atmosphere, and we are getting to chat with people who were there 30 years ago, Jean-Francois Groff and Robert Cailliau, these people who were involved in the creation of the World Wide web. To me, that’s amazing to be surrounded by so much World Wide Web history.
The plan is that this will go online and anyone will be able to access it because it’s on the web, and that’s the beautiful thing about the web is that anyone can visit a website, and so everyone will have the opportunity to try using the world’s first web browser and see what modern webpages would look like if they were passed through this first web browser.
—Jeremy Keith
Well, spoiler alert. The project was a success and you can, indeed, look at your websites in a recreation of the first-ever web browser. This is the URL. It’s worldwideweb.cern.ch.
Success, that was good. But as you could probably tell from that video, Remy was the one basically making this all happen. He was the one writing the JavaScript to recreate this in a modern browser. This is the first-ever web page viewed in the first-ever web browser.
As you gathered, again, I was really fascinated by the history of the Web, like, where did it come from, and the people who were there at the time and getting to pick their brains. I spent most of my time working on the accompanying website to go with this project. I was creating this timeline.
Because this was to mark the 30th anniversary of this proposal, I thought, well, we could easily look at what has happened in the last 30 years: websites, web servers, formats, standards - all that stuff. But I thought it would be fascinating to look at the previous 30 years as well and try and figure out the things that were happening that influenced Tim Berners-Lee in terms of hypertext, networks, computing, and all this stuff.
But I’d kind of given myself this arbitrary cut-off point of 30 years to make this nice symmetry of it being the 30th anniversary of the World Wide Web. I could go further back. I could start asking, well, what happened before 30 years ago? What were the biggest influences on Tim Berners-Lee and the World Wide Web?
Now, if you were to ask Tim Berners-Lee himself who his biggest influencers were, he would give you a straight-up answer. He will say his biggest influencers were Conway Berners-Lee and Mary Lee Woods, his father and mother, which is fair enough. Normally, when you ask people who their influences are, they say, “Oh, my parents. They gave me a loving environment. They kindled my curiosity,” and all that stuff.
I’m sure that’s true but, in this case, it was also a big influence in a practical sense in that both Mary and Conway worked on the Ferranti Mark 1. That’s where they met. They were programmers. Tim Berners-Lee’s parents were programmers on the Ferranti Mark 1, a very early computer. This is in the 1950s in Britain.
Okay, this feels like a good origin story for the web, right? They were working on this early computer.
But it’s an early computer; it’s not the first computer. Maybe I need to go back further. How far back do I go to find the first computer?
Is this the first computer, the Antikythera mechanism? You can see this in a museum in Athens. This was recovered from a shipwreck. It was recovered at the start of the 20th Century, but it dates back thousands of years, a mechanism for predicting the position of stars and planets. It does calculations. It is a calculating device. Not a programmable computer as such, though.
If you’re thinking about the origins of the idea of a programmable computer, I think we could start to look at this gentleman, Charles Babbage. This is half of Charles Babbage’s brain, which is in the Science Museum in London along with that original NeXT box that the World Wide web was created on. The other half is in the Computing History Museum in California.
Charles Babbage lived in the 19th Century, and kind of got a lot of seed funding from the U.K. government to build a device, the Difference Engine, which would do calculations. Later on, he scrapped that and started working on the Analytical Engine which would be even better — a 2.0 version. It never got finished, by the way, but it was a really amazing idea because you could see the architecture of like a central processing unit, but it was still fundamentally a calculator, a calculating machine.
The breakthrough in terms of programming maybe came from Charles Babbage’s collaborator. This is Ada Lovelace. She was translating documents by an Italian mathematician about Difference Engines and calculations. She realized that—hang on—if we’re doing operations on numbers, what if those numbers could stand for other concepts, non-numerical like words or thoughts? Then we could do operations on things other than numbers, which is exactly what we do today in modern computing.
If you use a word processor, you’re not processing words; you’re operating on ones and zeros. If you use a graphics program, you’re not actually moving pixels around; you’re operating on ones and zeroes. This idea that the numbers, the ones and zeros, could stand in for anything kind of started with Ada Lovelace.
But, as I said, the Difference Engine and the Analytical Engine, they never got finished, and this was kind of a dead-end. It turns out, they weren’t an influence.
Later on, for example, this genius who was definitely responsible for the first working computers, Alan Turing, he wasn’t aware of the work of Babbage and Lovelace, which is a shame. He was kind of working in isolation.
He came up with the idea of the universal machine, the Turing Machine. Give it an infinitely long tape and enough state, enough time, you could calculate literally anything, which is pretty much what computers are.
He was working at Bletchley Park breaking the code for the Enigma machines, and that leads to the creation of what I think would be the first programmable computer. This is Colossus at Bletchley Park. This was created by a colleague of Turing, Tommy Flowers.
It is programmable. It’s using valves, but it’s absolutely programmable. It was top secret, so even for years after the war, this was not known about. In the history books, even to this day, you’ll often see ENIAC listed as the first programmable computer, but I think that honor goes to Tommy Flowers and Colossus.
By the way, Alan Turing, after the war, after 1945, he did go on to work and keep on working in the field of computing. In fact, he worked as a consultant at Ferranti. He was working on the Ferranti Mark 1, the same computer where Tim Berners-Lee’s parents met when they were programmers.
As I say, that was after the war ended in 1945. Now, we can’t say that the work at Bletchley Park was responsible for winning the war, but we could probably say that it’s certainly responsible for shortening the war. If it weren’t for the work done by the codebreakers at Bletchley Park, the war might not have finished in 1945.
1945 is the year that this gentleman wrote a piece that was certainly influential on Tim Berners-Lee. This is Vannevar Bush, a scientist, a thinker. In 1945, in the Atlantic Monthly, under the heading “A Scientist Looks at Tomorrow,” he publishes “As We May Think.”
In this piece, he describes an imaginary device. It’s a mechanical device inside a desk, and the operator is allowed to work on reams and reams of microfilm and to connect ideas together, make these associative trails. This is kind of like hypertext before the word hypertext has been coined. Vannevar Bush calls this device the Memex. That’s published in 1945.
Also, in 1945, this young man has been drafted into the U.S. Navy and he’s shipping out to the Pacific. His name is Douglas Engelbart. Literally as the ship is leaving the harbor to head to the Pacific, word comes through that the war is over.
Now, he still gets shipped out to the Pacific. He’s in the Philippines. But now, instead of fighting against the Japanese, he’s lounging around in a hut on stilts reading magazines and that’s where he reads “As We May Think,” by Vannevar Bush.
Fast-forward years later; he’s trying to decide what to do with his life other than settle down, get married, have a job, you know, that kind of thing. He thinks, “No, no, I want to make the world a better place.” He realizes that computers could be the way to do this if they could implement something very much like the Memex. Instead of a mechanical device, what if computers could create the Memex, this hypertext system? He devotes his life to this and effectively invents the field of Human Computer Interaction.
On December 9th, 1968, he demonstrates what he’s been working at. This is in San Francisco, and he demonstrates bitmap screens. He demonstrates real-time collaboration on documents, working hypertext …and also he invents the mouse for the demo.
We have a pointing device called a mouse, a standard keyboard, and a special key set we have here. And we are going to go for a picture down on our laboratory in Menlo Park and pipe it up. It’ll show you, from another point of view, more about how that mouse works.
Come in, Menlo Park. Okay, there’s Don Anders’ hand in Menlo Park. In a second, we’ll see the screen that he’s working and the way the tracking spot moves in conjunction with movements of that mouse. I don’t know why we call it a mouse sometimes. I apologize; it started that way and we never did change it.
—Douglas Engelbart
This was ground-breaking. The mother of all demos, it came to be known as. This was a big influence on Tim Berners-Lee.
At this point, we’ve entered the time cone of those 30 years before the proposal that Tim Berners-Lee made, which is good because this is the moment where I like to branch off from this timeline and sort of turn it around.
The question I’m sure nobody is asking—because you saw there was a video link-up there; Douglas Engelbart is in San Francisco, and he has a video link-up with Menlo Park to demonstrate real-time collaboration with computers—the question nobody is asking is, who is operating the video camera in Menlo Park?
Well, I’ll tell you the answer to that question that nobody is asking. The man operating the video camera in Menlo Park is this man. His name is Stuart Brand. Now, Stuart Brand has spent most of the ‘60s doing what you would do in the ‘60s; he was dropping acid. This was all kosher. This was before it was illegal.
He was on the Merry Pranksters bus with Ken Kesey and, on one particular acid trip, he literally saw the Earth curving away and realized that, yeah, we’re all on one planet, man! And he started a campaign with badges called, “Why haven’t we seen a photograph of the whole Earth yet?” I like the “yet” part in there, like it’s a conspiracy that we haven’t seen a photograph of the whole Earth.
He was kind of onto something here, realizing that seeing our planet as a whole planet from space could be a consciousness-changing thing much like LSD is a consciousness-changing thing. Sure enough, people did talk about the effect it had when we got photographs like Earthrise from Apollo 8, and he used those pictures when he published the Whole Earth Catalog, which was a series of books.
The Whole Earth Catalog was basically like Wikipedia before the internet. It was this big manual of how to do everything. The idea was, if you were running a commune, living in a commune, you needed to know about technology, and agriculture, and weather, and all the stuff, and you could find that in the Whole Earth Catalog.
He was quite an influential guy, Stuart Brand. You probably heard the Steve Jobs commencement speech where he quotes Stuart Brand, “Stay hungry, stay foolish,” all that stuff.
Stuart Brand also did a lot of writing. After Douglas Engelbart’s demo, he started to see that this computer thing was something else. He literally said computers are the new LSD, so he starts really investigating computing and computers.
He writes this great article in “Rolling Stone” magazine in 1972 about Spacewar, one of the first games you could play on a screen. But he has a wide range of interests. He kind of kicked off the environmental movement in some ways.
At one point, he writes a book about architecture. He writes a book called “How Buildings Learn.” There’s a television series that goes with it as well. This is a classic book (the definition of a classic book being a book that everyone has heard of and nobody has read).
In this book, he starts looking at the work of a British architect called Frank Duffy. Frank Duffy has this idea about architecture he calls shearing layers. The way that Frank Duffy puts it is that a building, properly conceived, consists of several layers of longevity, so kind of different rates of change.
He diagrams this out in terms of a building, and you see that you’ve got the site that the building is on that’s moving at a geological timescale, right? That should be around for thousands of years, we would hope.
Then you’ve got the actual structure that could stand for centuries.
Then you get into the infrastructure inside. You know, the plumbing and all that, you probably want to swap out every few decades.
Basically, until you get down to the stuff inside a room, the furniture that you can move around on a daily basis. You’ve got all these timescales moving from fast to slow as you move inwards into the house.
What I find fascinating about this idea of these different layers as well is the way that each layer depends on the layer below. Like, you can’t have the structure of a building without first having a site to put it on. You can’t move furniture around inside a room until you’ve made the room using the walls and the doors, right? This idea of shearing layers is kind of fascinating, and we’re going to get back to it.
Something else that Stuart Brand went on to do; he was one of the co-founders of the Long Now Foundation. Anybody here part of the Long Now Foundation? Any members of the Long Now Foundation?
Ah… It’s a great organization. It’s literally dedicated to long-term thinking. It was founded by Stuart Brand and Danny Hillis, the computer scientist, and Brian Eno, the musician and producer. Like I said, dedicated to long-term thinking. This is my membership card made out of a durable metal because it’s got to last for thousands of years.
If you go on the website of the Long Now Foundation, you’ll notice that the years are made up of five digits, so instead of 2019, it will be 02019. Well, you know, you’ve got to solve the Y10K problem. They’re dedicated to long-term thinking, to trying to think in the longer now.
One of the most famous projects is the clock of the Long Now. This is a clock that will tell time for 10,000 years. Brian Eno has done the chimes. They’re generative. It’ll never chime the same way twice. It chimes once a century. This is a scale model that’s in the Science Museum in London along with half of Charles Babbage’s brain and the original NeXT machine that Tim Berners-Lee created World Wide Web on.
This is just a scale model. The full-sized clock is going to be inside a mountain in west Texas. You’ll be able to visit it. It’ll be like a pilgrimage. Construction is underway. I hope to visit the clock one day.
It’s a really fascinating project when you think about it: how do you design something to last 10,000 years? How do you communicate over 10,000 years? It’s one of those tricky design problems, almost like the Voyager Golden Record or the Yucca Mountain waste disposal. How do you communicate to future generations? You can’t rely on language. You can’t rely on semiotics.
Anyway, he collected a lot of his thoughts into this book called “The Clock of the Long Now,” subtitled “Time and Responsibility: The Ideas Behind the World’s Slowest Computer.” He’s thinking about time. That’s when he comes back to shearing layers and these different layers of rates of change; different layers of time.
Stuart Brand abstracts the idea of shearing layers into something called “pace layers.” What if it’s not just architecture? What if any kind of system has these different rates of change, these layers?
He diagrams this out in terms of the human species, so think of humans. We have these different layers that we operate at.
At the lowest, slowest level, there is our nature, literally, like what makes us human in terms of our DNA. That doesn’t change for tens of thousands of years. Physiologically, there’s no difference between a caveman and an astronaut, right?
Then you’ve got culture, which accumulates over centuries, and the tribal identities we have around things like nations, language, and things like that.
Governance, models of governance, so not governments but governance, as in the way we choose to run things, whether that’s a feudal society, or a monarchy, or representative democracy, right? Those things do change, but not too fast, hopefully.
Infrastructure: you’ve got to keep up with the times, you know? This needs to move at a faster pace, again.
Commerce: much more fast-moving. Commerce needs to — you’re getting into the faster timescales there.
Then he puts fashion at the top. By fashion, he means anything that is supposed to be new and exciting, so that includes pop music, for example. The whole idea with fashion is that it’s there to try stuff out and discard it very quickly.
“What about this?” “No.” “What about that?” “Try this.” “No, try that.”
The good stuff, the stuff that kind of sticks to the wall, will maybe find its way down to the longer-lasting layers. Maybe a really good pop song from fashion ends up becoming part of culture, over time.
Here’s the way that Stuart Brand describes pace layers. He says:
Fast learns; slow remembers. Fast proposes; slow disposes. Fast is discontinuous; slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution, but slow and big controls small and fast by constraint and constancy.
He says:
Fast gets all of our attention but slow has all the power.
Pace layers is one of those ideas that, once you see it, you can’t unsee it. You know when you want to make someone’s life a misery, you just teach them about typography. Now they can’t unsee all the terrible kerning in the world. I can’t unsee pace layers. I see them wherever I look.
Does anyone remember this book, UX designers in the room, “The Elements of User Experience,” by Jesse James Garrett? It’s old now; we’re going back a ways. But, in it, he’s got this diagram about the different layers of a user experience. You’ve got the strategy at the bottom that finally ends up with an interface at the top.
I look at this, and I go, “Oh, right. It’s pace layers. It’s literally pace layers.” Each layer depending on the layer below, the slower layers at the bottom, the faster-moving things at the top.
With this mindset that pace layers are everywhere, I thought, “Can I map out the web in terms of pace layers, the technology stack of the web?” I’m going to give it a go.
At the lowest layer of the stack, the slowest moving, I would say there’s the internet itself, as in TCP/IP, the transmission control protocol and internet protocol created by Bob Kahn and Vint Cerf in the ‘70s and pretty much unchanged since then, deliberately dumb, deliberately simple. All it does is move packets around. Pretty much unchanged.
On top of that, you get the other protocols that use TCP/IP, like in the case of the web, the hypertext transfer protocol. Now, this has changed over time. We now have HTTP/2. But it hasn’t been rapid change. It’s been gradual. Again, that kind of feels right. It feels good that HTTP isn’t constantly changing underneath us too much.
Then what we serve up over HTTP are URLs. I wish that URLs were down here. I wish that URLs were everlasting, never changing. But, unfortunately, I must acknowledge that that’s not true. Links die. We have to really work hard to keep them alive. I think we should work hard to keep them alive.
What do you put at those URLs? At the simplest level, it’s supposed to be plain text. But this is the web, so let’s say structured text. This is going to be HTML, the hypertext markup language, which Tim Berners-Lee came up with when he created the World Wide Web. I say, “Came up with.” He basically stole it from SGML that scientists at CERN were already using and sprinkled in one or two new tags, as they were calling it back then.
There were maybe like 20-something tags in HTML when Tim Berners-Lee created the web. Now we’ve got over 100 elements, as we call them. But I feel like I’ve been able to keep up with the pace of change. I mean, the vast big kind of growth spurt with HTML was probably HTML5. That’s been a while back now. It’s definitely change that I can keep on top of.
Then we have CSS, the presentation layer. That feels like it’s been moving at a nice clip lately. I feel like we’ve been getting a lot of cool stuff in CSS, like Flexbox and Grid, and all this new stuff that browsers are shipping. Still, I feel like, yeah, yeah, this is good. It’s right that we get lots of CSS pretty rapidly. It’s not completely overwhelming.
Then there’s the JavaScript ecosystem. I specifically say the “JavaScript ecosystem” as opposed to the “JavaScript language” because the JavaScript language is being developed at a nice pace. I feel like it’s going at a good speed of standardization. But the ecosystem, the frameworks, the libraries, the build tools, all of that stuff, that feels like, “You know what? Try this. No, try that. What about this? What about that? Oh, you’re still using that framework? No, no, we stopped using that last week. Oh, you’re still using that build tool? No, no, no, that’s so … we’ve moved on.”
I find this very overwhelming. Can I get a show of hands of anybody else who feels overwhelmed by this rate of change? All right. Keep your hands up. Keep your hands up and just look around. I want you to see you are not alone. You are not alone.
But I tell you what; after mapping these layers out into the pace layer diagram, I realized, wait a minute. The JavaScript layer, the fashion layer, if you will, it’s supposed to be like that. It’s supposed to be trying stuff out. Throw this at the wall. No, throw that at the wall. How about this? How about that?
It’s true that the good stuff does stick. Like if I think back to the first uses of JavaScript—okay, I’m showing my age, but—when JavaScript first came along, we’d use it for things like image rollovers or form validation, right? These days, if I wanted to do an image rollover—you mouseover something and it changes its appearance—I wouldn’t use JavaScript. I’d use CSS because we’ve got :hover.
If I was doing a form validation, like, “Oh, has that field actually been filled in?” because it’s required and, “Does that field actually look like an email address?” because it’s supposed to be an email address, I wouldn’t even use JavaScript. I would use HTML; input type="email" required. Again, the good stuff moves down into the sort of slower layers. Fast learns; slow remembers.
The other thing I realized when I diagrammed this out was that, “Huh, this kind of maps to how I approach building on the web.” I pretty much take this for granted that it’s going to be on the Internet. There’s not much I can do about that. Then I start thinking about URLs like URL-first design, the information architecture of a site. I think it’s underrated. I think people should create a URL-first design. URL design, in general, I think it’s a really good place to start if you’re building a product or a service.
Then, think about your content in terms of structure. What is the most important thing on this page? That should be an H1. Is this a paragraph? Is it a list? Thinking about the structure first and then going on to think about the appearance, which is definitely the way you want to go if you’re making something responsive. Think about the structure first and then the appearance and all these different form factors.
Then finally, add in behavior with JavaScript. Whatever HTML and CSS can’t do, that’s what I will use JavaScript for to kind of enhance it from there.
This maps really nicely to how I personally approach building things on the web. But, it is a testament to the flexibility of the World Wide Web that, if you don’t want to build in this way, you don’t have to.
If you want, you could build like this. JavaScript is a really powerful language. If you wanted to do navigations and routing in JavaScript, you can. If you want to inject all your content into the page using JavaScript, you can. CSS in JS? You can. Right? I mean, this is pretty much the architecture of a single-page app. It’s on the internet and everything is in JavaScript. The internet is a delivery mechanism for a chunk of JavaScript that does everything: the markup, the CSS, the routing.
This isn’t how I approach building on the web. I kind of ask myself why this doesn’t feel quite right to me. I think it’s because of the way it turns everything into a single point of failure, which is the JavaScript, rather than spreading out those points of failure. We’re on the internet and, as long as the JavaScript runs okay, the user gets everything. It turns what you’re building into a binary proposition: either it doesn’t work at all or it works great. Those are your only two options.
Now I’ll point out that, in another medium, this would make complete sense. Like if you’re building a native app. If you build an iOS app and I’ve got an iOS device, I get 100% of what you’ve designed and built. But if you’re building an iOS app and I have an Android device, I get zero percent of what you’ve designed and built because you can’t install an iOS app on an Android device. Either it works great or it doesn’t work at all; 100% or zero percent.
The web doesn’t have to be like that. If you build in that layered way on the web, then maybe I don’t get 100% of what you’ve designed and built but I don’t get zero percent, either. I’ll get something somewhere along the way, hopefully closer to working great. Instead of a binary choice between not working at all and working great, there’s a spectrum: it just about works, it works fine, it works well, it works great.
You’re building up these layers of experience, the idea being that nobody gets left behind. Everybody gets something regardless of their device, their network, their browser. Everyone is not going to get the same experience, but everybody gets something. That feels very true to the original sort of founding ideas of the web and it maps so nicely to our technical stack on the web, the fact that you can start to think about things like URLs-first and think about the structure, then the presentation, and then the behavior.
I’m not the only one who likes thinking in this layered kind of way when it comes to the web. I’m going to quote my friend Ethan Marcotte. He says:
I like designing in layers. I love looking at the design of a page, a pattern, whatever, and thinking about how it will change if, say, fonts aren’t available, or JavaScript doesn’t work, or someone doesn’t see the design as you or I might and is having the page read aloud to them.
That’s a really good point that when you build in the layered way, you’re building in the resilience that something can fall back to a layer a little further down.
This brings up something I’ve mentioned here before at Beyond Tellerrand, which is that, when we’re evaluating technologies, the question we tend to ask is, how well does it work? That’s an absolutely valid question. You’re about to try a new tool, a new framework, a new standard. You ask yourself; how well does it work?
I think the more important question to ask is:
How well does it fail?
What happens if that piece of technology fails? That’s why I like this layered approach because this fails really well. JavaScript’s no longer a single point of failure. Neither is CSS, frankly. If the CSS never loaded, the user still gets something.
Now, this brings up an idea, a principle that definitely influenced Tim Berners-Lee. It was at the heart of his design principles for the World Wide Web. It’s called the Principle of Least Power that states, “Choose the least powerful language suitable for a given purpose,” which sounds really counterintuitive. Why would I choose the least powerful language to do something?
It’s kind of down to the fact that there’s a trade-off. With power, you get a fragility, right? Maybe something that is really powerful isn’t as universal as something simpler. It makes sense to figure out the simplest technology you can use to achieve a task.
I’ll give an example from my friend Derek Featherstone. He says:
In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.
Again, he’s talking about the resilience you get by building in a layered way and choosing the least powerful technology.
A classic example is ARIA. The first rule of ARIA is: don’t use ARIA if you don’t have to. Rather than using a div and then adding the event handlers and the ARIA roles to make it look like a button, just use a button. Use the simpler technology lower in the stack.
Now, I get pushback on this because people will tell me, “Well, that’s fine if you’re building something simple, but I’m not building something simple. I’m building something complex.” Everyone likes to think they’re building something complex, right? Everyone is convinced they’re working on really hard things, which makes sense. That’s human nature.
If you’re at a cocktail party and someone says, “What do you do?” and you describe your work and they say, “Oh, okay. That sounds really easy,” you’d be offended, right?
If you’re at a party and someone says, “What do you do?” you describe your work and they go, “Wow, God, that sounds hard,” you’d be like, “Yeah! Yeah, it is hard. What I do is hard.”
I think we gravitate to this, especially when someone markets it as, “This is a serious tool for serious, complex sites.” I’m like, “That’s me. I’m working on a serious, complex site.”
I don’t think the reality is quite like that. Reality is just messier. There’s nothing quite that simple. Very few things are really that complex.
Everything kind of exists on this continuum somewhere along the way. Even the simplest website has some form of interaction, something appy about it.
Website and web app are the other two terms people use when talking about simple and complex, as if you could divide the entirety of the World Wide Web into two categories: websites and web apps.
Again, that just doesn’t make sense to me. I think the truth is, things are messier and schmooshier between this continuum of websites and web apps. I don’t get why we even need the separate word. It’s all web stuff.
Though, there is this newer term, “Progressive Web App,” that I kind of like.
Who has heard of Progressive Web Apps? All right.
Who thinks they have a good handle on what a Progressive Web App is?
Okay. See, that’s a lot fewer hands, which is totally understandable because, if you start googling, “What is a Progressive Web App?” you get these Zen-like articles. “It’s a state of mind.” “It’s about rich, native-like interactions, man.”
No. No, it’s not. Worse still, you read, “Oh, a Progressive Web App is a Single Page App.” I was like, no, you’ve lost me there. No, it’s not. Or at least it can be, but any website can be a Progressive Web App. You can elevate a website to be a Progressive Web App.
I don’t mean in some sort of Zen-like fashion. I mean using technologies, three particular technologies.
You make sure that website is running HTTPS,
you have a web app manifest that’s a JSON file with metadata, and
you have a service worker that gets installed on the user’s device.
That’s it. These three technologies turn a website into a Progressive Web App — no mystery about it.
The tricky bit is that service worker part. It’s kind of a weird thing because it’s JavaScript but it’s JavaScript that gets installed on the user’s device and acts like a proxy. It intercepts network requests and can do different things like grab things from the cache instead of going to the network.
I’m not going to go into how it works because I’ve written plenty about that in this book “Going Offline,” so if you want to know the code, you can go read the book.
I will say that when I first came across service workers, it totally did my head in because this is my mental model of the web. We’ve got the stack of technologies that we’re building on top of, each layer depending on the layer below. Then service workers come along and say, “Well, actually, you could have a website like this,” where the lowest layer, the network, the Internet goes away and the website still works. Mind is blown!
It took me a while to get my head around that. The service worker file is on the user’s device and, if they’ve got no internet connection, it can still make decisions and serve up something like a custom offline page.
Here’s a website I run called huffduffer.com. It’s for making your own podcast out of found sounds. If you’re offline or the website is down, which happens, and you visit Huffduffer, you get this offline page saying, “Sorry, you’re offline.” Not very useful, but it’s branded like the site, okay? It’s almost like the way you have a custom 404 page. Now you can have a custom offline page that matches your site. It’s a small thing, but it can be handy.
We ran this conference in Brighton for two years, Ampersand. It’s a web typography conference. That also has a very simple offline page that just says, “Sorry. You’re offline,” but then it has the bare minimum information you need about the conference like where is this conference happening; what time does it start?
You can imagine a restaurant website having this, an offline page that tells you, “Here is the address. Here are the opening hours.” I would like it if restaurant websites had that information when you’re online as well, but…
You can also have fun with this, like Trivago. Their site relies on search, so there’s not much you can do when you’re offline, so they give you a game to play, the offline maze to keep you entertained.
That’s kind of at the simplest level of what you could do, a custom offline page.
Then at the other level, I’ve written this book called “Resilient Web Design.” A lot of the ideas I’m talking about here are in this book. The book is a website. You go to the website and you read the book. That’s it. It’s free. You just go to resilientwebdesign.com. I mean free. I don’t ask for your email address and I’m not tracking any information at all.
This is how it looks when you’re online, and then this is how it looks when you’re offline. It is exactly the same. In fact, the moment you visit the website, it basically downloads the whole book.
Now, that’s the extreme example. Most websites, you wouldn’t want to do that because you kind of want the HTML to be fresh. This is never going to get updated. I’m done with this so I’m totally fine with, you go straight to the cache; never even go to the network. It’s absolutely offline first. You’re probably going to want something in between those two extremes.
On my own website, adactio.com, if you’re browsing around the website and you’re reading things, that’s all fine. Then what if you lose your internet connection? You get the custom offline page that says, “Sorry, you’re offline,” but then it also shows you the things you’ve previously visited.
You can revisit any of these pages. These have all been cached, so you can cache things as people are browsing around the site. That’s a nice little pattern that a lot of websites could benefit from. It only suffers from the fact that all I can show you is stuff you’ve already seen. You have to have already visited these pages for them to show up in this list.
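The gist of that pattern is just a few lines in the fetch handler (a simplified sketch; the cache name and the offline page URL are placeholders, and error handling is left out):

```javascript
// Simplified sketch of caching pages as people browse:
// when a page loads successfully, keep a copy in a cache so that
// the offline page can offer it again later.
// (Error handling and the offline fallback are omitted for brevity.)
addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith((async () => {
      const response = await fetch(event.request);
      const cache = await caches.open('pages'); // arbitrary cache name
      cache.put(event.request, response.clone());
      return response;
    })());
  }
});
```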
Another pattern that I think is maybe better from a user experience point of view is when you put the control in the hands of the user.
This website, archive.dconstruct.org, this is what it sounds like. It’s an archive. It’s conference talks.
We ran a conference called dConstruct for 10 years from 2005 to 2015. Breaking news; we’re bringing it back for a one-off event next year, September 2020.
Anyway, all the talks from ten years are online here as audio files. You can browse around and listen to these talks.
You’ll also see that there’s this option to save for offline, exposed on the interface. What that does is it doesn’t just save the page offline; it also saves the audio offline. Then, when you’re on an airplane or at the bottom of the ocean or whatever, you can then listen to the things you explicitly asked to be saved offline. It’s effectively a podcast player in the browser.
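The essence of that save-for-offline button is something like this (a rough sketch with made-up selectors and URLs, not the actual archive code):

```javascript
// Rough sketch of a "save for offline" control: on request, cache both
// the talk's page and its audio file for later, offline listening.
const saveButton = document.querySelector('.save-offline'); // hypothetical button
saveButton.addEventListener('click', async () => {
  const cache = await caches.open('saved-talks'); // arbitrary cache name
  await cache.addAll([
    window.location.href,      // this talk's page
    '/audio/example-talk.mp3'  // made-up URL for the talk's audio file
  ]);
  saveButton.textContent = 'Saved for offline';
});
```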
You see there’s a lot of things you can do. There are kind of a lot of layers you can build upon once you have a service worker.
At the very least, you can do caching because that’s the stuff we do anyway, like put this file in the cache, your CSS, your JavaScript, your icons, whatever.
Then think about, well, maybe I should have a custom offline page, even if it’s just for the branding reasons of having that nice page, just like we have a custom 404 page.
Then you start thinking, well, I want the adding to home screen experience to be good, so you’ve got the web app manifest.
You implement one of those patterns there allowing the users to save things offline, maybe.
Also, push notifications are now possible thanks to service workers. It used to be, if you wanted to make someone’s life a misery, you had to build a native app to give them push notifications all day long. Now you can make someone’s life a misery on the web too.
There’s even more advanced APIs like background sync where the website can talk to the web server even when that website isn’t open in the browser and sync up information — super powerful stuff.
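Registering a background sync looks something like this (a minimal sketch; the tag name and the queued-data function are hypothetical):

```javascript
// Minimal sketch of background sync.

// In the page: ask for a sync, feature-detecting first.
async function requestSync() {
  const registration = await navigator.serviceWorker.ready;
  if ('sync' in registration) {
    await registration.sync.register('send-outbox'); // arbitrary tag name
  }
}

// In the service worker: the browser fires this when there's connectivity.
addEventListener('sync', event => {
  if (event.tag === 'send-outbox') {
    event.waitUntil(sendQueuedData()); // hypothetical function that syncs with the server
  }
});
```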
Now, the support for something like service worker and the cache API is almost universal at this point. The support for stuff like background sync and push notifications is spottier, not universal, and that’s okay because, as long as you’re adding these things in layers, it’s fine if something doesn’t have universal support, right? It’s making something work great but, if someone doesn’t get that, it still works good. It’s all about building in that layered way.
Now you’re probably thinking, “Ah-ha! I’ve hoisted him by his own petard because service workers use JavaScript. That means they rely on JavaScript. You’ve made JavaScript a single point of failure!” Exactly what I was complaining about with Single Page Apps, right?
There’s a difference. With a Single Page App, you’re relying on JavaScript. The user gets absolutely nothing if JavaScript doesn’t work.
In the case of service workers, you literally cannot make a website that relies on a service worker. You have to make a website that works first without a service worker and then add the service worker on top, because, think about it; the first time anybody visits the website, even if their browser supports service workers, the service worker is not installed. So you have to build in layers.
I think this is why it appeals to me so much. The design of service workers is a layered design. You have to have something that works first, and then you elevate it. You improve the user experience using these technologies but you don’t rely on it. It’s not a single point of failure. It’s an enhancement.
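That layering starts with the registration itself (a minimal sketch; the script path is a placeholder):

```javascript
// Minimal sketch: register the service worker as an enhancement.
// Browsers without support hit the feature detection and simply carry on.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js'); // placeholder path
}
```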
That means you can take any website. Somebody’s homepage; a book that’s online; this archived stuff; something that is more appy, sure, and make it work pretty much like a native app. It can appear full screen, add to the home screen, be indistinguishable from native apps so that the latest and greatest browsers and devices get the best experience. They’re making full use of the newest technologies.
But, as well as these things working in the latest and greatest browsers, they still work in the first web browser ever created. You can still look at these things in that very first web browser that Tim Berners-Lee created at worldwideweb.cern.ch.
It’s like it is an unbroken line over 30 years on the web. We’re not talking about the Long Now when we’re talking about 30 years but, in terms of technology, that does feel special.
You can also look at the world’s first webpage in the first-ever web browser but, almost more amazingly, you can look at the world’s first web page at its original URL in a modern web browser and it still works.
We managed to make the web so much better with new APIs, new technologies, without breaking it, without breaking that backward compatibility. There’s something special about that. There’s something special about the web if you build in layers.
I’m encouraging you to think in terms of layers and use the layers of the web.
I was quite nervous about this talk. It’s very different from my usual fare. Usually I have some big sweeping arc of history, and lots of pretentious ideas joined together into some kind of narrative arc. But this talk needed to be more straightforward and practical. I wasn’t sure how well I would manage that brief.
I’m happy with how it turned out. I had quite a few people come up to me to say how much they appreciated how I was explaining the code. That was very nice to hear—I really wanted this talk to be approachable for everyone, even though it included plenty of JavaScript.
The dates for next year’s Events Apart have been announced, and I’ll be speaking at three of them:
The question is, do I attempt to deliver another practical code-based talk or do I go back to giving a high-level talk about ideas and principles? Or, if I really want to challenge myself, can I combine the two into one talk without making a Frankenstein’s monster?
Come and see me at An Event Apart in 2020 to find out.