Tags: btconf



Tuesday, September 19th, 2017

Evaluating Technology

A presentation from the Beyond Tellerrand conference held in Düsseldorf in May 2017. I also presented a version of this talk at An Event Apart, Smashing Conference, Render, Frontend United, and From The Front.

I’m going to show you some code. Who wants to see some code?

All right, I’ll show you some code. This is code. This is a picture of code.

Photograph 51

The code base in this case is the deoxyribonucleic acid. This is literally a photograph of code. It’s the famous Photograph 51, which was taken by Rosalind Franklin, the X-ray crystallographer. And it was thanks to her work that we were able to decode the very structure of DNA.

Rosalind Franklin

DNA is base-4, unlike the binary we work with in computers. Its four characters are A, C, G, and T: Adenine, Cytosine, Guanine, Thymine. From those four simple ingredients we get DNA, and from DNA we get every single life form on our planet: mammals, birds, fish, plants. Everything is made of DNA. This huge variety from such simple building blocks.

Apollo 11 Mission Image - Earth view over Central and North America

What’s interesting, though, is if you look at this massive variety of life on our planet, you start to see some trends over time as life evolves through the process of natural selection. You see a trend towards specialisation, a species becoming really, really good at something as the environment selects for fitness. A trend towards ubiquity as life attempts to spread as far as possible. And, interestingly, a trend towards cooperation, that a group could be more powerful than an individual.

Now we’re no different to any other life form, and this is how we have evolved over time from simpler beginnings. I mean, we like to think of ourselves as being a more highly evolved species than other species, but the truth is that every species of life on this planet is the most highly evolved species of life on this planet because they’re still here. Every species is fit for its environment. Otherwise they wouldn’t be here.

This is the process, this long, slow process of natural selection. It’s messy. It takes a long time. It relies on errors in the code to get selected for. This is the process that we human beings have gone through, same as every other species on the planet.

But then we figured out a way to hack the process. We figured out a way to get a jumpstart on evolution, and that’s through technology. Through technology we can bypass the process of natural selection and augment ourselves, extend our capabilities like this.

Acheulean hand ax

This is a very early example of technology. It existed for millions of years in this form, ubiquitous, across the planet. This is the Acheulean hand ax. We didn’t need to evolve a sharp cutting tool at the end of our limb because, through technology, we were able to create a sharp cutting tool at the end of our limb. Then through that we were able to extend our capabilities and shape our environment.

We shape our tools and, thereafter, the tools shape us.

And we have other tools. This is a modern tool, the pencil. I’m sure you’re all familiar with it. You use it all the time. I think it’s a great piece of technology, with great affordances: a built-in progress bar, and it’s got undo at the end.

I, Pencil

What’s interesting is if you look at the evolution of technology and you compare it to the evolution of biology, you start to see some of the same trends; trends towards specialisation, ubiquity, and cooperation.

The pencil does one thing really, really well. The Acheulean hand ax does one thing really, really well.

All over the world you’d find Acheulean hand axes, and all over the world you’ll find the pencil in pretty much the same form.

And, most importantly of all, cooperation. No human being can make a pencil. Not by themselves. It requires cooperation.

There’s a famous essay by Leonard Read called I, Pencil, told from the point of view of a pencil, describing how making one requires cooperation. It requires human beings to come together: to fell the trees for the wood, to mine the graphite, to put it all together. No single human being can do that by themselves. We have to cooperate to create technology.

You can try to create technology by yourself, but you’re probably going to have a hard time. Like Thomas Thwaites, he’s an artist in the U.K. You might have seen his most recent project. He tried to live as a goat for a year.

The toaster project

This is from a while back where he attempted to make a toaster from scratch. When I say from scratch, I mean from scratch. He wanted to mine his own metals. He wanted to smelt the steel. He wanted to create the plastic, wire it all up, and do it all by himself. It was a very interesting process. It didn’t really work out. I mean it worked for like a second or two when he plugged it in and then completely burned out, and it was prohibitively expensive.

When it comes to technology, cooperation is built in, along with those other trends: specialisation, ubiquity.

When we compare these trends in biology and technology and see the overlap, it’s easy to fall into the trap of thinking they’re basically the same process. But they’re not. Under the hood, the processes are very different.

In biology it’s natural selection, this long, messy, slow process. But, kind of like DNA, very simple building blocks result in amazing complexity. With technology it’s the other way around. Nature doesn’t imagine the end result of a species and then work towards it. Nature doesn’t imagine an elephant or an ostrich; they’re just the end results of evolution. Whereas with technology, we can imagine things, design things, and then build them: picture something in our mind that we want to exist in the world and then work together to build it.

Now, one of my favourite examples of imagining technology and then creating it is a school of design from Japan called Chindogu, created by Kenji Kawakami, who started the International Chindogu Society in 1995. There are goals and principles behind Chindogu, and the main one is that these pieces of technology must be not exactly useful, but somehow not altogether useless.

Noodle cooler

I’ll show you what I mean and you get the idea. You look at these things and you think, uh, that’s crazy. But actually, is it crazy or is it brilliant? Like this, I think, well, that’s ridiculous. Well— actually, not entirely useless, not exactly useful, but, you know, keeping your shoes dry in the rain. That seems sort of useful.

Butter stick Shoe umbrellas

They’re described as being un-useless. These are un-useless objects. But why not? I mean why not harvest the kinetic energy of your child to clean the floors? If you don’t have a child, that’s fine. It works other ways.

Toddler mop Cat mop

These things, I mean they’re fun to imagine and to create, but you couldn’t imagine them actually in the world being used. You couldn’t imagine mass adoption. Like, I found this thing from the book of Chindogu from 1995, and it describes this device where you kind of put a camera on the end of a stick so you can take self portraits, but you couldn’t really imagine anyone actually using something like this out in the world, right?

Selfie stick

These are all examples of what we see in the history of technology. From Acheulean hand axes to pencils to Chindogu, these are bits of hardware. When we think of technology, that’s what we tend to think of: bits of hardware. And the hardware is augmenting the human; the human is using the hardware to gain some benefit.

Something interesting happened in the 20th Century when we started to get another layer in between the human and the hardware, and that’s software. Then the human can interact with the software, and the software can interact with the hardware. I would say the best example of this, looking back through the history of technology of the last 100 years or so, would be the Apollo Program, the perfect mixture of human, software, and hardware.

Apollo 11 Mission Image - View of Moon limb and Lunar Module during ascent, Mare Smythii, Earth on horizon

By the way, seeing as we were just talking about selfies and selfie sticks, I just want to point out that this picture is one of the very few examples of an everyone-elsie. This picture was taken by Michael Collins in the Command Module, and Neil Armstrong and Buzz Aldrin are in that spaceship, and every human being alive on planet earth is also in this picture with one exception, Michael Collins, the person taking the picture. It’s an everyone-elsie.

I think the Apollo Program is the pinnacle of human achievement so far: the perfect example of this mixture of amazing humans required to do it, amazing hardware to get them there, and amazing software. It’s hard to imagine how it would have been possible to send people to the moon without the work of Margaret Hamilton, writing the onboard flight software and also creating entire schools of thought in software engineering.

Margaret Hamilton

Since then, looking at the trend of technology from that point onwards, what you start to notice is that the hardware becomes less and less important, and the software is what really starts to count. With Moore’s law and everything like that, we can put more and more complexity into the software. Maybe the end goal of technology is that the hardware eventually becomes completely irrelevant and fades away: this idea of design dissolving in behaviour.


This idea of the hardware becoming irrelevant was, in a way, at the heart of the World Wide Web project created by Tim Berners-Lee when he was at CERN. CERN is an amazing place, but everybody just kind of does whatever they want. It’s crazy. There’s almost no hierarchy, which means everybody uses whatever kind of computer they want. You can’t dictate to people at CERN that they must all use the same operating system. That was at the heart of the World Wide Web project: the idea of making the hardware irrelevant. It shouldn’t matter what kind of computer you’ve got; you should still be able to access information.

Tim Berners-Lee

We kind of take that for granted today, but it was quite a revolutionary thought. We don’t worry about it now. You make a website and, of course, you can look at it on a Windows device or a Mac or a Linux machine, an iOS device or an Android device. Of course. But it wasn’t clear at the time. Back then you would make software for specific operating systems, so this idea of making the hardware irrelevant was kind of revolutionary.

The World Wide Web project is a classic example of a piece of technology that didn’t come out of nowhere. Like every other piece of technology, it built on what was already there. You can’t have Twitter or Facebook without the World Wide Web, and you can’t have the World Wide Web without the Internet. You can’t have the Internet without computers. You can’t have computers without electricity. You can’t have electricity without the Industrial Revolution. It’s standing on the shoulders of giants all the way up.

There’s also the idea of the adjacent possible: the point at which these things become possible. You couldn’t have had the World Wide Web right after the Industrial Revolution because the other steps hadn’t yet taken place. It’s something the author Steven Johnson talks about: the adjacent possible. It was impossible to invent the microwave oven in 16th-century Holland because too many other things needed to be invented along the way.

It’s easy to see this as an inevitable process: of course electricity follows industrialisation; of course computers come, and of course the Internet comes. And there is a certain amount of inevitability. This happens all the time in the history of technology: there are simultaneous inventions, and people beat one another to the patent office by hours, whether it’s radio or the telephone or any of these other devices. It seems inevitable.

I don’t think the specifics are inevitable. Something like the World Wide Web was inevitable, but the World Wide Web we got was not. Something like the Internet was inevitable, but not the Internet that we got.

The World Wide Web project itself has these building blocks: HTTP, the protocol; URLs, the identifiers; and HTML, a simple format. Again, these formats are built upon what came before. Because it turns out that making the technology (creating a format or a protocol or a spec for identifying things) is, not to belittle the work, actually not the hard part. The hard part is convincing people to use the protocol, convincing people to use the format.

Grace Hopper

That’s where you butt up against humans. How do you convince humans? Which always reminds me of Grace Hopper, an amazing computer scientist: Rear Admiral Grace Hopper, co-inventor of COBOL and inventor of the compiler, without which we wouldn’t have computing as we know it today. She bumped up against this all the time; people were reluctant to try new things. She had a phrase for it: “Humans are allergic to change.” Now, she used to try and fight that. In fact, she had a clock on her wall that ran backwards, simply to demonstrate that it’s an arbitrary convention and you could change the convention.

She said the most dangerous phrase in the English language is, “We’ve always done it that way.” So she was right to notice that humans are allergic to change. I think we could all agree on that.

But her tactic was, “I try to change that,” whereas with Tim Berners-Lee and the World Wide Web, he sort of embraced it. He sort of went with it. He said, “Okay. I’ve got these things I want to convince people to use, but humans are allergic to change,” and that’s why he built on top of what was already there.

He didn’t create these things from scratch. HTTP, the protocol, is built on top of TCP/IP, the work of Bob Kahn and Vint Cerf. The URLs work on top of the Domain Name System and the work of Jon Postel. And HTML, this very simple format, was built on top of a format, a flavour of SGML, that everybody at CERN was already using. So it wasn’t a hard sell to get people to use HTML because it was very familiar.

In fact, if you were to look at SGML back then in use at CERN, you would see these elements.

<body> <title> <p> <h1> <h2> <h3> <ol> <ul> <li> <dl> <dt> <dd>

These are SGML elements used in CERN SGML. You could literally take a CERN SGML document, change the file extension to .htm, and it was an HTML document.

It’s true. Humans are allergic to change, so go with that. Don’t make it hard for them.

Now of course we got these elements in HTML. This is where they came from: taken wholesale from SGML. Over time, we got a whole bunch more elements, more semantic richness added to HTML, so we can structure our documents more clearly.

<article> <section> <aside> <figure> <main> <header> <footer>

Where it gets really interesting is that we also got more behavioural elements added to HTML, the elements that browsers recognise and do quite advanced things with like video and audio and canvas.

<canvas> <video> <audio> <picture> <datalist>

What I find fascinating is that we can evolve a format like this; we can just keep adding things to it. The reason we can do that is that these elements were designed with backwards compatibility built in. If you have an opening video tag and a closing video tag, you can put content in between for the browsers that don’t understand the video tag.

The same with canvas. You can put fallback content in there, so you don’t have to wait for every browser to support one of these elements. You can start using it straight away and still provide something for older browsers. That’s very deliberate.
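As a sketch of that fallback pattern (the file names here are placeholders, not from the talk):

```html
<!-- Browsers that understand <video> play the file and never show
     the content inside the element; older browsers ignore the
     unknown tag and render that content instead. -->
<video src="talk.mp4" controls>
  <p>
    Your browser doesn’t support video.
    <a href="talk.mp4">Download the talk instead.</a>
  </p>
</video>

<!-- The same pattern for canvas: fallback content for browsers
     that don’t support the element. -->
<canvas width="400" height="300">
  <img src="chart.png" alt="A bar chart of monthly sales">
</canvas>
```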

The canvas element was actually a proprietary element created by Apple and other browsers saw it and said, “Oh, yeah, we like that. We’re going to take that,” and they started standardising on it. To begin with, it was a standalone element like img. You put a closing slash there or whatever. But when it got standardised, they deliberately added a closing tag so that people could put fallback content in there. What I’m saying is it wasn’t an accident. It was designed.

Now Chris yesterday mentioned the HTML design principles, and this is one of them—that when you’re creating new elements, new attributes, you should design them in such a way that “the content can degrade gracefully in older or less capable user agents even when making use of these new elements, attributes, APIs, content models.” It is a design decision. There are HTML design principles. They’re very good.

I like design principles. I like them a lot. In fact, I’m a bit of a nerd for design principles, and I collect them at this URL:


There you will find design principles for software, for organisations, for people, for schools of thought. There are even Chindogu design principles collected there.

I guess what fascinates me about principles is where they sit. Jina talked about this yesterday in relation to design systems: you begin with the goals. This is the vision, what you’re trying to achieve. Then the principles define how you’re going to achieve it, and the patterns are the result of the principles. The principles are based on the goals, which result in the patterns.

In the case of the World Wide Web, the goal is to make hardware irrelevant. Access to information regardless of hardware. The principles are encoded in the HTML design principles, and then the patterns are those elements that we get, those elements that are designed with backwards compatibility in mind.

Now when we look at new things added to HTML, new features, new browser APIs, what we tend to ask, of course, is: how well does it work?

How well does this thing do what it claims it’s going to do? That’s an excellent question to ask whenever you’re evaluating a new technology or tool. But I don’t think it’s the most important question. I think it’s just as important to ask: how well does it fail?

How well does it fail?

If you look at those HTML elements, which have been designed that way, they fail well. They fail well in older browsers; you can have that fallback content. I think this is a good lens to look at technology through, because what we tend to do when there’s a new browser API is go to Can I Use and see what the support is like. We see some green, and we see some red. But the red doesn’t tell you how well it fails.

Here’s an example: CSS shapes. If you go to caniuse.com and you look at the support, there’s some green, and there’s some red. You might think there’s not enough green, so I’m not going to use it. But what you should really be asking is, how well does it fail?

In the case of CSS shapes, here’s an example of CSS shapes in action. I’ve got a border-radius on this image, and I’ve applied shape-outside: circle to it, so the text wraps around that circle. How well does it fail? Well, let’s look at it in a browser that doesn’t support CSS shapes, and we see the text goes in a straight line.

I’d say it fails pretty well because this is what would have happened anyway, and the text wrapping around the circle was kind of an enhancement on top of what would have happened anyway. Actually, it fails really well, so you might as well go ahead and use it. You might as well go ahead and use it even if it was only supported in one browser or two browsers because it fails well.
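Here’s a minimal sketch of that enhancement, with a made-up class name:

```css
/* The image floats and is clipped to a circle; shape-outside
   wraps the surrounding text around that circle. Browsers
   without CSS Shapes support ignore the shape-outside line and
   wrap the text in a straight line, which is exactly what would
   have happened anyway. */
.profile {
  float: left;
  width: 200px;
  height: 200px;
  border-radius: 50%;
  shape-outside: circle(50%);
  margin: 0 1em 1em 0;
}
```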

Let’s use that lens of asking how well does it work and how well does it fail to look at some of the technologies that you’ve probably been hearing about—some of the buzzwords in the world of front-end development. Let’s start with this. This is a big buzzword these days: service workers.

Service Workers

Who has heard of service workers? Okay. Quite a few.

Who is using service workers? Not so many. Interesting.

The rest of you, you’ve heard of it, and you’re currently probably in the state of evaluating the technology, trying to decide whether you should use this technology.

I’m not going to explain how service workers work. I guess I’ll just describe what it can do. It’s an amazing piece of technology that you kind of install on the user’s machine and then it sits there like a virus intercepting requests, which sounds scary, but actually is really powerful because you can really improve performance. You can serve things from a cache. You get access to the cache API. You can make things work offline, which is kind of amazing, because you’ve got access to those requests.

I was trying to describe it the other day and the best way I could think of describing it was a service worker is like doing a man-in-the-middle attack on your own website, but in a good way—in a good way. There’s endless possibilities of what you can do with this technology. It’s very powerful. And, at the very least, you can make a nice, custom, offline page instead of the dinosaur game or whatever people would normally get when they’re offline. You can have a custom offline page in the same way you could have a custom 404 page.

The Guardian have a service worker on their site, and they offer a crossword puzzle. You’re on the train trying to read an article and there’s no internet connection? Well, you can play the crossword puzzle. Little things like that; it can be used for real delight. It’s a great technology.

How well does it work? It does what it says. You don’t get anything for free with service workers, though. A service worker file is JavaScript, which can actually be quite confusing, because you’ll be tempted to treat it like your other JavaScript files and do what you’d normally do with them. Don’t. Service worker scripts happen to be written in JavaScript, but they require a whole new mindset. It’s kind of hard to get your head around, and it’s a new technology to learn, but it’s powerful.

Well, let’s see what the support is like on Can I Use. Not bad. Not bad at all. Some good green there, but there’s quite a bit of red. If this is the reason why you haven’t used service workers yet because you see the support and you think, “Not enough support. I’m not going to invest my time,” I think you haven’t asked the question, “how well does it fail?” This is where I think the absolute genius of service worker comes in.

Service workers fail superbly, because here’s what happens. The first time someone visits your site, there is, of course, no service worker installed on the client. They must first visit your site and have the service worker downloaded and installed, which means that, for that first visit, no browser supports service workers.

Then, on subsequent visits, you can use the service worker for the browsers that support it as this enhancement. Provide the custom offline page. Cache those assets. Do offline first stuff. But you’re not going to harm any of those browsers that are in the red on Can I Use, and that’s deliberate in the design of service workers. It’s been designed that way. I think service workers fail really well.
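To make that concrete, here’s a minimal sketch of the pattern, with made-up file names. This is browser code, and the feature check at the top means non-supporting browsers simply skip it:

```javascript
// In the page: only browsers that support service workers will
// run the registration. Everyone else is unaffected.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}

// In /serviceworker.js: cache a custom offline page at
// install time...
addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('static-v1').then(function (cache) {
      return cache.add('/offline.html');
    })
  );
});

// ...and serve it when a page navigation fails, for example on
// a train with no connection.
addEventListener('fetch', function (event) {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request).catch(function () {
        return caches.match('/offline.html');
      })
    );
  }
});
```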

Let’s look at another hot topic.

Web Components

Who has heard of web components? Who is using web components—the real thing now? Okay. Wow. Brave. Brave person.

Web components actually aren’t a specific technology; web components is an umbrella term. In a way, service worker is an umbrella term too, because it’s what you get access to through service workers that counts: you get access to the fetch API and the cache API and even notifications through a service worker.

With web components, it’s a term for a combination of specs, a combination of APIs: custom elements; the very sinister-sounding shadow DOM, which is not as scary as it sounds; and other things too, like HTML imports and the template element. All of this together is given the label web components. The idea is that we’ve already got these very powerful elements in HTML, and it’s great when new elements get added, but it takes a long time; the standards process is slow. What if we could just make our own elements? That’s what you get to do with custom elements. You get to make shit up.

<mega-menu> <slippy-map> <image-gallery> <modal-lightbox> <off-canvas>

These are common patterns where you keep having to reinvent the wheel, so let’s make an element for that. The only requirement with a custom element is that it has to have a hyphen in it. This is a long-term agreement with the spec makers that they will never make an HTML element with a hyphen in it; therefore, it’s a safe space to use a hyphen in a made-up element.

Okay, but if you just make up an element like this, it’s effectively the same as having a span in your document. It doesn’t do anything. It’s the other specs that make it come to life, like having HTML imports that link off to a file that describes what the browser is supposed to do with this new element that you’ve created.

Then in that file you could have your HTML. You could have your CSS. You could have JavaScript. And, crucially, it is modular. It doesn’t leak through. Those styles won’t leak through to the rest of the page. This is the dream we’ve been chasing: encapsulation. This is kind of the problem that React is solving. This is the reason why we have design systems, to try and be modular and try and encapsulate styles, behaviours, semantics, meaning.

Web components are intended as a solution to this, so it sounds pretty great. How well does it work? Well, let’s see what the browser support is like for some parts of web components. Take custom elements: yeah, some green, but there’s an awful lot of red. Never mind: as we’ve learned from looking at things like CSS shapes and service workers, the red doesn’t tell us anything, because lack of support in a browser doesn’t answer the question, how well does it fail? How well do web components fail?

This is where it gets interesting because the answer to the question, “How well do web components fail?” is …it depends.

It depends on how you use web components. It depends on whether you apply the same kind of design principles that the creators of HTML applied when making new elements.

Let’s say you make an image-gallery element, and you make it so that the content of the image gallery sits inside the opening and closing tags.

<image-gallery>
  <img src="..." alt="...">
  <img src="..." alt="...">
  <img src="..." alt="...">
</image-gallery>

Now in a non-supporting browser this is actually acceptable because they won’t understand what this image-gallery thing is. They won’t throw an error because HTML is very tolerant of stuff it doesn’t understand. They’ll just display the images as images. That’s acceptable.
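In browsers that do support custom elements, a script can then upgrade that same markup. A rough sketch, with the slideshow behaviour invented purely for illustration:

```javascript
// Browsers without customElements never run this definition, so
// they keep showing the plain images as fallback content.
if ('customElements' in window) {
  customElements.define('image-gallery', class extends HTMLElement {
    connectedCallback() {
      // The images are already in the markup; enhance them into
      // a simple one-at-a-time slideshow with a Next button.
      this.images = Array.from(this.querySelectorAll('img'));
      this.index = 0;
      const next = document.createElement('button');
      next.textContent = 'Next';
      next.addEventListener('click', () => {
        this.show(this.index + 1);
      });
      this.appendChild(next);
      this.show(0);
    }
    show(i) {
      this.index = (i + this.images.length) % this.images.length;
      this.images.forEach((img, j) => {
        img.hidden = (j !== this.index);
      });
    }
  });
}
```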

Now in a browser that supports web components, all those different specs, you can take these images and whiz-bang them up into a swishy carousel with all sorts of cool stuff going on, that’s encapsulated, that you can share with other people, that people can just drop into their site. If you do this, web components fail very well. However, what I tend to see when I see web components in use is more like this: literally an opening tag and a closing tag, with all of the content, all the behaviour, and all the styling away somewhere else, pulled in through JavaScript, creating a kind of single point of failure.

<image-gallery></image-gallery>
In fact, there’s demo sites to demonstrate the power of web components that do this. The Polymer Project, there’s a whole collection of web components, and they created an entire online shop to demonstrate how cool web components are, and this is the HTML of that shop.


<body>
  <shop-app></shop-app>
  <script src="..."></script>
</body>

The body element simply contains a shop-app custom element and then a script, and all the power is in the script. Here the web component fails really badly, because you get absolutely nothing. That’s what I mean when I say it depends. It depends entirely on how we use them.

Now the good news is, as we saw from looking at Can I Use, it’s very early days with web components. We haven’t figured out yet what the best practices are, so we can set the course of the future here. We can decide that there should be design principles for how we collectively use this powerful technology like web components.

See, the exciting thing about web components is that they give us developers the same power that previously only browser makers had. But the scary thing about web components is that they give us developers the same power that previously only browser makers had. With great power, et cetera, et cetera, and we should rise to the challenge of that responsibility.

What’s interesting about both these things we’re looking at is that, like I said, they’re not really a single technology in themselves. They’re kind of these umbrella terms. With service worker it’s an umbrella term for fetch and cache and notifications, background sync — very cool stuff. With web components it’s an umbrella term for custom elements and HTML imports and shadow DOM and all this stuff.

But they’re both coming from the same place, the same sort of point of view, which is this idea that we, web developers, should be given that power and that responsibility to have access to these low-level APIs rather than just waiting for standards bodies to give us access through new APIs. This is all encapsulated in a school of thought called The Extensible Web, that we should have access to these low-level APIs.

The Extensible Web is, quite literally, a manifesto. There’s a manifesto for the Extensible Web. It’s just a phrase, not a technology; just words. But words are very powerful when it comes to technology, when it comes to adopting technology. Words can get you very far. Ajax was just a word, a word for technologies that already existed at the time, but Jesse James Garrett put a word on it, and that made it easier to talk about, and it helped the adoption of those technologies.

Responsive Web Design: what Ethan did was put a phrase to a collection of technologies (media queries, fluid layouts, fluid images), wrapping it all up in a very powerful term, and the web was never the same.

Progressive Web Apps

Here’s a term you’ve probably heard over the last couple of days: progressive web apps. Anybody who went to the Microsoft talk yesterday at lunchtime will have heard about progressive web apps. It’s just a term, an umbrella term for other technologies underneath. A progressive web app is the combination of having your site run over HTTPS, so it’s secure (which, by the way, is a requirement for running a service worker); having that service worker; and having a manifest file, which contains all this metadata. Chris mentioned it yesterday: you point to your icons and metadata about your site. All that adds up to, hey, you’ve got a progressive web app.
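The manifest file itself is just JSON metadata, something along these lines (all the values here are placeholders); each page then points to it with a link element whose rel attribute is set to manifest:

```json
{
  "name": "Example Site",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#336699",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```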

I like this term. It’s a good-sounding term, created by Frances Berriman and her husband, Alex Russell, to describe this. Again, it’s a little bit of a manifesto, in that these sites should be responsive and intuitive; they need to fulfil certain criteria. But I worry sometimes about the phrasing. I mean, all the technologies are great, and you will actually get rewarded if you use them. If you use HTTPS, you’ve got a service worker, and you’ve got a manifest file, then on Chrome for Android, if someone visits your site a couple of times, they’ll be prompted to add the site to the home screen, just as though it were a native app. It will behave like a native app in the app switcher. You’re getting rewarded for these best practices.

But when I see the poster children for progressive web apps, my heart sinks when I see stuff like this. This is the Washington Post progressive web app, but this is what you get if you visit on the “wrong” device. In this case I’m visiting on a desktop browser, and I’m being told to come back with a mobile browser. Oh, how the tables have turned! It was not that long ago when we were being turned away on our mobile devices, and now we’re turning people away on desktops.

This was a solved problem. We did this with responsive web design. The idea of having a separate site for your progressive web app - no, no, no. We’re going back to the days of m.sites and the “real” website. No. No. I feel this is the wrong direction.

I worry that maybe this progressive web app terminology might be hurting it and the way that Google are pushing this app shell model. Anything can be a progressive web app, anything on the web.

I mean I’ve got things that I’ve turned into progressive web apps, and some of them might be, okay, maybe you consider this site, Huffduffer, as an app. I don’t know what a web app is, but people tell me it might be a web app. But I’ve also got like a community website, and it fulfils all the criteria. I guess it’s a progressive web app. My personal site, it’s a blog, but technically it’s a progressive web app. I put a book online. A book is an app now because it fulfils all the criteria. Even a single page collecting design principles is technically a progressive web app.

I worry about the phrasing, potentially limiting people when they come to evaluate the technology. “Oh, progressive web app, well, that’s not for me because I’m not building apps. I’m building some other kind of site.” I think that would be a real shame because literally every site on the web can benefit from those technologies, which brings me to the next question when we’re evaluating technology. Who benefits from the technology?

Who benefits?

Broadly speaking, I would say there are kind of two schools of thought about who could benefit from a particular technology on the Web. Does the technology benefit the developer or does the technology benefit the user? Much like what Chris was showing yesterday with the Tetris blocks and kind of going on a scale from technologies that benefit users to technologies that benefit developers.

Now I would say that nine times out of ten there is no conflict. Nine times out of ten a piece of technology is beneficial to the developer and beneficial to the user. You could argue that any technology that benefits the developer is de facto a benefit to the user because the developer is working better, working faster, therefore they can get the website out, and that’s good for the user.

Let’s talk about technologies that directly impact users versus the technologies that directly impact developers. Now personally I’m going to generally fall down on the side of technologies that benefit users over technologies that benefit developers. I mean, you look at something like service workers. There isn’t actually a benefit to developers. If anything, there’s a tax because you’ve got to get your head around service workers. You’ve got a new thing to learn. You’ve got to get your head down, learn how it works, write the code. It’s actually not beneficial for developers, but the end result—offline pages, faster performance—hugely beneficial for users. I’ll fall down on that side.
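To give a flavour of that trade-off, here’s a minimal sketch of an offline-fallback service worker. The file name `offline.html` is just a placeholder, and this is browser-only code that runs in the service worker scope:

```javascript
// service-worker.js — runs in the browser's service worker scope
const CACHE = 'offline-v1';

// At install time, cache a fallback page while we're still online
addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.add('/offline.html'))
  );
});

// For every request, try the network first;
// if it fails (e.g. no connection), serve the cached fallback
addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request).catch(() => caches.match('/offline.html'))
  );
});
```

A small tax on the developer, paid once; a faster, offline-capable site for every user after that.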

Going back to when I told you I was a nerd for design principles. Well, I actually have a favourite design principle and it’s from the HTML design principles. It’s the one that Chris mentioned yesterday morning. It’s known as the priority of constituencies:

In case of conflict, consider users over authors over specifiers over theoretical purity.

That’s pretty much the way I evaluate technology too. I think of the users first. And the authors, that’s us, we have quite a strong voice in that list, but it is second to users.

Now when we’re considering the tools and we’re evaluating who benefits from this tool, “Is it developers, or is it users, or is it both?” I think we need to stop and make a distinction about the kinds of tools we work with. I’m trying to work out how to phrase this distinction, and I kind of think of it as inward facing tools and outward facing tools: inward facing tools developers use; outward facing tools that directly touch end users.

I’ll show you what I mean. These are like the inward facing tools in that you put them on your computer. They sit on your computer, but the end output is still going to be HTML, CSS, and JavaScript. These are tools to make you work faster: task runners, version control, build tools—all that kind of stuff.

Now when it comes to evaluating these technologies, my attitude is, whatever works for you. Now we can have arguments and say, “Oh, I prefer this tool over that tool”, but it really doesn’t matter. What matters is: does it work for you? Does it make you work faster? Does it make your team work faster? That’s really the only criteria because none of these directly touch the end user.

That criterion of, “Hey, what works for me”, that’s a good one to apply for these inward facing tools, but I think we need to apply different criteria for the outward facing tools, the tools that directly affect the end user because, yes, we, developers, get benefit from these frameworks and libraries that are written in CSS and JavaScript, but the user pays a tax. The user pays a tax in the download of these things when they’re on the client. It’s actually interesting to see how a lot of these JavaScript frameworks have kind of shifted the pendulum where it used to be that the user had to pay a tax if you wanted to use React or Angular or Ember. The pendulum is swinging back so that we can get the best of both worlds where you can use these tools as an inward facing tool, use them on the server, and still get the benefit to the user without the user having to pay this tax.

I think we need to evaluate inward facing tools and outward facing tools with different criteria. Now when it comes to evaluating tools, especially tools that directly affect the end user—CSS frameworks, JavaScript libraries, things like that—there’s a whole bunch of questions to ask to evaluate the technology, questions like: what’s the browser support like? What browsers does this tool not work in? What’s the community like? Am I going to get a response to my questions? How big is the file size? How much of a tax is the user going to have to download? All of these are good questions, but they are not the most important question.

The most important question—I’d say this is true of evaluating any technology—is, what are the assumptions?

What are the assumptions?

What are the assumptions that have been baked into the tool you’re about to use, because I guarantee you there are assumptions baked into those tools. I know that because those tools were created by humans. And we humans, we have biases. We have assumptions, and we can’t help but encode those biases and assumptions into what we make. It’s true of anything we make. It’s particularly true of software.

We talk about opinionated software. But in a way, all software is opinionated. You just have to realise where the opinions lie. This is why you can get into this situation where we’re talking about frameworks and libraries, and one person is saying, “Oh, this library rocks”, and the other person is saying, “No, this library sucks!” They’re both right and they’re both wrong because it entirely depends on how well the philosophy of that tool matches your own philosophy.

If you’re using a tool that’s meant to extend your capabilities and that tool matches your own philosophy, you will work with the tool, and you will work faster and better. But if the philosophy of the tool has a mismatch with your own philosophy, you’re going to fight that tool every step of the way. That’s why people can be right and wrong about these frameworks. What works for one person doesn’t work for another. All software is opinionated.

It makes it really hard to try and create un-opinionated software. At Clearleft we’ve got this tool. It’s an open source project now called Fractal for building pattern libraries, working with pattern libraries. The fundamental principle behind it was that it should be as agnostic as possible, completely agnostic to build tools, completely agnostic to templating languages, that it should be able to work just about anywhere. It turns out it’s really, really hard to make agnostic software because you keep having to make decisions that favour one thing over another at every step.

Whether it’s writing the documentation or showing an example, you have to show the example in some templating language. You have to choose a winner in the documentation to demonstrate something. It’s really hard to write agnostic software. Every default you add to a piece of software shows your assumptions because those defaults matter.

But I don’t want to make it sound like these tools have a way of working and there’s no changing it, that the assumptions are baked in and there’s nothing you can do about it; that you can’t fight against those assumptions. Because there are examples of tools being used other than the uses for which they were intended right throughout the history of technology. I mean, when Alexander Graham Bell created the telephone, he thought that people would use it to listen to concerts that were happening far away. When Edison created the gramophone, he thought that people would record their voices so they could have conversations at a distance. Those two technologies ended up being used for the exact opposite purposes to what their inventors intended.

Hedy Lamarr

Here’s an example from the history of technology from Hedy Lamarr, the star of the silver screen in the first half of the 20th century here in Europe. She ended up married to an Austrian industrialist arms manufacturer. After the Anschluss, she would sit in on those meetings taking notes. Nobody paid much attention to her, but she was paying attention to the technical details.

She managed to get out of Nazi occupied Europe, which was a whole adventure in itself. She made her way to America, and she wanted to do something for the war effort, particularly after an incident where a refugee ship was sunk by a torpedo. A whole bunch of children lost their lives, and she wanted to do something to make it easier to get the U-boats. She worked on a system for torpedoes. It was basically a guidance system for radio controlled torpedoes.

The problem is, if you have a radio frequency you’re using to control the torpedo to guide it towards its target, if the enemy figure out what the frequency is, they can jam the signal and now you can no longer control the torpedo. Together with a composer named George Antheil, Hedy Lamarr came up with this system for constantly switching the frequency, so both the torpedo and the person controlling it are constantly switching the radio frequency to the same place, and now it’s much, much harder to jam that transmission.
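The core of the idea translates surprisingly directly into code. This is only a toy sketch (the mechanism Lamarr and Antheil actually patented was mechanical, synchronised by piano-roll-style strips across 88 frequencies), but it shows the trick: both parties derive the same hop sequence from a shared secret seed, so they stay in sync while a jammer without the seed can’t predict the next channel.

```javascript
// Toy frequency-hopping sketch: transmitter and receiver derive the
// same hop sequence from a shared secret seed, so they stay in sync,
// while an eavesdropper without the seed sees only unpredictable hops.
function hopSequence(seed, steps, channels = 16) {
  const hops = [];
  let state = seed;
  for (let i = 0; i < steps; i++) {
    // Simple linear congruential generator (illustrative only,
    // not cryptographically secure)
    state = (state * 1103515245 + 12345) % 2147483648;
    hops.push(state % channels);
  }
  return hops;
}

// Both parties agree on a seed beforehand…
const transmitter = hopSequence(42, 5);
const receiver = hopSequence(42, 5);
// …so at every time step they are tuned to the same channel.
console.log(transmitter.join(',') === receiver.join(',')); // true
```

Without the seed, jamming one frequency only knocks out a fraction of the transmission; the signal has already hopped on.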

Okay. But what’s that got to do with us, some technology for guided missiles in World War II? In this room, I’m guessing you’ve got devices that have WiFi and Bluetooth and GPS, and all of those technologies depend on frequency hopping. That wasn’t the use for which it was created, but that’s the use we got out of it.

We can kind of bend technology to our will, and yet there seems to be a lot of times this inevitability to technology. I don’t mean on the front-end where it’s like, “I guess I have to learn this JavaScript framework” because it seems inevitable that everyone must learn this JavaScript framework. Does anyone else feel disempowered by that, that feeling of, “uh, I guess I have to learn that technology because it’s inevitable?”

I get that out in the real world as well: “I guess this technology is coming”, you know, with self-driving cars, machine learning, whatever it happens to be. I guess we’ve just got to accept it. There’s even this idea of technological determinism that technology is the driving force of human history. We’re just along for the ride. It’s the future. Take it.

The ultimate extreme of this attitude of technological determinism is the idea of the technological singularity, kind of like the rapture for the nerds. It’s an idea borrowed from cosmology where you have a singularity at the heart of a black hole. You know a star collapses to as dense as possible. It creates a singularity. Nothing can escape, not even light.

The point is there’s an event horizon around a black hole, and it’s impossible from outside the event horizon to get any information from what’s happening beyond the event horizon. With a technological singularity, the idea is that technology will advance so quickly and so rapidly there will be an event horizon, and it’s literally impossible for us to imagine what’s beyond that event horizon. That’s the technological singularity. It makes me uncomfortable.

But looking back over the history of technology and the history of civilisation, I think we’ve had singularities already. I think the Agricultural Revolution was a singularity because, if you tried to describe to nomadic human beings before the Agricultural Revolution what life would be like when you settle down and work on farms, it would be impossible to imagine. The Industrial Revolution was kind of a singularity because it was such a huge change from agriculture. And we’re probably living through a third singularity now, an information age singularity.

But the interesting thing is, looking back at those previous singularities, they didn’t wipe away what came before. Those things live alongside. We still have agriculture at the same time as having industry. We still have nomadic peoples, so it’s not like everything gets wiped out by what comes before.

In fact, Kevin Kelly, who is a very interesting character, he writes about technology. In one of his books he wrote that no technology has ever gone extinct, which sounds like actually a pretty crazy claim, but try and disprove it. And he doesn’t mean a technology sitting in a museum somewhere. He means that somewhere in the world somebody is still using that piece of technology, some ancient piece of farming equipment, some ancient piece of computer equipment.

He writes these very provocative sort of books with titles like What Technology Wants, and The Inevitable, which makes it sound like he’s on the side of technological determinism, but actually his point is a bit more subtle. He’s trying to point out that there is an inevitability to what’s coming down the pipe with these technologies, but we shouldn’t confuse that with not being able to control it and not being able to steer the direction of those technologies.

Like I was saying, something like the World Wide Web was inevitable, but the World Wide Web we got was not. I think it’s true of any technology. We can steer it. We can choose how we use the technologies.

Looking at Kevin Kelly and his impressive facial hair, you might be forgiven for thinking that he’s Amish. He isn’t Amish, but he would describe himself as Amish-ish in that he’s lived with the Amish, and he thinks we can learn a lot from the Amish.

It turns out they get a very bad reputation. People think that the Amish reject technology. It’s not true. What they do is they take their time.

The Amish are steadily adopting technology at their pace. They are slow geeks.

I think we could all be slow geeks. We could all be a bit more Amish-ish. I don’t mean in our dress sense or facial hair. I mean in the way that we are slow geeks and we ask questions of our technology. We ask questions like, “How well does it work?” but also, “How well does it fail?” That we ask, “Who benefits from this technology?” And perhaps most importantly that we ask, “What are the assumptions of those technologies?”

Because when I look back at the history of human civilisation and the history of technology, I don’t see technology as the driving force; that it was inevitable that we got to where we are today. What I see as the driving force are people, remarkable people, it’s true, but people nonetheless.

Rosalind Franklin Margaret Hamilton Grace Hopper Hedy Lamarr

And you know who else is remarkable? You’re remarkable. And your attitude shouldn’t be, “It’s the future. Take it.” It should be, “It’s the future. Make it.” And I’m looking forward to seeing the future you make. Thank you.

Monday, July 10th, 2017

Off the Beaten Track · Matthias Ott – User Experience Designer

I love the way Matthias sums up his experience of the Beyond Tellerrand conference. He focuses on three themes:

  • Rediscovering originality,
  • Storytelling with code, and
  • Adopting new technologies.

I heartily agree with his reasons for attending the conference:

There are many ways to broaden your horizons if you are looking for inspiration: You could do some research, read a book or an article, or visit a new city. But one of the best ways surely is the experience of a conference, because it provides you with many new concepts and ideas. Moreover, ideas that were floating around in your head for a while are affirmed.

Tuesday, May 23rd, 2017

Evaluating Technology – Jeremy Keith – btconfDUS2017 on Vimeo

I wasn’t supposed to speak at this year’s Beyond Tellerrand conference, but alas, Ellen wasn’t able to make it so I stepped in and gave my talk on evaluating technology.

Evaluating Technology – Jeremy Keith – btconfDUS2017

Thursday, November 10th, 2016

From Pages to Patterns – Charlotte Jackson - btconfBER2016 on Vimeo

The video of Charlotte’s excellent pattern library talk that she presented yesterday in Berlin.

From Pages to Patterns – Charlotte Jackson - btconfBER2016

Wednesday, May 11th, 2016

Jeremy Keith on Vimeo

Here’s the video of the talk I just gave at the Beyond Tellerrand conference in Düsseldorf: Resilience.

Resilience - Jeremy Keith - btconfDUS 2016

Friday, December 11th, 2015

An Event’s Lifecycle: The Highs, The Lows, The Silence // beyond tellerrand

I can certainly relate to everything Marc describes here. You spend all your time devoted to putting on an event; it’s in the future, coming towards you; you’re excited and nervous …and then the event happens, it’s over before you know it, and the next day there’s nothing—this thing that was dominating your horizon is now behind you. Now what?

I think if you’ve ever put something out there into the world, this is going to resonate with you.

Sunday, September 6th, 2015


A presentation on progressive enhancement from the Beyond Tellerrand conference held in Düsseldorf in May 2015.

Es ist mir eine Freude—und eine Überraschung—wieder hier zu sein. [It is a pleasure—and a surprise—to be here again.] No, really good to be back at Beyond Tellerrand. So I’ve been here in the odd years. I was here year one, year three, and now it’s year five. So it’s like the opposite of Star Trek films. It’s only the odd years that I’m here.

Today, what I’d like to do is I’d like to change your mind. When I say, “Change your mind,” I don’t mean I want you to reverse a previously held position. No, I mean I literally want to change your mind, as in rewire your brains.

That might sound like quite a difficult task: to rewire the brains of another human. But, the truth is, we rewire our brains all the time, right? When you see something, and once you’ve been shown it, you can’t un-see it. Well, that’s because you’ve had your mind changed. Your brain has been rewired.

Like the first time someone shows you the arrow in the FedEx logo, right, between the E and the X, and then every time you see the FedEx logo, all you see is the arrow, right? You can’t un-see that. You’ve all seen the arrow, right? Okay, good. Right?

Or the Toblerone logo, are you guys familiar with this? Who sees the bear in the Toblerone? About half the room. [Laughter]

Right? So now the other half of the room have just had their brains rewired because you’ll always see the bear.

Consider the duck—a perfectly ordinary, everyday duck. But then somebody, usually on the Internet, tells you all ducks are actually wearing dog masks. All ducks are actually wearing dog masks. [Applause]

Wait, wait, wait, wait, wait. When I show you the same picture of the exact same duck …so I have already succeeded in rewiring your brains.

But these are all very trivial examples, right, of having your brain rewired. There are much more important examples we’ve seen throughout history like Copernicus and Galileo with their heliocentric model of the solar system rather than the geocentric model. And it changed our understanding of our place in the universe.

What’s interesting about these kinds of brain rewiring, fundamental shifts is that nothing actually changes in the world itself, right? The earth always orbited around the sun, not the other way around. And yet, after Copernicus and Galileo, everything changed because our understanding of our place in the universe changed.

Or when Charles Darwin finally published his beautiful theory of evolution by natural selection, again, nothing actually changed in the natural world. Natural selection had been going on for millennia. And yet, everything changed because our understanding of our place in the natural world changed with Darwin.

I love this, by the way. This is a page from Darwin’s sketchbook. It’s one of my favourite sketches of all time where you literally see the idea forming in the sketch he made. What it reminds me of is this: This is another paper. This one sits behind a glass case at CERN. “Information Management: A Proposal.” This is by Tim Berners-Lee, and this would go on to become the World Wide Web. This was something he had to present to his supervisor, Mike Sendall, and you can see what Mike Sendall wrote at the top. He wrote, “Vague, but exciting.” [Laughter]

That gave the web its go-ahead. And in a way, what Tim Berners-Lee had to do then—it wasn’t enough to just invent, you know, HTTP, HTML, and URLs—he had to convince people that this was a good way of managing information. He had to rewire people’s brains. This idea of this network that has no centre where anybody can add a new node to the network; that requires some fundamental shifts …and the idea that anybody could access it. You didn’t need a particular computer or particular operating system. That this would be for everyone. Right, we had to rewire our brains to get that, to understand this decentralisation.

And then comes the problem of designing, developing for this new medium. How do we get our heads around it? Well, at first, we kind of don’t. At first what we do is we take what we already know from the previous medium and apply it.

Does anyone remember this book? Oh, yeah. Okay. We’re showing our age now. Creating Killer Web Sites, by David Siegel. It’s an old book. This—this was the guy who came up with the idea of using the one pixel by one pixel spacer gif, right, and giving it different dimensions. Tables for layout, right, all these hacks that we used to do to design for the web.

They were hacks, but they were necessary hacks because we didn’t have any other way of doing it until, you know, Web Standards came along. But, fundamentally, our idea of design was how do we make it look as good as the printed page? How do we make it look as good as what we’re used to from magazines, from designing for print?

And, of course, when you compare the web on that metric, especially 20 years ago, the web seemed woeful, right? 216 colors, 1 font: It seemed pretty hard to create killer websites. But people, even back then, were making the point that this isn’t the way to go about it. That what we should do is truly rewire our brains and accept the web for what it is.

John Allsopp wrote, 15 years ago—15 years ago, he wrote an article A Dao of Web Design. Who’s read A Dao of Web Design by John Allsopp? You are my people. The rest of you, please read A Dao of Web Design by John Allsopp. It’s wonderful.

What makes it wonderful, partly, is the fact that it’s still so relevant. I mean, you can’t say that about many articles about the Web that are 15 years old; that are more relevant today than the day they were written. Essentially what he’s arguing for is that we accept the web as the web. We embrace flexibility and ubiquity. Embrace that unknown nature of the web rather than trying to fight it, rather than trying to treat it like the medium that came before, right? As if it’s a poor, second cousin to print.

John pointed out that this isn’t new. This always happens. What happens is, a new medium comes along, and what we initially do is we take on the tropes of the medium that came before.

When radio came along, people did theatre on the radio. Then when television came along, they did radio, but on television. In each case, it took a while for each new medium to develop its own vocabulary.

Scott McCloud talked about this very same thing in his great book Understanding Comics. A different medium, but he noticed the same trend. We take what we know from the medium before and then apply it to this new medium, before we really get our heads around it, before we change our minds, rewire our brains.

When I look back at the history of design and development on the web, I think the brain rewiring started to happen because of this group, the Web Standards Project. When we think of the Web Standards Project today, we think of their work in convincing browser makers to support the standards, right? Lobbying Internet Explorer and Netscape to support things like CSS, but that was only half their work.

The other half of the work of the Web Standards Project was convincing us, designers and developers, to embrace the web, to use Web Standards. Like people like Jeffrey Zeldman writing books like this. I’ve met people who say that this book changed their minds. It rewired their brains. They weren’t the same person after reading this book as they were before.

I guess the central message that the Web Standards Project, Jeffrey Zeldman, all these standardistas were getting across back then was to get away from this idea that your presentation and your structure would be all clumped together in HTML, and to separate them out instead, and that we could do that using Web Standards like CSS. So that we could allow HTML to be HTML, used for structuring content, and allow CSS to be used for presentation.

Now, it’s all very well to read this or have someone tell you this. But, to truly have your mind changed and to, like, get it, get that lightbulb moment, I think there’s one website that did that. It was Dave’s website, the CSS Zen Garden, right? This was a machine for rewiring brains.

I had heard, yeah, okay, separation of presentation and structure. I get it. But then when you’re looking through, you go, whoa. This is actually the same HTML with different CSS, and the design could change that much. That’s when you really get it.

Remember that “a-ha” moment looking at the Zen Garden? You’re like, okay, now I get it—the separation of presentation and structure.

Of course, there’s the third layer we have on the Web, which is JavaScript, and that’s there for behaviour. Structure, presentation, and behaviour: that’s the broad idea behind progressive enhancement; that we have these three layers of building, one on top of the other. I usually illustrate this with the upper back torso of my friend Lynn because she’s really into Web Standards and has this awesome tattoo that reads structure/HTML, presentation/CSS, and behaviour/JavaScript.

What this reminds me of is something I remember reading from the world of architecture. There’s a book called How Buildings Learn by Stewart Brand, a really good book. He talks about this concept of shearing layers or pace layers, that buildings themselves have different layers that they build upon that move at different paces, at different time scales.

The site of a building is on the geological time scale. It shouldn’t really change at all. The structure is also a fairly long-lasting thing. Then we build up. We build the walls. We build the rooms. And, within the rooms, we can move stuff around on a daily basis, but you’re not going to change the underlying structure very often, maybe a century or two, right?

I think this is useful for thinking about the web. On the web, I guess our site would be our URLs, the thing that shouldn’t change. Cool URLs don’t change. Then, on top of that, we get our structure with HTML. That’s what it’s there for.

There’s something in the design of HTML that’s really powerful, and it seems really simple, but actually it’s incredibly powerful. That’s this fault tolerant nature of HTML. It’s designed to be fault tolerant.

To explain what I mean, let’s look at what happens when you give a web browser an HTML element, an element with an opening tag and a closing tag, some text in between. Well, the browser is going to display the text in between the opening and closing tag. It’s standard behaviour.

Where it gets interesting is when you give a browser an element it doesn’t understand, right? It still just displays the text between the opening and closing tag, even though it doesn’t recognise that element. See, why this is interesting is what the browser does not do.

The browser does not throw an error to the end user. The browser does not stop parsing the HTML at this point and refuse to parse any further. It just sees something it doesn’t understand, renders the text between the tags, and carries on.

You all know this. This is a pretty simple facet of HTML. Yet, this is what allows HTML to grow over time. Because of this behaviour, we can start adding extra richness to HTML, introduce new elements, secure in the knowledge that older browsers will display that text in between the opening and closing tags. It won’t throw an error. It won’t stop parsing.
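You can see that behaviour with any element you invent on the spot; `<marvellous>` here is made up purely for illustration:

```html
<p>Before the mystery element.</p>

<!-- No browser recognises this element… -->
<marvellous>This text still gets displayed.</marvellous>

<p>…and parsing carries on afterwards, with no error shown to the user.</p>
```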

We can use that in the design of new elements to create fallback content. The fact that canvas has an opening and closing tag was a deliberate design decision. It initially came from Apple, and it was a self-closing tag. When it became a standard, they made sure they had an opening and closing tag so that they could provide fallback content for older browsers because of that fault tolerant nature of HTML.

The browser sees something it doesn’t understand. It just renders what’s in between the opening and closing tags. We get it with canvas. We get it with video. We get it with audio. Now we get it with picture.
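That deliberate design decision is easy to see in markup; the image path here is just a placeholder:

```html
<canvas width="400" height="300">
  <!-- A browser that understands canvas ignores this content.
       An older browser ignores the canvas tags instead,
       and renders this fallback between them. -->
  <img src="chart.png" alt="Bar chart of monthly visits">
</canvas>
```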

This fault tolerant nature of HTML has allowed HTML to grow and evolve over time. When you think about it, the age of HTML is crazy. It’s over 20 years old, and it still works. You might say it’s a completely different HTML at this point, but I see it as an unbroken line. That is, it is still the same HTML that we had over 20 years ago.

It’s kind of like the Ship of Theseus paradox, like your grandfather’s axe. This axe has been in the family for generations. The handle has been replaced three times, and the head has been replaced twice. It’s like this philosophical question. Is it still the same axe?

If you’re thinking to yourself, “Of course it’s not the same axe, because the handle has been replaced three times, and the head has been replaced twice,” then I would remind you that no cell in your body is older than seven years, and yet you consider yourself to be the same person you were as a child—something to think on. [Laughter]

And what this allows for, this fault tolerant nature, is that we can grow HTML and, therefore, achieve a kind of structural honesty by using the right element for the job. Again, this is an architectural term, just the idea that you use the correct structure. Rather than having a façade of something, you’re actually using the right element to mark something up. So, using tables for layout, that’s not structurally honest because that’s not what tables are for.

An example of structural honesty is if you need a button on a web page, you use the button element, right? It seems pretty straightforward. Yet, and yet, and yet, time and time again, I see stuff like this. Right? Maybe you’ll have class="button", role="button", an event handler on click to make this behave exactly like a button. Call me lazy. I would just use a button.
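Side by side, a sketch of the faux-button pattern and the structurally honest version; `save()` is a hypothetical handler:

```html
<!-- Recreating a button by hand: extra attributes, extra script,
     and keyboard support still has to be bolted on -->
<div class="button" role="button" tabindex="0" onclick="save()">Save</div>

<!-- Structurally honest: focusable, keyboard-operable, and
     announced as a button by assistive technology, for free -->
<button type="button" onclick="save()">Save</button>
```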

Then there’s CSS, which we use for our presentation. CSS is also fault tolerant, which is also extremely powerful and, I think, has added to the robustness of CSS.

Just stop for a minute and think about all the websites out there using CSS, which is pretty much all of them now, right? You think about how different they are, how varied they look, how varied they are in scope, how different their CSS must be.

Yet, all the CSS on all of those websites, it boils down to one simple pattern—I think, a beautiful pattern—and it’s this: You have a selector, you have a property, and you have a value. That’s it. That is all of CSS.

We have some, you know, nice syntactic sugar so that the machines can parse this stuff, but this is it. This is what all of CSS is. Again, that is really powerful. What’s powerful about it is the fault tolerant behaviour.

Here’s what happens if you give a web browser a selector it doesn’t understand. It just doesn’t parse whatever is in between the opening and closing curly braces. Again, what’s interesting here is what the browser doesn’t do. It does not throw an error to the end user. It does not stop parsing the CSS at this point and refuse to parse any further.

Likewise, you give it a property it doesn’t understand; it just ignores that line. The same with the value: Give it a value it doesn’t understand, it ignores that line, moves on to the next one. That’s really powerful. That means, as new features start to arrive, we can start to use them straightaway, even if they aren’t universally supported because, what’s going to happen in the older browsers? Nothing. And that’s absolutely fine.
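To sketch the behaviour being described (the selectors and properties here are just illustrative):

```css
/* A browser that doesn't understand one declaration
   skips that line and keeps going. */
.layout {
  float: left;    /* understood everywhere: acts as the fallback */
  display: grid;  /* unknown to older browsers: silently ignored */
}

/* A selector it can't parse means this whole rule is skipped,
   but parsing resumes at the very next rule. No error, no halt. */
.nav:focus-within {
  outline: 2px solid;
}
```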

We get a kind of material honesty of using the right CSS for the job in the same way we get structural honesty with HTML. When we used to want rounded corners, we’d have to slice up a circle and put four background images in an element. Now we can just say border-radius, right? We get this material honesty from the fault tolerant nature of CSS, just as we get a structural honesty from the fault tolerant nature of HTML.

Then there’s JavaScript, not fault tolerant. To be fair, I don’t think it could be because JavaScript isn’t a declarative language in the same way that HTML and CSS are. HTML and CSS can afford to just ignore things that they don’t understand, right? JavaScript is a programming language. It’s a scripting language.

You actually kind of want the scripting language to tell you when you’ve done something wrong. You want it to throw an error. Debugging would be really, really hard if, every time you made a mistake in a programming language, your environment just went, “Eh, don’t worry about it. Don’t worry about it,” right?

JavaScript will throw an error. If you give a browser some JavaScript it doesn’t understand, it will throw an error. It will stop parsing the JavaScript at that point and refuse to parse any further. So, it’s far more fragile than HTML or CSS.

Now, that’s okay as long as we use JavaScript the right way. As long as we’re aware of that fragility in comparison to the robustness of HTML and CSS, that’s absolutely fine. It’s just, we need to be aware of having safe defaults when we’re building. The safe defaults will be, well, your HTML, your structure, needs to be in place, right? Your CSS, you can’t rely on it, but if it doesn’t work, it’s not the end of the world.

What I’m saying is use JavaScript to enhance your sites, but don’t rely on JavaScript because of that fragile nature. Again, don’t get me wrong. I’m not saying, “Don’t use JavaScript.” I love JavaScript. I’ve written books on JavaScript.

JavaScript is absolutely awesome. But relying on JavaScript is a really dangerous game because of that fragility. I kind of think of it as like the electricity of the web in the same way as, in product design, you want to use electricity to turbo charge a system, turbo charge a product, turbo charge a building. But you don’t want to rely on electricity.

Jake Archibald, who was riffing on the Mitch Hedberg joke, put it nicely when he said that, “When an elevator fails, it’s useless. When an escalator fails, it becomes stairs.” So, on the web, “We should be building escalators, not elevators.”

It makes a lot of sense to me. It’s a robust way of building. Yet, and yet, and yet, I see stuff like this. This is Instagram.com if the JavaScript fails. It has finished loading at this point. Nothing more will come into the page. There are many, many sites out there like that. Effectively, what’s happened here is that JavaScript has become a SPOF, a Single Point Of Failure, in a way that HTML or CSS are very unlikely to be because of their fault-tolerant nature.

It seems very strange to me when confronted with these three layers of the web—HTML, CSS, JavaScript—that we would put all our eggs into the most fragile layer, right? That we’d turn the one with the fragile parsing into our Single Point Of Failure.

It’s all fun and games to point at sites like this where, if you switch off JavaScript, you get no content, but that’s not what this is about. That’s not what progressive enhancement is about. It’s not about people who switch off JavaScript. Who switches off JavaScript, right? Browsers are making it very, very difficult for people to choose to switch off JavaScript.

It’s about JavaScript failing for reasons you don’t even know. Maybe it happens on the server. Maybe it happens on the browser. Maybe it happens on the network in between. Just circumstances that are out of your control. That’s why you want to be building in a robust way.

This is Andy Hume. He puts it this way. He said, look, “Progressive enhancement is much more about dealing with technology failing than technology being supported,” right?

It isn’t, “Oh, we need to think about making it work in browsers that don’t have JavaScript.” No. Every browser doesn’t have JavaScript until the JavaScript loads. Things happen, and you need to embrace that you’re going to lose some of those packets. If it’s HTML or CSS, that’s going to be okay. If it’s JavaScript, that’s not going to be okay if you’re relying on the JavaScript.

What Andy is saying here, I think, was nicely paraphrased or summed up in a different way by my friend David who said, “Look, it’s simple. Build your apps so they aren’t a twirling shitshow of clown horns when JavaScript breaks.” Right? Seems pretty straightforward.

Perhaps more eloquently, Derek Featherstone said, “In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve the problem with a simpler solution lower in the stack, you should,” right? “It’s less fragile, more foolproof, and it just works.”

Again, Derek is an accessibility guy, and he’s not making the argument here that it’s about access, that it’s about reach, about providing the service to everyone. No, no, he’s pointing out that, from an engineering point of view, this makes more sense. Foolproof, less fragile, it just works: that’s the reason to work this way.

What all these people are saying, effectively, is a reformulation of an existing principle, a principle by this man, John Postel. This is Postel’s Law, or The Robustness Principle:

Be conservative in what you send; be liberal in what you accept.

I see Postel’s Law in action all the time. Browsers: the way that browsers handle HTML and CSS, that fault-tolerant error-handling, that’s Postel’s Law in action. They have to be liberal in what they accept because there’s a lot of crap HTML out there, so they can’t throw an error every time they see something they don’t understand.

I see the robustness principle in action all the time, not just in web development, but in design as well. In the world of UX, let’s say you’re making a form on the web. Well, you want to be conservative in what you send. Don’t send the really long form with lots of form fields. Send as few form fields as possible down the wire. But then, when the user is inputting into that form, be liberal in what you accept. Don’t make the user format their credit card number or their telephone number with spaces or without spaces. Be liberal in what you accept. That’s just another example of Postel’s Law in action.
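As a sketch of “be liberal in what you accept” on the input side (the function name is mine, not from the talk): strip out whatever formatting the user chose, instead of rejecting it.

```javascript
// Be liberal in what you accept: users type card numbers with
// spaces, with dashes, or with nothing at all. Normalise the
// input rather than making the user reformat it.
function normaliseCardNumber(input) {
  return input.replace(/[\s-]/g, '');
}
```

With this in place, “4111 1111 1111 1111”, “4111-1111-1111-1111”, and “4111111111111111” all come out identical, so validation only ever has to deal with one canonical form.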

I love stuff like this because I’m kind of obsessed with design principles, this idea that these things are there. You might as well make them visible, make them public, write them down, put them on the wall because design principles are inherent in anything that humans build. We can’t help but imbue what we build with our own beliefs, with our own biases. That’s certainly true of software.

Software, like all technologies, is inherently political. Code inevitably reflects the choices, biases, and desires of its creators.

We talk about opinionated software. The truth is, to a certain degree, all software is opinionated. That’s okay as long as you recognise it, as long as you realise when you’re taking on the philosophy of the piece of software. Then you can decide whether you want to go along with it.

A simple example: say you’re using a graphic design tool like Photoshop to create a website. To be fair, Photoshop was never intended for creating web pages. It’s for manipulating photographs.

Let’s say you’re trying to design a web page in Photoshop. Well, you fire up Photoshop. You hit Command+N for a new document. The first thing it asks you for is a width and a height, a very fundamental design decision that maybe you’re not even conscious of making so early on in the process.

At the other end of the spectrum, maybe you have a Content Management System, and that Content Management System has been made with certain assumptions about where the content will be viewed or what kind of devices will be viewing that content. If, a few years down the line, those assumptions turn out not to be true, you’re basically butting heads with the people who created that software. With any piece of software, you need to evaluate it on that basis.

If you’re choosing a framework, a front-end library or something, there are all these different things to evaluate. What’s the file size? What’s the browser support? What’s the community like? All really important questions, but actually the most important question is: Does the philosophy of this piece of software match my own philosophy? That way, if it does, you’ll work with it, right? It’s a tool that’ll make you work faster, and that’s the whole idea.

If it doesn’t match your philosophy, you’re going to be fighting it the whole way. This is why it’s possible for one group of developers to say, “This framework rocks,” and another group to say, “This framework sucks,” right? They’re both right and they’re both wrong at the same time. It’s entirely subjective.

We have a whole bunch of different tools we use in web development, and I kind of put them into different buckets, but these ones have become particularly popular in recent years, sort of bigger JavaScript libraries. Worryingly, people are putting more of the core content and functionality into the most fragile layer of the stack. I’ve sort of arranged them here in increasing order of opinionatedness.

If I’m trying to work in a progressive enhancement kind of way, and I take a look at Backbone, yeah, I could probably make it work. It does URL routing with a fairly light touch. I can probably work with it.

Then we get to Angular. It’s like, yeah, not so much. I mean maybe I could use a subset of Angular and still build in this progressive enhancement kind of way, thinking in this layered way. But, Angular actually has a way of thinking, right? There’s an Angular way of doing things. And, to get the most use out of Angular, you kind of need to accept its philosophy.

Then there’s Ember, which is like, no, no, no, very opinionated, like: JavaScript is everything. Forget HTML. Open body. Close body. That’s all you need, everything done in JavaScript.

If you’re going to choose to use one of these, make sure that you understand what you’re taking onboard. Make sure that you agree with its philosophy. If you do, that’s great. These are excellent tools that’ll help you work faster. But, if you don’t agree with them, don’t use them, because you’re going to be fighting the whole way.

You know, I was listening to an episode of Shop Talk Show, the great podcast that Chris Coyier and Dave Rupert do. They had John Resig on the show. They were asking him about these libraries and why he thought they were getting so popular. He, you know, would have a good opinion on this having created jQuery all those years ago.

He said something interesting about these big sort of monolithic libraries. He said, you know, “No one wants to think that what they’re doing is trivial.” I think there’s really something to that.

The way that these things are marketed is like these are for complex apps, right? You’re working on hard things. You need a really powerful framework like Angular or Ember. And it’s true; no one likes to think that the problems they’re working on just aren’t that difficult.

If you were at a cocktail party and someone asks you what you do, you describe your daily work, and then they said, “Yeah, that sounds pretty easy,” you’d be offended, right? That’s an insult that someone would say your work sounds easy. But if you describe what you do and someone says, “Wow, that sounds hard,” you’re like, “Yeah.” [Laughter] “Yeah, it is hard.”

You know, these libraries are like, “Oh, the stuff you work on is so hard. You need this library.” You’re like, “Yeah, I am working on really tough problems.” But these are frameworks for building elevators, not escalators.

Tom Dale is one of the creators of Ember. He said something a while back. He, by the way, is changing his tune. He’s coming around to the progressive enhancement thing, which is wonderful to see. But a while back he said, look, “JavaScript is part of the web platform. You don’t get to take it away and expect the web to work.”

Now, I disagree with this, but maybe not for the obvious reasons that you might think, which is that second clause. But actually, I take issue with the first clause, “JavaScript is part of the web platform.” What I take issue with is this framing of the web as a platform. I don’t think it is.

Again, language can be so important. Simon talked about this this morning, the importance of language. There’s this idea of political language where our words can subtly affect how we talk about things. This happens in English a lot, you know, political debates. You talk about collateral damage or friendly fire, right, weasel words that obfuscate their meaning.

But also, political language like if you were about to have a debate on tax relief. Well, before the debate has even begun, you’ve framed tax as something you need relief from, right? The same way if we’re talking about the web platform. Don’t get me wrong. There are great people using this phrase: webplatform.org, Web Platform Daily - wonderful, wonderful resources. But, the web platform …the web isn’t a platform.

If you think about what a platform is, you know, it’s an all or nothing system. I get it from a marketing point of view when you put the web on the same level of something like, you know, Flash, which is a platform, or iOS or Android, which is a platform. It’s kind of cool that the web can sort of put itself on that same level. It’s kind of amazing. I could wake up and go: I’m going to build an application. Now, should I build it in iOS, Android, or the web?

The fact that that’s even a question is kind of awesome, but it’s a bit misleading because the way that a platform works is, okay, I build something using the Flash platform and, if you have the Flash plugin, you get 100% of what I’ve built. But, if you don’t have the Flash plugin, you get nothing. 100% or nothing: those are your options.

The same if I build for an iOS device, an iOS app, and you’ve got an iOS device. You get what I’ve built, 100% of what I’ve built. But if I build an iOS app and you’ve got an Android device, you get zero of what I’ve built.

If I build something on the web, maybe you’ll get 100% of what I’ve built. Maybe close to it, 90%, 80%. But the important thing is that you don’t necessarily get zero, that you get something. And, if I’m building it the right way using progressive enhancement, you’re at least going to get the content. You’re at least going to get the HTML. Maybe you’ll get CSS. Maybe not all the CSS, but you’ll get some of the JavaScript, maybe not all the JavaScript.

The web is not a platform. The web is a continuum. To treat the web as a platform is a category error.

Just as we made the mistake of trying to get our heads around the web and thought of it as print design, now we’re making the same mistake as thinking that the web is software, just like any other software. We can’t treat building for the web the same way we treat building for any other platform. The web is fundamentally different.

I see sentiments like this. Joe Hewitt, an incredibly smart guy, frustrated by the web. He said, “It’s hard not to be disappointed by the HTML if you’ve developed for iOS, Windows, or other mature platforms as I have.” Yeah, if you’re going to judge it on that level, as if the web were a platform, I totally understand where your frustration is coming from. But the web isn’t a platform. The whole point of the web is that it’s cross-platform, that it doesn’t matter whether you’ve got iOS or Android, or what plugin you do or don’t have installed, that you get something on the web.

But what you have to understand about Joe’s frustration here when he said this, what he was trying to accomplish on the web was, he was trying to get smooth scrolling to work, which, yeah, okay, that is tougher on the web than it is in native. And all of these kinds of interactions—tapping, dragging, swiping—is kind of a pain in the ass to do this stuff on the web. But then you take a step back, and you realise, hang on, hang on. These are all sort of surface level implementation details on an interface. No one wakes up in the morning and thinks about these verbs, right? No one is like: I’m really looking forward to swiping today. [Laughter]

If you look below the implementation surface level to what people are actually trying to do, well, these verbs become more important: that people are trying to find stuff. They’re trying to publish stuff, buy stuff, share stuff. Then if you ask yourself, okay, how can I make this happen? You find you can do that pretty low down in the stack. Usually just HTML will get you this far. Then you can enhance, and then you can add on the swiping and the tapping and the dragging and all of that stuff as an enhancement on top of the more fundamental semantic verbs lying underneath.

I think something that’s helped us get our head around this idea of what the web is, is responsive web design. Again, Simon talked about language, the importance of language. What Ethan did by coining this phrase and giving it a definition was, he helped rewire our brains to understand the web.

It’s interesting. In Ethan’s article when he first introduced the idea of responsive design, he references A Dao of Web Design by John Allsopp, building on top of that existing work. And, it is another term from architecture: responsive design. I feel like it goes hand-in-hand with progressive enhancement. If you think about what Ethan taught us was that, if you’re doing it right, layout is an enhancement.

Here’s the first website ever, the first website ever made, made by Tim Berners-Lee and published at CERN. This is how it would look in a small viewport. This is how it would look in a slightly larger viewport, slightly larger again, right up to a wide screen viewport. It’s responsive.

This may seem like a trivial, silly, little thing. Of course, there’s no CSS, right? Well, exactly. It starts off being responsive. The problem was never that websites weren’t responsive and we had to figure out how to make them responsive. No, no, no. The problem was we screwed it up. The web was responsive all along, and then we put that fixed width on it. We decided that websites were supposed to be 640 pixels wide, and then 800 pixels wide, and then 960—the magic number—wide.

Instead of talking about making a website responsive, we should actually be talking about keeping a website responsive. You get your content. You structure it in HTML. It’s responsive. You start adding CSS; the trick is, at every step of the way, to make sure you’re not screwing up the inherent flexibility of the web. That rewiring of your brain, that different way of looking at the problem, can make all the difference.

But, to think that way, to truly accept it that that’s the way to build, you kind of have to answer a fundamental question, and it’s this question: Do websites need to look exactly the same in every browser? I’m pretty sure at this point we all know the answer. If you don’t know the answer, you can find out by going to the URL dowebsitesneedtolookexactlythesameineverybrowser.com, built by Dan Cederholm, where you can see the answer, which is, “No!”

But, depending on the user agent that you happen to be visiting the site with, that will look slightly different because, hey, websites do not need to look exactly the same in every browser. Just to make sure that we’re all on the same page here, I’m going to ask for just one piece of audience participation if you could help me answer this question, please. Ladies and gentlemen, do websites need to look exactly the same in every browser?


Good. That was pretty resounding, and that’s great because, if you truly believe that, if you accept that, then suddenly all the stuff that we’re afraid of like, “Oh, my God. There are so many devices, so many different browsers, so many different APIs, so many different screen sizes.” All of this stuff that we’re frightened of, if you accept that websites do not need to look the same in every browser, then it stops being something to be frightened of. It becomes something to embrace.

It actually starts to get really fun. I’ll finish with an example of what I mean. I’ll show you a pattern. This is, I’ll admit, a very, very simple pattern. It’s a navigation pattern, but to demonstrate sort of progressive enhancement and responsive design in action.

This is a pattern I first saw in Luke Wroblewski’s old startup, Bagcheck, where, to reveal the navigation, you can see there’s a trigger in the top corner there. If you hit that, you get the navigation. Now, what’s actually happened is that was nothing more than a hyperlink to a fragment identifier at the bottom of the page. You were jumped down to the bottom of the page where the navigation sits.

I really, really like this pattern because that’s going to work everywhere, right? Following links, that’s pretty much what browsers do. So, you know if you start with this, it’s going to work everywhere. I’ve used it on websites, right? You have that trigger. You hit the trigger. It’s going to go down to the navigation. Have something to dismiss the navigation. Actually, all that’s happening under the hood is you’re just jumping to a different part of the same page.
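The underlying markup for that pattern is roughly this (a sketch; the ids are illustrative):

```html
<!-- In the page header: nothing fancier than a link to a fragment -->
<a href="#menu">Menu</a>

<!-- At the bottom of the page: the navigation itself,
     with a link back up to "dismiss" it (assuming the
     header carries id="top") -->
<nav id="menu">
  <a href="/">Home</a>
  <a href="/about">About</a>
  <a href="#top">Dismiss</a>
</nav>
```

Because it’s just links to fragment identifiers, it works in literally any browser; everything fancier gets layered on top.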

But here’s the great thing about progressive enhancement, about building for the web, is that you don’t have to stop here. Now that you’ve got that working, you’ve got something that works literally everywhere, you can build on top of that. You can enhance that.

A website I was working on with Stephanie, actually, we had some various navigation stuff going on. You’ve got search. You’ve got the “more” link. You’ve got the Menu. To begin with, all of that worked the same way. They were hyperlinks to fragment identifier further down the page, and you just followed those links.

But then I was able to enhance and say, okay, well, what if when you hit search, it slides in from the top? We can do that. JavaScript, CSS: we can do that.

What if that “more” menu is an overlay and sort of progressive disclosure on top of the content? Yeah, we can absolutely do that. Let’s have that menu slide in, in that off-canvas kind of way. Absolutely. Not a problem. If any of that doesn’t work, that’s fine. Websites do not need to look the same or behave the same in every browser.

Now, in order to do this, I would have to make sure that the browser understands the JavaScript I’m using because, remember, JavaScript isn’t fault tolerant. I don’t need to test for HTML. I don’t need to test for CSS. I do need to test for JavaScript.

The BBC had this lovely phrase they used called “Cutting the mustard,” where you’re literally checking to see if the browser understands what you’re about to do. In my situation, I was using querySelector, and I was using addEventListener. So I have an if statement to say, “If you understand these things, great. We’re going to do some JavaScript,” and that’s it.
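That test can be sketched like this (wrapped in a function of my own naming so the idea can be exercised outside a browser; the feature names, querySelector and addEventListener, are the ones from the talk):

```javascript
// "Cutting the mustard": only run the enhancements if the
// browser understands the features we're about to use.
function cutsTheMustard(doc, win) {
  return !!(doc && 'querySelector' in doc &&
            win && 'addEventListener' in win);
}

// In the browser, the guard is simply:
//
//   if (cutsTheMustard(document, window)) {
//     // enhance: wire up the off-canvas navigation, etc.
//   }
//
// Crucially, there is no else clause.
```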

There are two interesting things to notice about this pseudo code here. One is, I am detecting features. I am not detecting browsers. I am not looking at a browser user agent string. After listening to PPK, I think you understand why, right? [Laughter] It’s a mug’s game. Just don’t.

The other interesting thing here is that there is no else statement, right? If the browser understands these features, it gets these features. If it doesn’t, it doesn’t. Why? Because websites don’t need to look the same in every browser. I’m not going to spend my time making an older browser understand something it doesn’t understand natively, right?

To give you an example, let’s say I’m using media queries, as I am. It’s a responsive design. And I’m building it the right way, so I’ve got my usual styles outside the media queries. Then I put all my layout styles inside media queries. Well, some older browsers like Internet Explorer 8 or 7, they’re not going to understand those styles because they don’t support media queries.

What do I do about those styles inside the media queries? What do I do about Internet Explorer 8 and 7? Nothing. It’s perfectly fine. The content is well structured. It can stand on its own. Layout is an enhancement. I don’t need to make Internet Explorer 8 understand media queries.
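In other words, something like this (the selectors and breakpoint are illustrative): the styles every browser needs sit outside any media query; layout lives inside one.

```css
/* Outside any media query: typography, colour, spacing.
   IE7 and IE8 understand all of this. */
main {
  font-family: Georgia, serif;
  padding: 1em;
}

/* Inside a media query: layout as an enhancement.
   IE7 and IE8 skip this entire block, and that's fine:
   they get the single-column, well-structured content. */
@media (min-width: 40em) {
  main {
    width: 60%;
    float: left;
  }
}
```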

Now, let me check just once again. Do websites need to look exactly the same in every browser?




Marketing. Ah! That’s a good point. So what I’m suggesting here is aggressive enhancement, right? [Laughter] You give the browsers everything they’re capable of, and the other browsers that aren’t capable of it, don’t give it to them: aggressive enhancement.

People say, “I have to support IE8.” Marketing says, “You have to support IE8.” Actually, I agree. You should support IE8 and IE7 and IE6 and IE5. I think you should support Netscape Navigator 3. [Laughter]

Support. You should support those browsers, not optimise for them. There’s a big difference between support and optimisation. There’s a big difference between the marketing department saying, “You need to make sure your content is available to Internet Explorer 8,” and “You need to make sure the website looks exactly the same in Internet Explorer 8,” two very different things. I will absolutely do the first.

I support every browser. I optimise for none.

Look, I know it’s hard. I get it. You’d be like, “Yeah. You haven’t met my boss,” or, “You haven’t met my clients. I think websites don’t need to look the same in every browser, but it’s these people. They think that they do, so I’m doing it because of them.”

That seems like such a waste of your time and your talent. Confronted with that situation, you know full well that making something work and look the same in an older browser is ridiculously hard and not the best use of your time. Yet, you go and do it because the boss, the marketing department, whoever, is telling you to do it. You’re solving the wrong problem.

If you are making a site look the same in Internet Explorer 6 as it does in Chrome 30 jillion or whatever it is now, then you’re solving the wrong problem. The real problem to be solved is changing the mind of the person who thinks the website needs to look the same in both those browsers. That’s the real problem: rewiring someone’s brain, probably someone in the marketing department. And I get it because that’s hard.

Technology—technology is kind of easy, right? Give me a problem. I’m going to solve it with HTML, JavaScript, CSS, and code. Give me enough time; I think I could probably solve it. Human beings? Yeah, they’re hard, a much trickier problem. But you would at least be solving the right problem rather than wasting your time and your talent trying to make something look and work the same in an older browser as it does in a modern browser.

But I do get it because it seems impossible to convince someone like that, right? Yet, things do change. It seemed impossible when the Web Standards Project was trying to convince browsers to support standards and convince us to use standards. Why would we switch from using tables for layout? We knew how to do that. It was never going to happen, and it happened. It is possible. People can change.

There’s a certain irony in this approach of progressive enhancement: you are making sure that things are going to work further down the line as well, in devices that you can’t predict. The irony here is that the best way of being future friendly is to be backwards compatible.

There’s this idea that progressive enhancement was maybe, well, it made sense ten years ago, but it’s an idea of the past. It’s a different web now. We’ve moved on.

No, progressive enhancement is the one idea that has stood the test of time so well, right? It’s a technology of the future. It’s not a technology, but a way of thinking for the future. Today we’re thinking about screens still. What about interfaces that don’t even use screens? That’s when you really have to think about your structured content first, right?

What about if the network isn’t available, this whole idea of offline first? What if we start thinking about the network itself as an enhancement? Again, this way of thinking, progressive enhancement, that will stand you in good stead.

You know what’s funny is I showed you the first website ever made, but I showed it to you in a modern web browser. When you stop and think about that, that’s kind of amazing. I showed you a website that’s over 20 years old and it still worked in a modern browser.

Here’s what it would have looked like at the time, right? This is the kind of browser that was around, something like the Line Mode Browser.

Here’s what’s really amazing. If you’re building your websites the right way, you could take a website that you’ve built today and you could look at it in a browser from 20 years ago. Kind of mind boggling, but that’s exactly the idea behind the web: that anybody could get at that, that this is for everyone.

I saw a tweet go by, and it said:

Wow. You really don’t get the momentous idea of the web until you sit next to a guy with an old Nokia and another with a brand new iPad.

Now that—that is an idea that’s worth rewiring your brains for.

Thank you.


This presentation is licenced under a Creative Commons attribution licence. You are free to:

Copy, distribute and transmit this presentation.
Adapt the presentation.

Under the following conditions:

You must attribute the presentation to Jeremy Keith.

Monday, August 24th, 2015

Jeremy Keith – Enhance! – Beyond Tellerrand Düsseldorf 2015 on Vimeo

The video of my talk at this year’s Beyond Tellerrand. I was pleased with how this went, except for the bit 16 minutes in when I suddenly lost the ability to speak.

Jeremy Keith – Enhance! – beyond tellerrand DÜSSELDORF 2015

Saturday, August 15th, 2015

Dave Shea – – beyond tellerrand DÜSSELDORF 2015 on Vimeo

A wonderful, wonderful history of the web from Dave at this year’s Beyond Tellerrand conference. I didn’t get to see this at the time—I was already on the way back home—so I got Dave to give me the gist of it over lunch. He undersold it. This is a fascinating story, wonderfully told.

So gather round the computer, kids, and listen to Uncle Dave tell you about times gone by.

Dave Shea –  – beyond tellerrand DÜSSELDORF 2015

Monday, May 11th, 2015

100 words 050

I spoke at the Beyond Tellerrand conference today. I wasn’t expecting to speak at the Beyond Tellerrand conference today.

Marc asked me just a few days ago if I might be able to step into the breach. I was going to be attending the conference today anyway—my flight back to Brighton was in the evening—so I said sure, why not?

It was fun. Except for the moment when my throat decided it didn’t want to cooperate with this whole public speaking thing and just closed up for a minute or so. That was just a little bit disconcerting.

Thursday, April 16th, 2015

Creating the Schedule // beyond tellerrand

Marc and I have chatted before about the challenges involved in arranging the flow of talks at a conference. It’s great that he’s sharing his thoughts here.

Tuesday, September 24th, 2013

Beyond Tellerrand

A look beyond the edge of the plate. This presentation on digital preservation and long-term thinking was the opening keynote at the Beyond Tellerrand conference held in Düsseldorf in May 2013.

I’m going to do it in English if it’s all the same to you. I think it’d be better for everyone if I do it in English.

So here’s the thing with this conference, Beyond Tellerrand. I was here two years ago doing a workshop before the actual conference. The title of the conference, Beyond Tellerrand; it took me a long time to parse it. I thought, okay, well, Tellerrand is a place, like maybe it’s somewhere in Germany. Beyond Tellerrand. Or maybe it’s another planet like Mars or Venus or something. And the thing was, my wife pointed out no, it’s the German Teller Rand: edge of the plate. I was like, “Oh, okay!”

So what Marc has done is translated half of this German phrase and left the other half in German, right? The full phrase is über den Tellerrand hinaus schauen, right? That’s the phrase in German: looking out beyond the edge of the plate. Nice job confusing my brain there, Marc, by only translating half of the phrase.

But I thought, you know, what a great topic to talk about. Looking out beyond the edge of the plate: über den Tellerrand hinaus schauen. Because I think as web designers and web developers, we have a lot on our plate.

It seems like every year, every week, there’s more and more getting thrown on our plate, right? HTML5, CSS3, JavaScript, CoffeeScript, Yeoman, Grunt, Less, Sass, Git, Node, Backbone, Ember, Desktop, Tablet, Mobile, Glass, right? There is always something new to learn when it comes to web design and web development. And I kind of miss the old days when it was just HTML and knowing how to FTP to a server. That’s all you needed and you could make a website.
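To put that in perspective: a complete, working web page really could be just a handful of lines of HTML. Here’s a minimal sketch for illustration, not any particular historical page:

```html
<!-- A complete, valid web page: roughly all you needed in the old days.
     Write it in a text editor, FTP it to a server, and it's online. -->
<html>
  <head>
    <title>My first page</title>
  </head>
  <body>
    <h1>Hello, World Wide Web</h1>
    <p>No build tools, no frameworks, no compile step.</p>
  </body>
</html>
```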

One of the reasons I kind of miss that is that I actually think it was pretty powerful, because it meant that the barrier to entry to getting something online was very low. And I think we need to keep it low. I think it’s really important.

That’s not just my opinion…

There was this beautiful moment at the opening of the Olympics last year. Tim Berners-Lee had one tweet to show the world, and this is what he said: he said, “This is for everyone”. Talking about his gift to the world, right? The World Wide Web, that this is for everyone. And that’s why I feel like we should be aiming to keep the barrier to entry nice and low. And if things do get more complex and the barrier to entry creating a website gets higher and higher, it won’t be for everyone; it’ll just be for us professionals. And I think that’ll be kind of sad. I think that would be a loss.

I do take hope, however, from what I’ve seen recently. A friend of mine—a colleague of mine, Josh Emerson, who works at Clearleft—has been doing work at local schools. He’s been involved in this whole Code Club thing, which is really cool: teaching kids how to program with Scratch. And just a couple of weeks ago he was teaching kids how to make websites using HTML and CSS.

It’s awesome. I just absolutely love what they’ve done. I genuinely love this stuff. And these are, like, seven, eight year old kids. One made a page about how much he or she loves chinchillas. It’s like a chinchilla fan page. I love this.

Another one made a website about their pets. And you’re not seeing the animation on the background. It’s a tiled animated .gif. But I genuinely love this stuff.

Another one “…I’ll make a web page about sweets. I love sweets.” I love the way the tone of voice is kind of trying to be authoritative, almost like the Wikipedia of sweets:

Bubblegum is a sticky thing you chew and can blow bubbles with it.

I love this. I love this because it showed me that maybe the barrier to entry isn’t as high as I’d worried. Anybody, even kids, can make a website if they have a good teacher.

Because what this reminds me of—certainly the look and feel—is MySpace.

Garish colours and big, bloated pages with automatically playing music and animated gifs, right? Nasty, nasty stuff, but wow, how empowering. I mean, really democratic in a way, the fact that anybody could make a MySpace page …and anybody did make a MySpace page. And the results, sure, were pretty nasty some of the times, but I really love it. Genuinely, I really, really think that MySpace was fantastic.

I know for a fact that there are web designers and web developers working today because of MySpace. That was the entrance to it. I remember a few years ago, Ze Frank used to do The Show. Remember The Show with Ze Frank? No? Just me! He had this competition called “I knows me some ugly MySpace”, where he was getting people to nominate the ugliest MySpace page they could possibly find. And someone sent him an email saying, “You’re just being mean, you’re being cruel, you’re mocking people who have less taste than you.” And his response to this is just absolutely wonderful, and for me, nails the democratising power of technology and the web for creativity. And rather than paraphrase it, I’m going to just let Ze give it to you.

For a very long time, taste and artistic training have been things that only a small number of people have been able to develop. Only a few people could afford to participate in the production of many types of media. Raw materials like pigments were expensive. Same with tools like printing presses. Even as late as 1963 it cost Charles Peignot over six hundred thousand dollars to create and cut a single font family.

The small number of people who had access to these tools and resources created rules about what was good taste or bad taste. These designers started giving each other awards and the rules they followed became even more specific. All sorts of stuff about grids and sizes and colour combinations, lots of stuff that the consumers of this media never consciously noticed. Over the last twenty years, however, the cost of tools relating to the authorship of media has plummeted. For very little money, anyone can create and distribute things like newsletters or videos or badass tunes about ugly.

Suddenly, consumers are learning the language of these authorship tools. The fact that tons of people know names of fonts like Helvetica is weird, and when people start learning something new, they perceive the world around them differently. If you start learning how to play the guitar, suddenly the guitar stands out in all the music you listen to. For example, throughout most of the history of movies the audience didn’t really understand what a craft editing was. Now as more and more people have access to things like iMovie, they begin to understand the manipulative power of editing. Watching reality TV almost becomes like a game as you try to second guess how the editor is trying to manipulate you.

As people start learning and experimenting with these languages of authorship, they don’t necessarily follow the rules of good taste. This scares the shit out of designers.

In MySpace, millions of people have opted out of pre-made templates that work, in exchange for ugly. Ugly when compared to pre-existing notions of taste is a bummer, but ugly as a representation of mass experimentation and learning is pretty damn cool.

Regardless of what you might think, the actions you take to make your MySpace page ugly are pretty sophisticated. Over time, as consumer created media engulfs the other kind, it’s possible that completely new norms develop around the notions of talent and artistic ability.

That. Exactly that. And if the price to pay for anyone being able to publish on the web is that we get ugly pages, I’m willing to pay that price. I know what I would prefer.

Now if we’re going to talk about anybody being able to publish ugly web pages, then of course there’s one website I have to mention. GeoCities.

1994 I think it was founded. Okay, who had a GeoCities page? Wow! Fantastic! Yes! And here you are today at a web design conference, right?

At one time, GeoCities was the third most visited website on the web. The third most visited website. It truly embodied that ethos that Tim Berners-Lee was talking about. It really was for everyone. Sure, you know, tiled backgrounds, garish colours, automatically playing background music, Java Applets, all that stuff, but it really, I mean as just a cultural touchstone to show what the web was like in the 1990s, it’s a really remarkable piece of our culture.

Phil Gyford put it nicely, saying that:

GeoCities sites showed what normal, non-designer people will create if given the tools available around the turn of the millennium.

GeoCities was bought by Yahoo in 1999. On October 26th 2009 Yahoo shut down GeoCities. And when I say shut down, I don’t mean they mothballed it. They deleted everything.

They deleted everything that everybody had ever put on there. Sure: garish backgrounds and automatically playing music and all that ugly stuff. But this was stuff that people had grown up with, stuff that people had poured their hopes and dreams into. Destroyed. Not just mothballed. Destroyed. Completely destroyed.

I was really upset about this at the time, and frankly I’m still pretty fucking pissed off about it now. I was talking about it on Twitter last week and somebody said, that’s a long time to hold a grudge. I was like, yeah, that whole library of Alexandria thing; I’m still kind of miffed about that as well.

I remember people who were working at Yahoo at the time trying to explain to me why they had to destroy this. It was like, “Well, we don’t have permission from the users to keep it.” What, so you have permission to completely destroy it? Doesn’t make any sense!

Seven million websites. Seven million personal home pages gone. Tens, hundreds of thousands of outbound links from Wikipedia stopped working on one day.

Phil Gyford again, saying:

As companies like Yahoo! switch off swathes of our online universe, little fragments of our collective history disappear. They may be ugly and neglected fragments of our history, but they’re still what got us where we are today.

Ugly and neglected fragments. That sums it up nicely. But very much part of our history. I don’t just mean our history on the web. Our history as human beings.

And you can say, well they were just silly pages, the majority of them; not all of them contained useful information (although there was a lot of useful information on there). But the point is, you can’t know today what the value of something is going to be in the future.

My friend Bruce was helping someone. This was a widow; her husband had published all his poetry on GeoCities and only on GeoCities. And you might say, well, she should have backed it up then, right? She should have known better. But the point is, when all these services are asking you to publish, they never say, and don’t forget to make a back-up because we might get bought up and we might delete all your stuff. They neglect to mention that at the moment they’re asking you to publish. So all of her husband’s poetry was lost. Bruce was able to find a back-up, luckily.

It was thanks to this guy, my friend Jason Scott. Jason Scott founded the Archive Team. The Archive Team set about trying to save as much of GeoCities as they could.

Is anybody here with Archive Team? No. Okay, because I know there’s some popularity in Germany with Archive Team doing great work.

Jason had something to say about the GeoCities shutdown at the time. He said:

When history takes a look at the lives of Jerry Yang and David Filo, this is what it will probably say: two graduate students, intrigued by a growing wealth of material on the internet, built a huge fucking lobster trap, absorbed as much of human history and creativity as they could, and destroyed all of it.

Which is a pretty good description of what happened.

The mealy-mouthed justifications I still hear from people today: “Well, that’s business.” Just like I’m sure… “Well, that’s war.” You know, all’s fair.

I don’t buy it. I think we can do better. I think we can try. It takes effort, it definitely takes effort. That’s why we have people like Archive Team. Archive Team does not ask for permission. They have a watchlist of endangered sites, and frankly, let’s face it, MySpace probably doesn’t have that long to live, and that will also be a tragedy.

The Archive Team has been doing such good work, they ended up getting subsumed into Archive.org who are the Internet Archive. Jason Scott is certainly one of my heroes. Brewster Kahle, from the Internet Archive is another hero of mine. He’s like the Bruce Wayne of digital preservation, because he’s got money, he’s a rich guy, and instead of squandering it on stuff, he spends it on trying to preserve our online history.

One of the things that we say here all the time is bits in and bits out. And that is basically just an even shorter way of saying, universal access to all knowledge.

Well, do you go and put it into a cloud, which really means putting it into corporate hands? Somebody else might turn it off at any moment, like Yahoo! Video that’s already gone; Google Video that’s already gone; GeoCities that’s already gone. YouTube? Oh, it’s not going to last for everything. I don’t think so. Flickr? Not even. So how do you go and try to give things away in a professional way? Access drives preservation.

Access drives preservation. I firmly believe that, and as much as I love the Internet Archive and the amazing work they’re doing—essentially creating a back-up of the internet—it’s at a different URL, and when it comes to access, the very idea of URLs is really what drives it, I think. So it’s wonderful that we have saved a lot of GeoCities. We’ve saved a lot of stuff that did get switched off, but they’re at different URLs. And those links break; it kind of breaks the fabric of the web itself.

Taking care of stuff: it’s work. I’m not going to deny that; it takes work to keep stuff on the internet, but that’s no reason to give up. We should be working at this. But instead, it seems to me we’re more interested in chasing the fast and the shiny, and I don’t just mean in technology, but in terms of what our goals are. What we set out to do, we’re chasing our dream by founding a little start-up. That’s our dream.

Bruce Sterling was in Berlin last month giving a talk, and I like the way that he described a typical Silicon Valley start-up:

In a start-up world, you work hard and you move fast in order to make other people rich. You’re a small elite of very smart young people who are working very hard for an even smaller elite of mostly baby-boomer financiers.

I do love these services that make it easier for people to publish, don’t get me wrong: a lot of what these start-ups do is lower the barrier to entry, which I’m really happy about. I’m less happy with their blasé attitude to the data that they collect from people.

It tends to be a very one-way conversation when you sign up to a website. Here’s our Terms and Conditions. Do you agree, yes or no? You don’t get to have that conversation: but wait a minute; what’s going to happen to my data, what are your plans? Are you going to get bought up?

So if your start-up is successful, you launch, you’ve got money, you launch. What you’re aiming for basically is a money pot rather than making the web better, a lot of the time.

If you’re unsuccessful, you shut down, you delete all the data.

If you’re successful, you get bought up by a bigger company. Then you shut down and you delete all the data. It’s not a matter of if: it’s a matter of when.

I can give you some examples of just how happy these start-ups are when they get acquired and they shut down and take all the data.

We’re excited to announce that Wavii has teamed up with Google!

We’re extremely excited to announce that Summify has been acquired by Twitter!

At a time that represents new beginnings, we’re thrilled to share the exciting news with you, NabeWise has been acquired by Airbnb.

We are super-excited to announce that Jive has been acquired by Yahoo!

Today we are excited to share the news that Pinterest has acquired Punchfork.

Today we are excited to announce that Google has acquired Picnik.

So we’re excited to announce…

you get the picture, right? It’s always the same message. You can find all of these on a blog called ourincrediblejourney.tumblr.com. Because they always thank you. They thank you for joining them on their “incredible journey”.

We’re very happy that Six Apart wants to invest in growing the vision that we founders of Pownce believe so strongly in, and we’re very excited to take our vision to all of Six Apart’s products.

This one actually really hurt more than most! I was a big user of Pownce. And they destroyed it. They got bought by Six Apart. And Six Apart said, “No, no, it’s okay; we’re going to give you a free Vox account, so why don’t you just open up an account on our other service called Vox, and put all your content there?”

Vox got shut down, taking millions of URLs with it. Oh, and you know what they said when they were shutting down Vox? They said, no, that’s okay, why don’t you put all of your content on Posterous.

And last week:

Everyone, I’m elated to tell you Tumblr will be joining Yahoo!

..you can just imagine them sitting there with the Thesaurus going, “I can’t say excited. Elated! Perfect!”

Some people have had enough of this. We’ve been burned enough times. I do think it’s only a matter of when and not if you just get burned. It used to be that the more tech savvy people like us used to be the early adopters of all these services, and now we’re more like the conscientious objectors. We’re the ones being very wary.

For example, when Google launched Google Keep just one week after announcing they were shutting down Google Reader, strangely, people weren’t so excited. The usual tech community weren’t, “Oh, I will sign up for your product called Google Keep.” “Yahoo Preserve.”

We could do something about this. First and foremost, like I said, we could make it more of a two-way conversation.

There’s a great article in Contents magazine about this, about what services could do: some simple rules they could follow, the first one being the most important: treat our data like it matters. That really isn’t asking so much. It is, after all, our data. If I’m going to give you my hopes, my dreams, my poetry, is it asking that much that you have some plan for looking after it, beyond getting acquired by an exciting company?

No upload without download. If you close a system, support data rescue. Not asking that much. Even then, like I said, if the URLs disappear, that does hurt; even if we do manage to back the stuff up, it’s a shame. But it would be nice if some companies just put that kind of commitment in their terms and conditions.

Actually, the guys from Posterous have just launched a new service, and they do have a declaration that says: this is what we’re going to do with your data; we have a long-term plan for it. I think they were clearly a bit burned by what happened to their start-up, and not happy about it.

It takes work, I won’t deny it. It takes work. And right now you’d be in the minority if this is the area where you’re going to focus your attention, because it’s the unsexy side of start-ups; it’s the unsexy side of the web.

There are a bunch of freaks and geeks working on this stuff. What’s the alternative to giving all our hopes and dreams and poetry to these services? Well, we could host it ourselves. Go back to having our own websites. There’s these gatherings, like Indie Web Camp.

This was at Indie Web Camp in Portland two years ago. It’s still too geeky, right? The whole self-hosting thing; the barrier to entry is still too high to have your own website with the kind of tools that you get from these larger companies when it comes to publishing online. And frankly, the whole Indie Web thing right now, we probably come across like survivalists in Montana, holed up in a bunker with our tinned food and our rifles that we’re polishing, waiting for the data apocalypse.

But my point is, there’s an opportunity here. There are not many start-ups looking at this audience. There are not many start-ups looking at this opportunity. It’s like the Marty Neumeier thing, right? You shouldn’t be doing what everyone else is doing. When other people zig, you want to zag, right? And right now, every other start-up is sucking up users’ data so they can shut it all down when they get bought up by Google or Yahoo or Facebook or Twitter. Well, maybe if you’re going to start a start-up, maybe you should zag; maybe you should look at enabling people to publish at their own URLs, the whole Indie Web thing.

Right now, it’s just a bunch of us freaks and geeks, like I said. Some pretty smart people here, right? Tantek Çelik. See that guy over in the corner standing up? That’s Ward Cunningham. He invented the Wiki. Smart guy. There’s some smart people working on this.

I think the first step is actually really, really simple. The first step is just acknowledging that there’s a problem. The first step is questioning the next time somebody says this:

The internet never forgets.

Eskimos have fifty words for snow. Everyone in Columbus’s time thought the earth was flat. Yeah, yeah, yeah, that’s true …No. All bollocks, all of those statements, complete and utter bollocks. And once you start to look into it, you realise that some people just want to believe this. It’s said like a folk saying, almost: “Be careful what you put online, because the internet never forgets”, and people go, “oh, that’s true, I need to be careful.” Bollocks! I mean really, look at the data. How long does data tend to last online? Getting something to stay online for longer than a decade is hard work. A decade is not that long.

Okay, I should maybe calm down. I was supposed to be looking beyond the edge of the plate here, and instead I’m showing you the naked lunch.

My point here is that it’s good to have a knowledge of our history, of where we’ve come from, so we can avoid making the same mistakes over and over and over again.

As Josh Clark says, I don’t think you can be a futurist without being a historian. We tend to be very bad at that in our industry. Maybe it goes hand in hand with technology, that we’re always chasing the next thing. We are always looking forward, and that’s great, but it does pay to look back, to know where we’ve come from.

I say this over and over again, but in web development, I see problems cropping up that appear to be new problems, but actually seem really familiar to me. Like when Ajax hit the scene in 2005, 2006, there were all these issues with the back button and with bookmarking, and people were like, oh, this new problem is really hard, and I was like, hang on a minute: I remember frames. It was all the same issues. And now with responsive images, there’s all sorts of hard things we’re tackling. What if you load a really low-fidelity image and then swap it out for a higher-fidelity one? I remember lowsrc!

You’re about the only person that remembers lowsrc apart from me! Brilliant! I’m showing my age, yes, but my point is…OK, some other person remembers …okay, who remembers lowsrc? Wow! This is my audience! Fantastic!

Knowing history allows you to acknowledge when it repeats itself, you go, “this seems familiar; we’ve been here before, I remember this.” And of course, the reasons were much the same, right? The reasons why we’re trying to solve responsive images as well. The bandwidth is really bad on mobile connections, and in the nineties, the bandwidth was really bad, full stop. So: same problems, same solutions.
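For anyone who doesn’t remember it: lowsrc was a Netscape-era attribute for exactly this placeholder-then-swap idea, and the same shape of solution lives on today in srcset. A rough sketch for illustration (the filenames are made up):

```html
<!-- 1990s: Netscape's lowsrc loaded a small placeholder first,
     then swapped in the full-quality image over the slow connection -->
<img lowsrc="photo-small.gif" src="photo-full.gif" alt="A photo">

<!-- Today: srcset lets the browser choose a candidate suited to the
     screen and connection; same problem, same shape of solution -->
<img src="photo-small.jpg"
     srcset="photo-small.jpg 480w, photo-full.jpg 1200w"
     sizes="100vw"
     alt="A photo">
```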

Though I do like to look to the future as well. I’m an avid reader of science fiction. I’m a nerd; I’m a nerd. Science fiction at its best, I think, doesn’t necessarily ask the question, “what’s next?” What science fiction does is it asks the question, “what now?” What’s happening right now, and sort of projects forward.

I kind of think of science fiction as like programming. You have your parameters set up, you have your variables, and you run the simulation. You let it go and you see, taking the world as it is today, how could it appear in the future? I think that’s what a lot of good science fiction does. And it can be an inspiration to us. It can drive us forward, and we all know the stories of how science fiction has influenced actual technology. Flip-phones from Star Trek, all that kind of stuff, so it’s not just that I want to get obsessive about history; it’s more that I want to think about time: time on a longer scale than we’re used to. Instead of just looking at what’s on our plate; really looking beyond the edge of our plate, to a longer timescale.

There’s an organisation that does this. Specifically the Long Now Foundation. This is my membership card. Not made of card, it’s made of metal because hey, that’s a durable material, right? The idea is we’re thinking long-term here. If you get a platinum card from Long Now Foundation, it is literally a platinum card.

So the whole point here is to think in longer timescales. Like when they read out their dates, they use five digits instead of four for the years, so that they’re Y10K compliant!

This is probably their most famous project, the Clock of the Long Now. I urge you to check it out; it’s got fantastic design principles behind it. The idea here is to build a clock that tells time for ten thousand years. This is the first prototype model; this is in the Science Museum in London, and you can go along and check it out.

The point here is, if this was just a thought experiment, it would basically be an art piece. It wouldn’t really confront you with the hard problems …we have to actually build it. Genuinely build something that’s going to tell time for ten thousand years.

So this is Mount Washington in eastern Nevada, which is a relatively geologically stable spot on the North American continent (which is something you have to think about if you’re building for ten thousand years). They’re re-purposing a mine, cutting away the steps down there, and that will be the location of one of the clocks, although work is progressing faster at a site in West Texas, kindly donated by Jeff Bezos, so there are now two clock sites.

I really like this. I love the fact that they’re actually doing it, that they’re really going for it. The whole idea is to think longer. How would you tell time for ten thousand years? And I take issue with some of their decisions. I look on the website and the Wiki and stuff, and I think, I’m not sure that’s the best way, but I’m getting involved: I’m thinking about it. At least I’m starting to think in timescales longer than what we’re used to.

They have another project called Long Bets. So on Long Bets, you go to the website and you can make a prediction. You can say, in ten years, in twenty years, in thirty years, whatever, this will happen. And it remains as a prediction, until somebody challenges you. And once the prediction is challenged, now it becomes a bet. And just to keep the barrier to entry nice and high so that you don’t go making tens of predictions every day, it costs you money. You put your money down, and you nominate a charity. You say, if I win, the money goes to this charity, and if your prediction is challenged, the other person nominates a charity, so if they win, the money goes to their charity. I like this a lot and there are some really interesting bets on there. Some of them will remain untested for millennia, but some of them are due to run out in the next five, ten, twenty years. Really interesting. Particularly on the website. We don’t usually think in these timescale on a website.

So I’ve got a prediction that was challenged and turned into a bet, here’s the URL: longbets.org/601, and this is the text of my prediction/bet.

The original URL of this prediction will not exist in eleven years.

So I made the bet in February 2011, so this is the date that it runs out: 22 February 2022. I like the alliteration of the date; that’s kinda why I chose eleven years rather than ten. Matt Haughey from MetaFilter has challenged me. If I win, the money goes to Bletchley Park. If he wins, the money goes to the Computer History Museum in California.

So I basically thought, this is a win-win situation for me, because I want URLs to survive, so if I lose, the URL is still alive in ten years, that’s great, and if the URL disappears, which is bad, at least I get some money. Until somebody pointed out, but if the URL disappears, doesn’t that mean the organisation’s gone and you’ll never see the money? I s’pose, okay. Anyway, I look forward to this date. At least I’m thinking in terms of a decade now, and on that date, I hope to have a drink with Matt in this place, which is the Salon of The Long Now, which doesn’t exist yet; it’s currently being built in San Francisco, but I’d like to toast somebody’s victory there in 2022.

(Alcohol. Remarkably long-lived organisations behind alcohol. There’s a Wikipedia page of the longest-lived companies; a lot of them in Japan. Quite a few of them in Germany. Hotels and pubs and stuff, where they’ve survived for centuries, even over a thousand years. There may be something to that.)

You’re going to hear a lot of words of wisdom from Mandy today. I particularly like this one, that we should be measuring our work success in decades, not months or years.

It’s interesting: on the web, I’ve found that the timescales that have started to matter the most to me are at the extremes. I’m thinking about where our sites will be in ten years, twenty years, fifty years, right? That really matters. And the other part that I get really obsessive about is performance, how quickly something loads, and there we’re talking seconds and milliseconds. Those two extreme timescales are the ones starting to matter more and more to me, and I’m less interested in launching next month, or what’s the newest framework this week, or what’s the hot trend this year.

So, it kinda does take time and it takes work. As I said, the biggest issue is confronting that there is a problem when somebody says, “the internet never forgets.” Call bullshit on that.

And taking care of our URLs. It shouldn’t be such hard work, but for some reason it is. For some reason, keeping a URL alive seems to be hard work. I guess it makes sense when you think about domain names, because you might think you own a domain name, but you don’t really. You rent a domain name, and when you rent a domain name, it’s usually in one year, two year, maybe five year increments. It’s not actually that long.

So Tim Berners-Lee said, “cool URIs don’t change” (URIs, URLs: much the same thing). What I like is he’s got the asterisk at the end of the sentence to qualify it, to say:

historical note: at the end of the twentieth century when this was written, cool was an epithet of approval, particularly amongst young, indicating trendiness, quality or appropriateness.

I like that. It’s like, long-term thinking from Tim Berners-Lee, the same guy who said, “This is for everyone.” The web has a certain spirit; it always has, from the start. And this is part of it: that the web is for everyone, that the barrier to entry should be so low that everyone is able to publish.

There’s another aspect too, and that’s around not needing to ask for permission. This wasn’t so obvious at the birth of the web, that I should be able to just link to a webpage, and not have to ask permission from the page I’m linking to whether I can do that or not. There were even court cases back in the nineties to figure this stuff out. Is that okay? Can we do that? Yeah, we can. We’re used to it now; we’re blasé about it, but actually this is really, really powerful. I can link to anyone. I don’t have to ask permission.

Steve Jobs said, you don’t need anyone’s permission to be awesome …which is ironic, because on the App Store, you need someone’s permission!

And don’t get me wrong, I see some great work being done there. I’ve seen a lot of people who used to do fantastic work in the world of Flash, really experimental, maybe more art-based stuff, moving to iOS and making fantastic work there, and I understand it actually; I understand why the people who were into Flash are now really into the native apps on iOS, maybe Android. Because when they were making that Flash stuff, it was the Flash stuff that mattered: it wasn’t the web that mattered. The web was just a delivery mechanism. Not judging, I’m just stating that’s the way it was. So the work they were doing, which was amazing work, it was on the web, but it wasn’t really of the web. So when a new delivery mechanism comes along—like an App Store—they can very easily shift to that, say, oh that looks cool, I’m going to try that. I understand it.

Whereas I …I’m much more cautious and …I don’t like how this feels; I don’t feel like it’s part of this larger whole. What it feels like actually is more like when the web first came along, we had the option of producing CD-Roms as well, and a lot of people did. Interactive CD-Roms.

Who remembers Encarta? Right, Encarta. What would you rather work on? Encarta or Wikipedia, okay? I kinda think that making a lot of the native apps in App Stores is like making CD-Roms. You can do a lot more in some cases with them. They can be more “rich”, but they’re isolated; they’re just islands, pockets that can’t interlink with one another; they’re cut off from one another.

And I know I’m going to come across like I’m an old fuddy-duddy, I’m one of these old web guys and I’m scared by the new App Stores, but that’s the way things work now and I should just get with the programme and I’m dragging my heels, but actually, no! My point is that the App Stores are a step backwards. The App Stores are trying to bring us back to a world before the web.

This is what the world before the web was like. We had limited shelf space; it’s a world of bits, right? Books and movies and stuff. You couldn’t buy them all, you couldn’t choose from all of them because there’s a limited amount of shelf space. Somebody had to decide what goes on the shelves, so we had taste-makers, we had companies who decided what you’re going to read, what you’re going to watch, what you’re going to listen to, right? Taste-makers. It was a world where there was the publishers and there was the consumers. Separate entities. And the web comes along and it really threatens that. It threatens those kind of companies; it threatens totalitarian regimes. All of these kind of industries that are based on dictating something to you rather than allowing choice.

So, when I say that things like the App Store feel like they’re isolated and small and they’re returning to this world of limited shelf space, it’s because I feel like these people are trying to put the genie back in the bottle; the web scared the shit out of them, and they want to return to this kind of world.

You think about the industries …the music industry and the film industry and newspapers and magazines. These are the people who are creaming their pants about iPad apps, right, because they see an opportunity to return to a world of producers and consumers, and try and put that genie back in the bottle which is the web which has just flattened all of that, allowed anyone to become a publisher. This is for everyone. They’re quite rightly threatened by that. And you can understand it; it’s their business model.

So for me, the web just feels like it aligns more closely with my own personal philosophy, and I don’t want to judge anyone who uses different platforms; it just means you have a different philosophy to me. The web tends to be diverse and open, and that means messier. Let’s face it: that means horrible websites with nasty colours and bloated images and all that stuff, but if that’s the price we pay, I’m very happy to pay it.

I’m not comfortable with a system that has rules about who can publish what, a system that rejects an app like this: this is a drones app. It doesn’t have any graphically explicit material; it doesn’t show the results of drone strikes. All it is, is just news reports and a map of where and when the latest drone strikes are taking place. But this was rejected from the App Store.

The App Store has rules, and even if you’re not building something political—you’re building a fart app or whatever—the fact that you’re using an eco-system that has rules that explicitly say:

We view apps different than books or songs. If you want to criticise a religion, write a book. If you want to describe sex, write a book or a song.

Those are the terms and conditions for the App Store. Whereas on the web: go crazy. Do what you want. You do not need to ask anyone’s permission to publish on the web.

There’s another part of the spirit of the web that I think should infuse what we do, if we want to keep it alive. And that’s the fact that Tim Berners-Lee just gave it away. More than the technology that he gave away, I think was this act that really mattered most: that he didn’t try and patent the web, he didn’t try and patent any of that technology. He gave it to the world for free. This is for everyone.

That’s the kind of spirit that imbues a lot of our technology. It would be a very, very different world if the rules of the App Store were to apply to the fundamental technology we use every single day, but it makes sense when you consider where the web comes from, right? It comes from a place that’s working on huge problems.

Talk about long-term thinking: the fundamental issues of science. Okay, let’s take a crack at it. Step one, we need to build a giant underground ring underneath Geneva and start smashing particles together. Amazing, fantastic!

The web was like a by-product. The scientists at CERN were like, “oh yeah, the web thing, that was kinda cool. But this …this is the real issue.” The web’s like a by-product. But the web is infused with the spirit of the kind of collaboration that happens at CERN.

The web was built to enable collaboration between scientists. There’s this myth that you often hear the web defined as, “Oh, it was to enable the sharing of documents between scientists.” No. Not true. See what Tim Berners-Lee originally said. It was about enabling collaboration. It happened to be that sharing documents was maybe the first and easiest way to do that, but the goal was to enable collaboration, and some of my favourite websites today still do that. Enabling collaboration. Furthering science.

Like the guys from Zooniverse build these fantastic sites. Have you seen Galaxy Zoo? This allows you to classify galaxies. You’re shown a picture of a galaxy and you just answer some questions. Does it look spiral? Does it look globular? This kind of stuff. Stuff that’s actually very easy for a human to do, but very, very hard for a computer: looking at an image and parsing it. And you might think, well you know, that’s just crowd-sourcing. There are all sorts of start-ups doing the crowd-sourcing thing. No, no, this is science. You’re contributing to science, and if one of your findings—like a galaxy that you classified on Galaxy Zoo—gets used in a paper, you are a co-author of that paper. You are a scientist, collaborating on the web.

They do fantastic work. There’s a whole bunch of things now like classifying craters on the moon, solar flares. One of my favourites was Old Weather.

This was taking ships’ log books from the early twentieth century, written in hand-written scrawl, with observations of the day’s weather. And you’ve got to try and decipher it because you’re looking at what a human being has written: very hard for a computer to parse; not that difficult for us to do. And you put that data into a machine-readable form. The point here being that climatologists who want to build data models of the climate over the twentieth century now have data going back to before we had electronic instruments, before we had the kind of instruments we use today.

Really fantastic example of collaborating on the web. Makes total sense. This is what the web is all about, right? Collaborating.

I was very, very lucky to get to go to CERN last year and go to the room, see the place where the web was born. The guy who works there now, he’s getting really pissed off with all the people coming by: “I’m trying to work here!”

And I expected I would be really impressed by being at the place where the web was born (less impressed with the typography, but you know). But what really got to me was understanding how things got done there.

This guy, Christoph Rembser—he’s from Bonn—was explaining that it’s like one giant hack day. It’s basically like someone goes, “I’ve got an idea for an experiment. Who wants to help me?” Here’s the idea, and someone goes, “yeah, I can help you with that.” And it doesn’t matter whether you’re a student or a Nobel prize winning physicist. These teams from all over the world come together to try out this experiment. Absolutely fantastic.

And Christoph was telling me, when he first came to CERN from Bonn—which was still the capital at the time because it was 1989—you know, it was his first chance to interact with people from all over the world. People from East Germany. People from China. That summer, he’s doing experiments with Chinese students while tanks are rolling back in Tiananmen Square. From Russia, from all over. And they collaborated together, and then they went back to their countries, maybe with some of that spirit.

Then Tim Berners-Lee creates the World Wide Web that year, maybe with some of that spirit. I started to understand more where the web was coming from, that spirit of collaboration.

This is the original proposal, this is “Information Management: A Proposal” by Tim Berners-Lee. You can see the scrawl at the top from his boss, who’s written “Vague, but exciting!”

And I mentioned using science fiction to think about the present and to think about the future. This author, Arthur C Clarke: I know that he had an influence on Tim Berners-Lee. He wrote a short story called Dial F for Frankenstein, in which all the telephones in the world get hooked up at the same time, essentially creating this enormous worldwide network. That influenced Tim Berners-Lee.

There’s another book by Arthur C Clarke—The Fountains of Paradise—that very much influences my long-term thinking. This is a book that looks at the building of a space elevator.

It would have to happen on the equator. The technology: we’re just about there now, with carbon nano-tubes, we could do it. It would be an enormous undertaking. It would require CERN-level co-operation and Apollo-level funding, but we could do it: we could build a space elevator. Get us off this planet. Spread further out into the solar system.

Talk about backing up our data. Right now, all of us, all our culture, we’re all stored on one planet. We need off-site back-up. A space elevator can deliver it. The dinosaurs died out because they didn’t have a space programme.

So every now and then I just like to think about that, what I’m working towards when I’m building websites, when I’m contributing something to the larger whole, thinking that, well maybe this will contribute to more collaboration, better understanding. We’re sharing knowledge, we’re preserving knowledge. Get out there, start mining asteroids, start building colonies in the solar system, start building generation starships, start spreading out to the Galaxy.

So, over the next couple of days, you’re going to hear a lot about technologies, about your day to day work, and you’re going to have a lot to absorb, but at the same time, I’d like you to think beyond the edge of the plate, and try and think in longer timescales as well. Thank you.


This presentation is licenced under a Creative Commons attribution licence. You are free to:

Copy, distribute and transmit this presentation.
Adapt the presentation.

Under the following conditions:

You must attribute the presentation to Jeremy Keith.

Monday, July 15th, 2013


The weather is glorious right now here in Brighton. As much as I get wanderlust, I’m more than happy to have been here for most of June and for this lovely July thus far.

Prior to the J months, I made a few European sojourns.

Mid-May was Mobilism time in Amsterdam, although it may turn out that this was the final year. That would be a real shame: it’s a great conference, and this year’s was no exception.

As usual, I had a lot of fun moderating a panel. This time it was a general “hot topics” panel featuring Remy, Jake, Wilto, and Dan. Smart, opinionated people: just what I want.

Two weeks after Mobilism, I was back on the continent for Beyond Tellerrand in Düsseldorf. I opened up the show with a new talk. It was quite ranty, but I was pleased with how it turned out, and the audience were very receptive. I’ll see about getting the video transcribed so I can publish the full text here.

Alas, I had to miss the second day of the conference so I could head down to Porto for this year’s ESAD web talks, where I reprised the talk I had just debuted in Germany. It was my first time in Portugal and I really liked Porto: there’s a lot to explore and discover there.

Two weeks after that, I gave that same talk one last spin at FFWD.pro in Zagreb. I had never been to Croatia before and Jessica and I wanted to make the most of it, so we tagged on a trip to Dubrovnik. That was quite wonderful. It’s filled with tourists these days, but with good reason: it’s a beautiful medieval place.

With that, my little European getaways came to an end (for now). The only other conference I attended was Brighton’s own Ampersand, which was particularly fun this year. The Clearleft conferences just keep getting better and better.

In fact, this year’s Ampersand might have been the best yet. And this year’s UX London was definitely the best yet. I’d love to say that this year’s dConstruct will be the best yet, but given that last year’s was without doubt the best conference I’ve ever been to, that’s going to be quite a tall order.

Still, with this line-up, I reckon it’s going to be pretty darn great …and it will certainly be good fun. So if you haven’t yet done so, grab a ticket now and I’ll see you here in Brighton in September.

Here’s hoping the weather stays good.

Friday, June 14th, 2013

Jeremy Keith – Beyond Tellerrand – beyond tellerrand 2013 on Vimeo

I gave the opening keynote at the Beyond Tellerrand conference a few weeks back. I talked about the web from my own perspective, so expect excitement and anger in equal measure.

This was a new talk but it went down well, and I’m quite happy with it.

Jeremy Keith – Beyond Tellerrand – beyond tellerrand 2013

Monday, November 26th, 2012

beyond tellerrand 2012 session videos | Indiegogo

Marc Thiele, the lovely organiser of the Beyond Tellerand conference, needs our help recovering the video footage from this year’s event:

The HDD with all recordings (16 talks, 2 cameras) crashed. After sending the HDD to a recovery center they sent me a quote about 2832 Euro for the recovery job.

That’s about $4000. So far it’s three quarters of the way there already! Let’s see if we can hit that target.