Tags: story

Wednesday, January 22nd, 2020

Building

The opening presentation from the New Adventures conference held in Nottingham in January 2019.

Good morning, everybody. It is a real honour to be here. As Simon said, I was here six, seven, eight years ago attending this conference because it’s such a great conference. I’m kind of feeling the pressure now that I’m up here on the stage speaking at this conference. I’m just glad I’m on first so I can get it over with and then listen to all these great talks.

I’m here today to talk to you …which is kind of weird when you think about it. I mean, first, the fact that it’s me up here on the stage through some clerical error.

But also, I’m going to talk to you. I’m going to vibrate air over my vocal cords and move this big meaty piece of flesh inside my jaw up and down vibrating the airwaves and you’re going to listen to me doing that. It seems like a crazy thing to do except for the fact that, of course, I’ll be using language.

Language

Maybe the great distinguishing feature of our species, language. The great leap forward that happened—who knows—50,000, 100,000 years ago when we, as a species, developed language. With language, by moving those vocal cords and that big piece of flesh in my jaw, we can tell stories. I can recount something that happened in the past.

Perhaps more amazingly, we can imagine things that might come to be. I could tell you something that might happen in the future. So language is a kind of time travel.

It’s all possible because we’re speaking the same codebase. The particular language I’m talking now is English. As long as you can decode English then all these noises I’m making will make sense to you even if there isn’t actually any information in the words. I can say Chomsky’s famous one.

Colourless green ideas sleep furiously.

You can parse that. It doesn’t make any sense, but you can parse it.

Most of the time, the sentences we use also convey some kind of information. Language is not just time travel. Language is also communication.

There can be an idea that’s sitting in my head and I’ll, you know, vibrate the air and vocal cords, flap this big fleshy thing in my jaw around, and transfer the idea from my head to your head. Language is almost like a virus. You can’t help but take the idea in.

I can say to you, “Don’t think of an elephant,” right? Now you’ve just thought of an elephant. It’s the language equivalent of the chicken game which, if you haven’t played before, sorry. You’ve just lost.

Chicken game. Don’t look at this chicken. Game over.

This sentence, “Don’t think of an elephant,” is actually the title of a book by George Lakoff. George Lakoff is a linguist. He’s written many books. He wrote Women, Fire, and Dangerous Things. He wrote this, Metaphors We Live By, because he’s kind of obsessed with metaphors.

We use metaphor all the time in language. We use conceptual metaphors, where we take one idea and use the language of that idea to talk about a different idea. The classic example is something intangible.

Let’s say time. How do we talk about time when we can’t touch it, we can’t feel it, it’s intangible? Well, we use metaphor.

We talk about time as though it’s a physical object moving through space. We say time flies or time drags or we talk about time as though it’s a resource. We talk about saving time, wasting time.

You can’t do any of those things with time. That’s not how time works. But the metaphor is very helpful.

The other kind of metaphor is the cognitive metaphor. This is what George Lakoff is interested in, particularly in things like political language. How we frame a debate can tip the scales of how that debate would unfold. If we were about to have a debate about tax relief, well, before the debate has even begun, we’ve framed taxation as something you need relief from and the scales have been tipped.

I’m very interested in this idea of metaphor, analogy, and simile and how we talk about the work we do. It’s such a young industry. What we do is we borrow from other industries. We’re not the first to do this. There’s a great book called Understanding Comics by Scott McCloud. Who’s read Understanding Comics? It’s great.

It’s about comics but, really, it’s just a fantastic book. It’s written as a comic. In it, Scott McCloud makes the point that this new medium, comics, had to kind of borrow from the existing mediums that came before. He points out that this isn’t new. He says:

Each new medium begins its life by imitating its predecessors. Many early movies were like filmed stage plays. Much early television was like radio with pictures.

Right? That it takes time.

Now, this idea of a new medium having to borrow the tropes and the language of the medium that came before, this idea pops up again on the web in this article published in the year 2000 by John Allsopp on A List Apart, A Dao of Web Design. Can I get a show of hands of who’s read A Dao of Web Design? Awesome. You are my people. The rest of you, please read it. It’s such a wonderful article.

It’s crazy that I’m standing up here recommending, “Oh, yeah, you should totally read this article from the year 2000,” but it is relevant. It’s amazingly relevant still today. It’s maybe more relevant today than when it was written. 
In the article, John says:

When a new medium borrows from an existing one, some of what it borrows makes sense, but much of the borrowing is thoughtless, it’s ritual, and it often constrains the new medium. Over time, the new medium develops its own conventions, throwing off existing conventions that don’t make sense.

Now, at the time John was writing this, in 2000, we were, of course, borrowing from what had come before in the previous medium, and that was print. We were trying to figure out how to get the same level of control on the web that we were used to in the world of print. We did that using clever techniques thanks to David Siegel, who wrote the book Creating Killer Web Sites. David Siegel, if you don’t know the name, you’re certainly familiar with his work because he’s the guy who came up with the idea of using tables for layout and the one-pixel by one-pixel spacer GIF.

Hey, listen. That was the only way we could do it back then. They were hacks, yes, but they were necessary hacks. He did actually recant. Years later, he wrote a piece that said, the web is ruined and I ruined it. This may be overstating the case, but you know.

He was pointing out that we could use these techniques, these hacks, to constrain the web and make it work like print. We could get pixel-perfect control. John Allsopp, in his article, is kind of pushing against that, going, no, no, no:

The web is a new medium. It has emerged from the medium of printing whose skills and design language and convention strongly influence it. It is too often shaped by that from which it sprang. Killer websites are usually those which tame the wildness of the web, constraining pages as if they were made of paper. Desktop publishing for the web.

So, I mean, John totally acknowledges that there is a lot to learn from this rich, rich history of print and, before print, just writing. This is clearly the second great leap of our species. We had language, where we could communicate ideas, tell stories, imagine the future—as long as we were in the same physical space—and then we came up with writing. Now we can communicate ideas, talk about the future and the past, and we don’t even have to be in the same physical place. Someone who died centuries ago can put an idea in your head by putting language onto a medium like vellum or, later, paper.

You can see this evolution over centuries from illuminated manuscripts to the printing press, Gutenberg, until we get to the 20th Century and we really start to refine the design. We got the Swiss School of Design, the fonts, typography, and the grid system. There’s a lot to learn here.

The Book of Kells. Gutenberg’s bible. Grid Systems.

What’s interesting to me, though, is what seems to be this battle of extremes. We’ve got David Siegel talking about desktop publishing for the web, effectively, and John Allsopp talking about, “No, the web is its own medium. It needs to have its own conventions.”

They seem to be at opposite ends of a spectrum. Yet, they actually have a commonality because, on both sides, when they’re talking about this, they’re talking about websites — web sites. Now, that in itself is a metaphor. You don’t have physical sites on the web. It’s intangible, like time. Yet we chose this metaphor: the idea of a site, a place you go to, a physical place.

Site actually is a pretty good metaphor, with connotations of a building site, a construction site. That was literally the metaphor in the ’90s: the web is like a construction site. It kind of is constantly under construction. Oh, you want the full nostalgic effect?

Under construction.

There we go. We’re back to Geocities. But I feel like then we decided to grow out of this metaphor and use more grownup metaphors. We got professional. We had to borrow from other industries, other mediums, and here’s one that people are very fond of borrowing: architecture—describing what we do as architecture.

Architects

Whether it’s on the design side or the development side, talking about us as architects. It seems like a very appealing industry to borrow from, which is fascinating. If you ever talk to architects, man, it’s a shitty industry. Spec work, awards, and competition, it’s not a great industry.

But we seem to hold it up as, like, “Oh, yeah, we’re like architects because architects are awesome.” I think of Hollywood because every Hollywood movie that has an architect in it, the architects are always really nice people. They’re always like the protagonist, never the antagonist. The architect is never the villain.

It’s fair enough. It’s fair enough to borrow things from something like architecture. For example, I know plenty of designers who would say that this book is the best book about UX that they’ve ever read, 101 Things I Learned in Architecture School by Matthew Frederick. It was published in 2007. It’s not written for UX designers. It’s not written about the web, but there are lessons in there that are directly applicable.

There are other works from the world of architecture that have definitely influenced the work we are doing today like the classic from Christopher Alexander, A Pattern Language. Now this—I say classic rightly—this is a classic book. A classic book is a book everyone has heard of and nobody has read.

That is certainly the case here. Published in 1977, and it influenced lots of people doing things in the digital space. Ward Cunningham, the inventor of the wiki, he said, yeah, he was really influenced by A Pattern Language.

The idea of a pattern language: it’s architecture, but breaking things down into components that you could reuse, changing the parameters, in public spaces, buildings, things like that. It’s a modular approach. Later on, in the software world, the Gang of Four wrote Design Patterns: Elements of Reusable Object-Oriented Software, and they were directly influenced by Christopher Alexander, this idea of a pattern language, components, patterns, modularity.

What’s interesting is there’s another book, by Molly Wright Steenson, whom you may remember as the blogger Girl Wonder. She worked in the world of architecture and she’s written a book about the influence of architects and designers on the digital space: Richard Saul Wurman and information architecture, for example. There’s a very direct metaphor there, but also Christopher Alexander.

She points out, actually, the funny thing is, he’s had way more of an influence in the digital space than he ever had in architecture. Most architects don’t like him. They think he’s a bit preachy. But his influence in the digital space is massive. Here I am talking about modularity, components, and patterns. Well, I mean, that is so hot right now. Design systems, we’re breaking things down into patterns. 
In fact, I ended up organizing a conference in 2017 purely about design systems, pattern libraries, style guides, all this stuff, called Patterns Day. It was great. We had these wonderful speakers. Jina Anne was there, Rachel Andrew, Alla Kholmatova, Alice Bartlett. It was great.

But, by the end of the day, I was kind of half-joking, saying we should have had a drinking game where, every time someone referenced Christopher Alexander, we had to take a drink, because his spirit loomed large over the day. Actually, the full rules of the drinking game I came up with afterwards were: any time someone references Christopher Alexander, you take a drink. Any time someone says Lego, you take a drink. Any time someone says that naming things is hard, take a drink. Any time someone says atomic or atomic design, take a drink. Any time someone says Bootstrap, you puke the drink back up.

A Pattern Language is a work from the world of architecture that not only directly influenced but is still influencing our work today: the idea of breaking things down into components to reuse.

Now, there’s another work from the world of architecture that has a big influence on me. It’s a classic book, again, How Buildings Learn. It’s the best book I’ve never read, published in 1994, by Stewart Brand. There was also a TV series that went with this that’s pretty fascinating.

In this, he talks about the work of a British architect named Frank Duffy and Duffy’s idea of something he called shearing layers. What Duffy said was that a building properly conceived is several layers of longevity. He kind of broke these down. You’ve got the site that the building is on. We’re talking about geological time scales.

Then, above that, the structure, which you hope will last for centuries. Then you’ve got the infrastructure inside the building, which you might have to swap out every few decades: change the plumbing, say. Then you’ve got the walls and the doors, which you can change every so often. Then, once you get into the room, you’ve got the furniture, which you can move around on a daily basis.

The time scales get faster as you move inward. He diagrammed it like this. This is shearing layers diagrammed for the building. I find this really interesting, this idea of different time scales.

Shearing layers.

But there’s another factor here I’m kind of fascinated by, which is that each layer depends on the layer below. You can’t have a structure until you’ve got a site to build on. You can’t have furniture inside a room until you’ve got the room. You need to have the walls there. Each layer is building on top of what’s come before. You can’t jump straight ahead to furniture without first having all those other layers.

Now, this reminds me of another idea that the writer Steven Johnson talks about a lot in his work, for example, this book, Where Good Ideas Come From. This is the idea of the adjacent possible, that certain inventions leap forward that can’t happen until other things have happened before them.

There’s a reason why the microwave oven wasn’t invented in medieval France. Too many other things had to be invented first before something like the microwave oven becomes inevitable.

Everything we do is kind of built on this idea of the adjacent possible because businesses and services on the web are on top of a whole bunch of layers of adjacent possibilities. You can’t have Twitter, Facebook, or Wikipedia until the web exists. The web itself is built on all of these layers that have to happen first.

We have to have the Industrial Revolution. We have to have electricity. Then somebody has to create circuitry. We have to get to the idea of having computers and then networked computers, something like the Internet. Then the web becomes possible. Once the web is possible, then all these businesses on top of the web become possible.

This idea of the adjacent possible, the shearing layers, they kind of fascinate me because I’m seeing a parallel there.

Now, Stewart Brand, who wrote about shearing layers and architecture, revisited this idea and took it out of the world of architecture in a later work called The Clock of the Long Now. Stewart Brand is one of the founders of the Long Now Foundation. If you haven’t heard of it, it’s an organization dedicated to long-term thinking. I’m a card-carrying member. The card is designed to last for a few thousand years as well.

They’re currently building a clock that will tell time for 10,000 years. Brian Eno has written an algorithm for the chimes so that when it chimes once a century, it will never be quite the same chime. It’s encouraging long now thinking.

In this book, the full title of the book being The Clock of the Long Now: Time and Responsibility: The Ideas Behind the World’s Slowest Computer, he extrapolates shearing layers into something he calls pace layers. If you take the shearing layers model and look around you, it’s everywhere. It’s kind of like systems thinking, the Donella Meadows idea that systems are everywhere.

Pace layers.

It’s kind of true. You look around and these pace layers, shearing layers applied to the real world, are everywhere. The example he gives is our species. If we look at the human race, we have these different time scales. The slowest is our physical nature, as in our DNA, our physiological nature. That takes millennia to change. Physiologically, there’s no difference between a caveman and a spaceman.

Above that, you’ve got culture. This takes centuries, maybe longer, to accumulate over time.

Then systems of governance; not governments — governance. How are we going to run our societies?

Then infrastructure: you want that to move faster, but not too fast or it could be very disruptive.
Then you get into commerce, trading. Very fast-moving.

Then, finally, you’ve got fashion, which is super-fast. By fashion, he means things like popular music, anything that’s supposed to move fast. If fashion moved slowly, that wouldn’t be a good thing. It’s meant to move fast. It’s meant to try things out. “What about this? No, what about this? Try this.” Right? You don’t want that for the things further down.

He’s mapped this onto these layers. From shearing layers, we go to pace layers. They have different timescales.

I’m talking about the difference between these really fast layers at the top, you know, “What about this? Try this? Today, we’re doing that,” compared to the really slow layers at the bottom that move slowly and are resistant to change.

He says:

Fast learns, slow remembers. Fast proposes, slow disposes. Fast is discontinuous, slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution. Slow and big controls small and fast by constraint and constancy. Fast gets all our attention; slow has all the power.

Now, once I was exposed to this idea and this virus had landed in my head, I found that I couldn’t get it out of my head. I started seeing pace layers everywhere. At Clearleft, where I work, it’s a running joke. On every project, we have a kickoff, and it’s like, what’s the time to pace layers? How long will it be before someone makes a pace layer analogy? It’s like my brain has now been rewired to see pace layers everywhere.

It’s like, you know, the first time that someone points out the arrow in the FedEx logo. There was your life before that and there’s your life after that.

You’ve all seen the arrow in the FedEx logo. Yeah.

What about Toblerone? You’ve all seen the bear? Ah, yeah! Right? You will never be able to unsee that.

Consider the duck.

It’s a perfectly normal, ordinary duck. Agreed? But then your brain is exposed to the idea that all ducks are actually wearing dog masks.

All ducks are actually wearing dog masks. Now, when I show you the same picture of the same duck—

—you will never be able to unsee that. That’s how my brain feels when it comes to pace layers. I see them everywhere. It’s like the crazy wall part of the serial killer’s lair in the murder mystery. It’s just pace layers.

I couldn’t help but apply pace layers to the work we do, mapping our medium onto them. Let’s try it with the World Wide Web.

The layers of the web.

Well, we build on top of the Internet. We can’t have the web before having Internet. At the very bottom layer, you’ve got the protocols of the Internet itself, you know, TCP/IP, which have been pretty much unchanged for decades. They were there from the ARPANET before the Internet. It’s a good thing that they’re unchanged. You would not want to be swapping out that low layer very quickly.

Above that, we have all the different protocols we use: protocols for email, protocols for file transfer, and the protocol for the World Wide Web, HTTP, the hypertext transfer protocol. Now, this has evolved over time. We now have HTTP/2, but it’s been a slow process and that feels right. Again, we shouldn’t be swapping it out too quickly, but it’s a bit faster-moving than the Internet protocols.
On top of HTTP, we can put our URLs. Now, I would love it if URLs were right down at the bottom layer and they were permanent and they never changed and they never went away. That is the web I want, but I must acknowledge that, alas, you have to work hard to keep URLs alive. They do change. They do move. They do get destroyed, which is a bit of a shame, but we can work at it, people. We can work on keeping our URLs alive.

What we put at those URLs, at the simplest level, is HTML. It was there from the start. From day one of the web, HTML was there and it’s still there today, but it’s evolved. It’s changed over time. Initially, HTML had 21 elements and now it’s got 121 elements, so it’s evolved.

But it feels like you can keep up with the pace of change. The last big evolution of HTML was around 2010, with HTML5. We do get new editions every now and then, but it’s fine. We can keep up with it.

Then CSS. CSS changes maybe more — definitely more rapidly than HTML. That feels like a good thing. We kind of want more. Give us some more CSS, and now we’ve got Grid and we’ve got Flexbox. We’ve got all these great new CSS things. Custom properties.

I don’t feel too overwhelmed by that. I still feel like, “Oh, no, this is good. We’ve got new CSS. I’m feeling I can keep on top of this, you know, read the right articles, read the right books, try them out. It’s fine.”
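To give a flavour of that newer CSS, here is a minimal sketch (with made-up class names, not from any particular project): a custom property feeding a Grid layout, with Flexbox aligning the items inside.

<style>
  :root {
    --gap: 1rem; /* a custom property, defined once and reusable anywhere */
  }
  .gallery {
    display: grid; /* Grid handles the two-dimensional layout */
    grid-template-columns: repeat(auto-fill, minmax(10rem, 1fr));
    gap: var(--gap);
  }
  .gallery > * {
    display: flex; /* Flexbox handles one-dimensional alignment inside each item */
    align-items: center;
  }
</style>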

Then there’s the JavaScript ecosystem.

Specifically, the ecosystem, not the language, because the JavaScript language itself doesn’t actually change that often. ES6, or ES2015, or whatever we’re calling the evolutions of the language: they’re not so rapid that we’d get overwhelmed. But the language ecosystem, the culture of JavaScript, that feels overwhelming to me. Right? Since I’ve been speaking up here, two new JavaScript frameworks have been released.

The pace, I constantly feel like I’m falling behind like, “Oh, I haven’t even heard of this new thing that apparently everybody is using.”

Does anyone else feel overwhelmed by this pace of change? Okay, good. Keep your hands up for a sec and just look around. All right? You are not alone. This turns out to be normal.

But here’s the thing. By mapping these different rates onto this model of pace layers, I actually start to feel better about this because let’s say the JavaScript ecosystem is fashion: “It’s going to do this. No, no, today we’re doing that. Try this. Try that.”

Whereas, “Oh, okay. It’s supposed to move fast. It would be bad if it moved slow. It’s meant to be trying stuff out. We see what sticks.”

With fashion, the best of pop music will probably last and find its way down the layers into culture, a slower pace layer. With JavaScript, the patterns that work may find their way down into the slower-moving layers.

To give you an example, when JavaScript was first invented—I’m showing my age here—I remember the common use cases were image rollovers and form validation. Mousing over something and changing how it looks: we’d use JavaScript for that. If someone was filling in a form and there was a required field, we’d use JavaScript to make sure that required field was filled in.

These days, we wouldn’t even use JavaScript for either of those. We’d use CSS to do rollovers. We’d use HTML to add a single required attribute. The pattern stuck. The spaghetti stuck to the wall and it moved down the layers into something more stable.
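Here’s a minimal sketch of those two patterns as they look today, with hypothetical class and field names, and not a line of JavaScript in sight:

<style>
  nav a:hover {
    background-color: gold; /* the old image-rollover effect, now plain CSS */
  }
</style>

<form>
  <label for="email">Email</label>
  <!-- the browser itself refuses to submit the form while this field is empty -->
  <input type="email" id="email" name="email" required>
  <button>Sign up</button>
</form>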

That’s what JavaScript is kind of supposed to do. When we were trying to do responsive images, we had JavaScript solutions until we got something further down the stack, in HTML.
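Responsive images are a good example of that journey. Here’s a minimal sketch, with made-up file names, where the markup now does what a JavaScript library once did:

<!-- the browser picks the most appropriate file for the viewport width and screen density -->
<img
  src="photo-small.jpg"
  srcset="photo-small.jpg 600w, photo-medium.jpg 1200w, photo-large.jpg 2400w"
  sizes="(min-width: 40em) 50vw, 100vw"
  alt="A description of the photo">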

I do feel overwhelmed by the pace of change. But I’m starting to feel a little better about feeling overwhelmed, that it’s okay. JavaScript is meant to feel overwhelming. It’s where we try stuff out. It’s where stuff moves fast.

Now, the other thing I realized by mapping our technology stack of the web onto this pace layer model is that this is how I build. When I’m building a website, I pretty much start at the third layer. I don’t worry about whether the Internet is on.

I start with URLs. I think URL design is a really good place to start designing. It is a design discipline, a neglected one, but it is design. Then I think about the content, and then I structure that content using the best available markup in HTML. I think about the presentation, maybe on a small screen first, and then the presentation on larger screens, using CSS. Then I start thinking about extra behaviors that I can’t get with HTML and CSS, so I reach for JavaScript to add those extra behaviors.
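As a rough sketch of that order of work (the names, the URL, and the script here are all hypothetical, and the details would vary from project to project):

<!-- 1. The URL: something readable and persistent, e.g. /articles/building-in-layers -->

<!-- 2. The content, structured with HTML: usable even if nothing below this layer loads -->
<article>
  <h1>Building in layers</h1>
  <p>The content works on its own.</p>
</article>

<!-- 3. The presentation: small screens first, larger screens as an enhancement -->
<style>
  article { padding: 1rem; }
  @media (min-width: 40em) {
    article { max-width: 40rem; margin: 0 auto; }
  }
</style>

<!-- 4. Extra behaviour that HTML and CSS can't provide, added last -->
<script>
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/service-worker.js'); // offline support as a bonus layer
  }
</script>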

This seems to me to make sense as a way of building on the web because it maps to the structure of the pace layers of the web. But it’s also a testament to the flexibility of the web that you don’t have to build this way. If you don’t want to build in this layered way, you don’t have to.

In fact, you can build like this. You can put something on the web but do everything in JavaScript. URL routing? Let’s do that in the browser, in JavaScript. The Document Object Model? Let’s generate that in the browser, in JavaScript. CSS? Apparently we’re doing it in JS now.

Everything in JavaScript. This is an absolutely legitimate choice. You can choose to build things on the web like this. The web allows this. Again, it’s a testament to the flexibility of the web.

Now, personally, I don’t build like this and this doesn’t feel quite right to me. It doesn’t feel like it maps to the web too well. It kind of turns it into this all or nothing situation where, as long as we’ve got JavaScript, everything is going to be great. But if we don’t, there’s nothing.

You end up with this situation where we’ve turned what we’re building on the web into a binary situation. Either it works great or it just doesn’t work at all. There’s this kind of single point of failure there with the JavaScript.

Now, this model makes complete sense in other mediums. I think other mediums have influenced our thinking on the web. Maybe we’ve borrowed the metaphors of these other mediums.

For example, if you’re building a native app, this makes complete sense. If you’re building an iOS app and I have an iOS device, it works great. I get 100% of what you designed. But if you build an iOS app and I have, say, an Android device, it doesn’t work at all. You can’t install an iOS app onto an Android device. Those are your options: either it works great or it doesn’t work at all. This mental model makes complete sense in that field.

On the web, because we can have this layered approach, we can build like this. We can go from something that doesn’t work at all, to something that just about works—maybe it’s just text on a screen—to something that works fine—maybe it’s missing a bunch of behaviors, but the user can accomplish what they want to do—to something that works well—maybe the latest and greatest browser APIs aren’t supported by a particular browser—and then to something that works great, like the latest browser running on the best device with a great network.

Building in layers.

Most people are going to be somewhere on this continuum. Maybe nobody is going to get 100% of what you hope they get, but no one is going to get zero percent either as long as you’re building in this way, as long as you’re building with the grain of the web, building in layers, one thing on top of the other.

I’m going to quote Ethan here. Hi, Ethan. Ethan said:

I like designing in layers. I love looking at the design of a page, a pattern, whatever, and thinking about how it would change if, say, fonts aren’t available or JavaScript doesn’t work or if someone doesn’t see the design as you and I might and is having the page read aloud to them.

In a way, this is a way of busting assumptions, the what-ifs. What if something isn’t supported? By building in a layered way, it’s okay. Everything will fall back to the layer below, the adjacent possible.

Now, Ethan, of course, we all know from this article, Responsive Web Design, published on A List Apart. When was that? 2010. My God, nine years ago. Ten years after John Allsopp published A Dao of Web Design on A List Apart. One of the first things Ethan does in this article is to reference A Dao of Web Design. You could say that Ethan was building on top of that foundational layer set down by John Allsopp.

Architecture again. Responsive web design. The reason why Ethan chose that term was because there was this idea in architecture called responsive architecture about buildings that could respond to the conditions of the people in the buildings. That made a really good metaphor for talking about the web on large screens, small screens, and everything in between.

This architecture thing, as a metaphor, it’s not bad. We can learn from it. I think, just be careful not to take it too far.

It’s not the only metaphor we use. Here’s another one. When we don’t talk about ourselves as architects, we’re engineers. Yeah.

Engineers

It sounds good. This one predates the web. We’ve been talking about the idea of software engineering for a long time. I’m very partial to this term, software engineering; not so much for the term itself, not that I think it’s a particularly good metaphor, but for where it comes from, which is fricken’ awesome.

Margaret Hamilton.

The term “software engineering” comes from Margaret Hamilton. Margaret Hamilton was in charge of the onboard flight software on the Apollo moon landing. This is engineering. That is the code base she’s standing next to there, which would then literally be woven into the computers onboard Apollo.

But as a metaphor, engineering, well, there’s a whole bunch of different kinds of it. What kind of engineer are we talking about here? Is it material engineering, structural engineering, chemical engineering, aeronautical engineering? They all have commonalities. One being, as an engineer, you’ve got to know two things. There’s the materials you’re going to be working with and the tools you’re going to use to shape those materials.

Now, I think that we can use that metaphor and apply it to the web. I would say the materials of the web are HTML, CSS, and JavaScript, hopefully in that order. Then we’ve got the tools we use to design for the materials of the web.
Now, the most obvious tools we could think of are graphic design tools. We started using Photoshop even though that was never intended for Web design. Since then, we’ve evolved and we’ve got tools that are much more focused on the web, things like Sketch, Figma, and all this kind of stuff.

These are the obvious tools we use to build the web, but there are less obvious tools. If you were working on a web project, these tools also get used. You’re going to be talking over email. You’re going to be communicating over Slack, organizing spreadsheets. Spreadsheets, people.

We talk about these as productivity tools, though sometimes I know it feels like they are reducing productivity rather than increasing it. But it’s kind of a misnomer when you think about productivity tools. All tools are productivity tools. That’s literally what tools are for: to make you more productive.

I think we should acknowledge that these are legitimate design tools. You can’t launch a project without putting in some time with some kind of communication tool.

Then, when it comes to the actual welding of these materials, we’ve got a whole bunch of tools that sit on our machines or on our web servers. Now I feel like I’m back up at that top layer of the pace layers and I’m getting overwhelmed with the task runners, the build tools, the toolchains, the transpilers, and the preprocessors. Apparently, it changes every week. Oh, you’re still using Grunt? No, we’re using Gulp. No, Webpack. That’s what’s so overwhelming.

It also feels like it’s quite complicated. This is complicated stuff, but it’s like we’ve chosen it. We’ve chosen to make our lives complicated, in a way.

I’ll tell you what it reminds me of. Do you remember that startup, Juicero?

They sold a big, expensive, complicated machine to make juice, but you had to buy exactly the right juice packets to put in the big, expensive machine to make the juice. It works. It works great. The big, expensive, complicated machine does its job, but somebody noticed that you could actually just take the packets and squeeze them by hand and it still produces juice. I’m just saying that squeezing by hand is still an option. You can build websites by squeezing by hand. (I think this metaphor has been stretched just about as far as it can go, so I will leave it there.)

There’s this other kind of spectrum, I guess, between the materials and the tools and then the people that will be exposed to the materials and the tools. They kind of fall into two categories: the engineers themselves and the end-users.

When we’re evaluating our tools and asking, “Is this the right tool to use?” we should evaluate it from our perspective, yes, “Is this going to be a helpful tool to me as an engineer?” if we’re using that metaphor. But I strongly feel we should also ask, “Is this going to be useful for the end-user?”

If those two things come into conflict, what then? Do we privilege our own experience over the user experience? I would hope not. I worry about that with a lot of tool choices, particularly for stuff that gets sent down to the browser. “Oh, I’m going to use a CSS framework.” Great. Good for you. That’s helping you out, but now the user has to pay the cost of the benefit that you get from that CSS framework, because they have to download the whole CSS framework.

Sometimes these things come into conflict, and I feel like maybe we privilege the developer experience over the user experience, and that worries me. Other times, they don’t come into conflict. All those tools like preprocessors and task runners just sit on your own computer and have no direct effect on the end-user experience. Frankly, use whatever you like. It doesn’t have a direct effect on the end-user experience.

When we’re evaluating tools, there are all these questions to ask. Who benefits from the tool? If I choose to use this tool, will it benefit the users? Will it benefit the engineers? Neither? Both?

There are other questions we ask, like: just how good is this tool? To evaluate that, we ask: how well does it work? Does this tool do what it says it will do well?

This, of course, is a completely valid question to ask but there’s a corollary that I think is more valid and that’s to ask not just how well does it work but how well does it fail?

What happens when something goes wrong?

This is exactly why I think this layered approach makes sense because, if you build in this layered way, each one of these layers can fail well. If you build like this, then JavaScript can fail well. What if something goes wrong and you’ve got an error in your JavaScript? You fall back to something that still works. Not as great as it worked before, but it still works. It fails well.

These technologies on the web, they fail well by design. CSS fails well. Use a CSS property or value the browser doesn’t understand and the browser just ignores it. It fails well.

HTML: Make up an HTML element. Throw it into a webpage. The browser doesn’t throw an error. The browser doesn’t stop parsing the webpage. It just ignores it and moves on. It fails well.
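Here’s a minimal sketch of that fail-well behaviour, using a deliberately made-up property and a made-up element:

<style>
  p {
    color: darkslategray;
    text-wobble: 5deg; /* not a real property: the browser skips this line and carries on */
  }
</style>

<p>
  This paragraph still renders, and
  <made-up-element>the text inside an unknown element still shows up</made-up-element>,
  because HTML and CSS are designed to fail well.
</p>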

It actually makes sense to not jump ahead to the powerful stuff, to the top of the pace layers, but to try and build in layers and stay low for as long as possible. This is actually a principle, a principle that underlies the architecture of the web itself called the Principle of Least Power. You should choose the least powerful language for a given purpose, which seems really counterintuitive.

Why would I choose the least powerful language to do something? Surely, I want more power. The idea here is that power comes at an expense. Power comes at the expense of complexity and fragility. A more powerful technology is maybe more likely to fail badly.

Derek Featherstone put it well. He said:

In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof. It just works.

The example there was rollovers. How are you going to do rollovers? Do it in JavaScript? No, do it in CSS. :hover - done. Right? Oh, you need to make an interactive button? Use the button element. Be lazy.
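To sketch that “be lazy” choice: a hypothetical menu toggle where the button element does the heavy lifting and JavaScript only adds the one behaviour HTML can’t declare on its own.

<!-- a real button gives us keyboard focus, the right role, and Enter/Space handling for free -->
<button type="button" id="menu-toggle" aria-expanded="false">Menu</button>

<script>
  const toggle = document.getElementById('menu-toggle');
  toggle.addEventListener('click', function () {
    const expanded = toggle.getAttribute('aria-expanded') === 'true';
    toggle.setAttribute('aria-expanded', String(!expanded)); // flip the state; CSS can show or hide the menu from here
  });
</script>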

This makes a lot of sense, the Principle of Least Power. It makes a lot of sense to me on the web, especially when you combine it with a universal law that definitely applies on the web, and that’s Murphy’s Law:

Anything that can possibly go wrong will go wrong.

This comes directly from the world of engineering. Edward Aloysius Murphy Jr. was an aerospace engineer. It’s because he had this attitude that he never lost anybody on his watch.

I think we tend to dismiss things going wrong as edge cases. We kind of assume the average case. Other industries, when they’re making cars, they test them. They strap crash test dummies in. They smack them into walls at high speed.

To be fair, a lot of the reason why they have to do that is because of regulation. They didn’t necessarily choose to do it, but still. Can you imagine if they went, well, actually, we realize that most people are going to drive cars on roads and people driving into walls is an edge case, so we’re not going to worry too much about that?

Now, obviously, you want to hope for the best but you should prepare for the worst. Trent Walton said:

Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability.

The web’s inherent variability, that gets to the heart of it.

What David Siegel was trying to battle with those pixel-perfect layouts was the web’s inherent variability. What John Allsopp was calling for was for us to embrace the web’s inherent variability. It’s a feature, not a bug.

Are we engineers? Can we call ourselves engineers? Well, let me tell you something from the world of structural engineering.

This is the plan for the Quebec Bridge in Canada, a cantilever bridge. Construction started at the start of the 20th century. There was a competition to see who would get to design and build the bridge, because that’s the way the industry works.

The engineer in charge was named Theodore Cooper. Now, originally, the bridge was meant to be 490 meters long, but Theodore Cooper changed the specification to make it 550 meters long, mostly because, up in Scotland, the Firth of Forth Bridge was the longest bridge in the world at the time, the longest cantilever. He wanted this bridge to exceed that, so he made the bridge longer, but he did not recalculate the already high stresses being placed on the material of the bridge.

Oh, also, Theodore Cooper refused to work on site. He was down in New York, supposedly overseeing construction from there. When it was proposed that somebody should check his calculations, he took that as a personal affront and said, “No, no, no. No, no, that won’t work,” so there were no code reviews happening on this project.

Now, someone was on site: a young engineer named Norman McLure. By August 6th, 1907, he had started to notice that the steel was bending, under a lot of stress. Then again, on August 27th, it had got worse.

Cooper was notified down in New York. He did send a telegram back to Quebec. He said, “Place no more load on Quebec bridge until all facts considered - stop.” But he was only implying that the work should stop. He never explicitly said, “Stop the work right now,” so the telegram was ignored and work continued.

On August 29th, 1907, the bridge collapsed. It was shortly before the end of the day. The whistle was just about to blow to signal the end of the working day. There were 86 workers on the bridge and 75 of them died.

Now, something started happening in Canada a few years after this, by 1925. Engineering schools in Canada started holding private ceremonies around graduation time. This was a ceremony that was separate from qualifications. This wasn’t about whether you were qualified to be an engineer. It was called The Ritual of the Calling of an Engineer. You would speak an obligation penned by Rudyard Kipling, which I won’t repeat here because it’s meant to stay within the confines of the ritual.

You would also receive an iron ring. This iron ring would be a symbol of pride of being an engineer, but also a symbol of humility. For the longest time, the myth persisted that the iron itself was made from the steel in the Quebec Bridge. It’s not true, but the Quebec Bridge certainly looms over the idea of the iron ring. You’d wear it on the little finger of your working hand, so it would brush against the paper or the computer keyboard during your working day as a constant reminder of your responsibility as an engineer.

The iron ring.

When we call ourselves engineers, I do have to ask, have we earned it? Do we take our responsibility seriously?

Maybe we don’t call ourselves engineers, but then what do we call ourselves? Does it even matter?

Builders

Well, we could go back to that original metaphor from the ’90s, under construction. Maybe we’re builders. We build things. The web is under construction. We’re the ones constructing it. It’s not so bad, you know, to be the ones literally building the web. It’s kind of awesome when you think about it.

Christopher Alexander, when he was talking about his reason for coming up with A Pattern Language, was because he said:

Most of the wonderful places in the world were not made by architects but by the people.

Maybe we’re at the bottom of the layer stack here as workers just building the web, but maybe we also have all the power — more power than we realize. Our collective power is greater than anything any architect could wield.

Yeah, maybe we’re builders. Maybe we’re bricklayers. I know Simon comes from a long line of bricklayers. It is a noble profession. Think about what our building blocks are, the building blocks of the World Wide Web.

The World Wide Web, I think, is the next great leap forward. We had language, writing, the printing press, and now hypertext in the form of the World Wide Web. Who gets to build it? We do, with this kind of building block: the URL, a link. What an amazing building block that is.

I can make a webpage and put two links on it linking to two different things. That combination of those two links has never existed before in the history of the web. We’ve created something new, link by link, building block by building block, building in layers.

I’m reminded of an apocryphal story, maybe from medieval times—who knows—of a traveler coming across three workers. All three workers are doing the same thing. They’re building. They’re moving stones. They’re putting stones one on top of the other.

The traveler says to the first builder, “What are you doing?”

He says, “Oh, I’m moving stones.”

He says to the second builder, “What are you doing?” 
He says, “I’m building a wall.”

He says to the third builder, “What are you doing?”

He says, “I’m building a cathedral.”

They’re all doing the same task but thinking about it in different ways. Maybe that’s what we need to do. Forget about labels, metaphors, architecture, engineer, building, whatever. Just think about what a privilege it is to be doing this, to embrace the fact that we are the builders. We are the bricklayers.

Today, for example, we’re going to hear from quite an amazing collection of bricklayers that I’m really looking forward to hearing from. I want to hear what they’re building. I want to hear their stories of how they built it, why they built it.

But to do that, I need to stop moving air over these vocal cords and flapping this fleshy piece of meat around in my mouth and just stop talking. Thank you for listening.

Living in Alan Turing’s Future | The New Yorker

Portrait of the genius as a young man.

It is fortifying to remember that the very idea of artificial intelligence was conceived by one of the more unquantifiably original minds of the twentieth century. It is hard to imagine a computer being able to do what Alan Turing did.

Thursday, January 2nd, 2020

The Rise Of Skywalker

If you haven’t seen The Rise Of Skywalker, avert your gaze for I shall be revealing spoilers here…

I wrote about what I thought of The Force Awakens. I wrote about what I thought of The Last Jedi. It was inevitable that I was also going to write about what I think of The Rise Of Skywalker. If nothing else, I really enjoy going back and reading those older posts and reminding myself of my feelings at the time.

I went to a midnight screening with Jessica after we had both spent the evening playing Irish music at our local session. I was asking a lot of my bladder.

I have to admit that my first reaction was …ambivalent. I didn’t hate it but I didn’t love it either.

Now, if that sounds familiar, it’s because that’s pretty much what I said about Rogue One and The Last Jedi:

Maybe I just find it hard to really get into the flow when I’m seeing a new Star Wars film for the very first time.

This time there were very specific things that I could point to and say “I don’t like it!” For a start, there’s the return of Palpatine.

I think the Emperor has always been one of the dullest characters in Star Wars. Even in Return Of The Jedi, he just comes across as a paper-thin one-dimensional villain who’s evil just because he’s evil. That works great when he’s behind the scenes manipulating events, but it makes for dull on-screen shenanigans, in my opinion. The pantomime nature of Emperor Palpatine seems more Harry Potter than Star Wars to me.

When I heard the Emperor was returning, my expectations sank. To be fair though, I think it was a very good move not to make the return of Palpatine a surprise. I had months—ever since the release of the first teaser trailer—to come to terms with it. Putting it in the opening crawl and the first scene says, “Look, he’s back. Don’t ask how, just live with it.” That’s fair enough.

So in the end, the thing that I thought would bug me—the return of Palpatine—didn’t trouble me much. But what really bugged me was the unravelling of one of my favourite innovations in The Last Jedi regarding Rey’s provenance. I wrote at the time:

I had resigned myself to the inevitable reveal that would tie her heritage into an existing lineage. What an absolute joy, then, that The Force is finally returned into everyone’s hands!

What bothered me wasn’t so much that The Rise Of Skywalker undoes this, but that the undoing is so unnecessary. The plot would have worked just as well without the revelation that Rey is a Palpatine. If that revelation were crucial to the story, I would go with it, but it just felt like making A Big Reveal for the sake of making A Big Reveal. It felt …cheap.

I have to say, that’s how I responded to a lot of the kitchen sink elements in this film when I first saw it. It was trying really, really hard to please, and yet many of the decisions felt somewhat lazy to me. There were times when it felt like a checklist.

In a way, there was a checklist, or at least a brief. JJ Abrams has spoken about how this film needed to not just wrap up one trilogy, but all nine films. But did it though? I think I would’ve been happier if it had kept its scope within the bounds of these new sequels.

That’s been a recurring theme for me with all three of these films. I think they work best when they’re about the new characters. I’m totally invested in them. Leaning on nostalgia and the cultural memory of the previous films and their characters just isn’t needed. I would’ve been fine if Luke, Han, and Leia never showed up on screen in this trilogy—that’s how much I’m sold on Rey, Finn, and Poe.

But I get it. The brief here is to tie everything together. And as JJ Abrams has said, there was no way he was going to please everyone. But it’s strange that he would attempt to please the most toxic people clamouring for change. I’m talking about the racists and misogynists that were upset by The Last Jedi. The sidelining of Rose Tico in The Rise Of Skywalker sure reads a lot like a victory for them. Frankly, that’s the one aspect of this film that I’m always going to find disappointing.

Because it turns out that a lot of the other things that I was initially disappointed by evaporated upon second viewing.

Now, I totally get that a film needs to work for a first viewing. But if any category of film needs to stand up to repeat viewing, it’s a Star Wars film. In the case of The Rise Of Skywalker, I think that repeat viewing might have been prioritised. And I’m okay with that.

Take the ridiculously frenetic pace of the multiple maguffin-led plotlines. On first viewing, it felt rushed and messy. I got the feeling that the double-time pacing was there to brush over any inconsistencies that would reveal themselves if the film were to pause even for a minute to catch its breath.

But that wasn’t the case. On second viewing, things clicked together much more tightly. It felt much more like a well-oiled—if somewhat frenetic—machine rather than a cobbled-together Heath Robinson contraption that might collapse at any moment.

My personal experience of viewing the film for the second time was a lot of fun. I was with my friend Sammy, who is not yet a teenager. His enjoyment was infectious.

At the end, after we see Rey choose her new family name, Sammy said “I knew she was going to say Skywalker!”

“I guess that explains the title”, I said. “The Rise Of Skywalker.”

“Or”, said Sammy, “it could be talking about Ben Solo.”

I hadn’t thought of that.

When I first saw The Rise Of Skywalker, I was disappointed by all the ways it was walking back the audacious decisions made in The Last Jedi, particularly Rey’s parentage and the genetic component to The Force. But on second viewing, I noticed the ways that this film built on the previous one. Finn’s blossoming sensitivity keeps the democratisation of The Force on the table. And the mind-melding connection between Rey and Kylo Ren that started in The Last Jedi is crucial for the plot of The Rise Of Skywalker.

Once I was able to get over the decisions I didn’t agree with, I was able to judge the film on its own merits. And you know what? It’s really good!

On the technical level, it was always bound to be good, but I mean on an emotional level too. If I go with it, then I’m rewarded with a rollercoaster ride of emotions. There were moments when I welled up (they mostly involved Chewbacca: Chewie’s reaction to Leia’s death; Chewie getting the medal …the only moment that might have topped those was Han Solo’s “I know”).

So just in case there’s any doubt—given all the criticisms I’ve enumerated—let me be clear: I like this film. I very much look forward to seeing it again (and again).

But I do think there’s some truth to what Eric says here:

A friend’s review of “The Rise of Skywalker”, which also serves as a perfect summary of JJ Abrams’ career: “A very well-executed lack of creativity.”

I think I might substitute the word “personality” for “creativity”. However you feel about The Last Jedi, there’s no denying that it embodies the vision of one person:

I think the reason why The Last Jedi works so well is that Rian Johnson makes no concessions to my childhood, or anyone else’s. This is his film. Of all the millions of us who were transported by this universe as children, only he gets to put his story onto the screen and into the saga. There are two ways to react to this. You can quite correctly exclaim “That’s not how I would do it!”, or you can go with it …even if that means letting go of some deeply-held feelings about what could’ve, should’ve, would’ve happened if it were our story.

JJ Abrams, on the other hand, has done his utmost to please us. I admire that, but I feel it comes at a price. The storytelling isn’t safe exactly, but it’s far from personal.

The result is that The Rise Of Skywalker is supremely entertaining—especially on repeat viewing—and it has a big heart. I just wish it had more guts.

Wednesday, January 1st, 2020

Bound in Shallows: Space Exploration and Institutional Drift

If a human civilization beyond Earth ever comes into being, this will be unprecedented in any historical context we might care to invoke—unprecedented in recorded history, unprecedented in human history, unprecedented in terrestrial history, and so on. There have been many human civilizations, but all of these civilizations have arisen and developed on the surface of Earth, so that a civilization that arises or develops away from the surface of Earth would be unprecedented and in this sense absolutely novel even if the institutional structure of a spacefaring civilization were the same as the institutional structure of every civilization that has existed on Earth. For this civilizational novelty, some human novelty is a prerequisite, and this human novelty will be expressed in the mythology that motivates and sustains a spacefaring civilization.

A deep dive into deep time:

Record-keeping technologies introduce an asymmetry into history. First language, then written language, then printed books, and so and so forth. Should human history extend as far into the deep future as it now extends into the deep past, the documentary evidence of past beliefs will be a daunting archive, but in an archive so vast there would be a superfluity of resources to trace the development of human mythologies in a way that we cannot now trace them in our past. We are today creating that archive by inventing the technologies that allow us to preserve an ever-greater proportion of our activities in a way that can be transmitted to our posterity.

Tuesday, December 31st, 2019

Running Code Over Time – Eric’s Archived Thoughts

We should think of our code, even our designs, as running for decades, and alter our work to match.

Friday, December 20th, 2019

The Layers Of The Web

The opening presentation from the Beyond Tellerrand conference held in Berlin in November 2019.

Guten Morgen. All right. I’m just going to get started because I’ve got a lot to talk about and I’m very, very excited to be here.

I’m excited to talk about the web. I’ve been thinking a lot about the web. You know, I think a lot about the web all the time, but this year, in particular, thinking about where the web came from; asking myself where the web came from, which is kind of a dumb question because it’s pretty obvious where the web came from.

It came from this guy. This is Tim Berners-Lee and he is the creator of the World Wide Web. It was 30 years ago, March 1989, that he wrote a proposal while he was at CERN, a very dull-looking proposal called “Information Management: A Proposal” that had incomprehensible diagrams trying to explain what he had in mind. But a supervisor, Mike Sendall, saw the potential and scrawled across the top, “Vague but exciting.”

Tim Berners-Lee starts working on this idea he has for a global hypertext system, and he creates the world’s first web browser and the world’s first web server, which ran on this NeXT machine, now in the Science Museum in London, a lovely machine, the NeXT box.

I was in the neighbourhood so I just had to come by and say hello…

I have a great affection for it because, earlier this year, I was very honored to be invited to CERN, along with this bunch of hackers, to take part in a project related to the 30th anniversary of that proposal. I will show you a video that explains the project.

So, we came to CERN this week in order to create some sort of modern-day interpretation of the very first web browser.

—Kimberly Blessing

Well, the project is to restore the first browser which was developed by the inventor of the Web, and the idea is to create an experience for the people who could not use the web in its early days to have an idea how it felt to use the web at that time.

—Martin Akolo Chiteri

I think the biggest difficulty was to make the browser work in the NeXT machine that we had.

—Angela Ricci

We really needed to work with an original NeXT box in order to really understand what that experience was like in order to be able to write some code and replicate that experience.

—Kimberly Blessing

My role is code, so generating the code to create the interactive aspect of the World Wide Web browser, the recreated browser. It’s very much writing JavaScript to kind of create all the NeXT operating system UI, making requests to servers to go and get the HTML and massage the HTML back into a format that looks good in the World Wide Web browser; and making sure we end up with a URL that goes into production that someone can visit and see their own webpages. The tangible software is what I’m responsible for, so I have to make sure it all gets done. Otherwise, we have no browser to look at, basically.

—Remy Sharp

We got together a few years back to do a similar sort of hack project here at CERN which was creating the world’s second-ever web browser, which was the Line Mode browser. We had a lot of fun with it and it’s a great bunch of people from all over the world. It’s been really great to get back together and it’s always amazing to be here at CERN, to be at not just the birthplace of the Web, but the most important place on the planet for science.

Yeah, it’s been a lot of fun. I kind of don’t want it to be over because we are in our element, hacking away, having fun, and just soaking up the atmosphere, and we are getting to chat with people who were there 30 years ago, Jean-Francois Groff and Robert Cailliau, these people who were involved in the creation of the World Wide Web. To me, that’s amazing to be surrounded by so much World Wide Web history.

The plan is that this will go online and anyone will be able to access it because it’s on the web, and that’s the beautiful thing about the web is that anyone can visit a website, and so everyone will have the opportunity to try using the world’s first web browser and see what modern webpages would look like if they were passed through this first web browser.

—Jeremy Keith

Well, spoiler alert. The project was a success and you can, indeed, look at your websites in a recreation of the first-ever web browser. This is the URL. It’s worldwideweb.cern.ch.

Success, that was good. But as you could probably tell from that video, Remy was the one basically making this all happen. He was the one writing the JavaScript to recreate this in a modern browser. This is the first-ever web page viewed in the first-ever web browser.

As you gathered, again, I was really fascinated by the history of the Web, like, where did it come from, and the people who were there at the time and getting to pick their brains. I spent most of my time working on the accompanying website to go with this project. I was creating this timeline.

Timeline

Because this was to mark the 30th anniversary of this proposal, I thought, well, we could easily look at what has happened in the last 30 years: websites, web servers, formats, standards - all that stuff. But I thought it would be fascinating to look at the previous 30 years as well and try and figure out the things that were happening that influenced Tim Berners-Lee in terms of hypertext, networks, computing, and all this stuff.

But I’d kind of given myself this arbitrary cut-off point of 30 years to make this nice symmetry of it being the 30th anniversary of the World Wide Web. I could go further back. I could start asking, well, what happened before 30 years ago? What were the biggest influences on Tim Berners-Lee and the World Wide Web?

Now, if you were to ask Tim Berners-Lee himself who his biggest influencers were, he would give you a straight-up answer. He will say his biggest influencers were Conway Berners-Lee and Mary Lee Woods, his father and mother, which is fair enough. Normally, when you ask people who their influences are, they say, “Oh, my parents. They gave me a loving environment. They kindled my curiosity,” and all that stuff. I’m sure that’s true but, in this case, it was also a big influence in a practical sense in that both Mary and Conway worked on the Ferranti Mark 1. That’s where they met. They were programmers. Tim Berners-Lee’s parents were programmers on the Ferranti Mark 1, a very early computer. This is in the 1950s in Britain.

Okay, this feels like a good origin story for the web, right? They were working on this early computer.

But it’s an early computer; it’s not the first computer. Maybe I need to go back further. How far back do I go to find the first computer?

Time machine.

Is this the first computer, the Antikythera mechanism? You can see this in a museum in Athens. This was recovered from a shipwreck. It was recovered at the start of the 20th Century, but it dates back thousands of years, a mechanism for predicting the position of stars and planets. It does calculations. It is a calculating device. Not a programmable computer as such, though.

Worshipping the reliquary of Babbage’s brain

If you’re thinking about the origins of the idea of a programmable computer, I think we could start to look at this gentleman, Charles Babbage. This is half of Charles Babbage’s brain, which is in the Science Museum in London along with that original NeXT box that the World Wide Web was created on. The other half is in the Computer History Museum in California.

Charles Babbage lived in the 19th Century, and kind of got a lot of seed funding from the U.K. government to build a device, the Difference Engine, which would do calculations. Later on, he scrapped that and started working on the Analytical Engine which would be even better — a 2.0 version. It never got finished, by the way, but it was a really amazing idea because you could see the architecture of like a central processing unit, but it was still fundamentally a calculator, a calculating machine.

The breakthrough in terms of programming maybe came from Charles Babbage’s collaborator. This is Ada Lovelace. She was translating documents by an Italian mathematician about Difference Engines and calculations. She realized that—hang on—if we’re doing operations on numbers, what if those numbers could stand for other concepts, non-numerical like words or thoughts? Then we could do operations on things other than numbers, which is exactly what we do today in modern computing.

If you use a word processor, you’re not processing words; you’re operating on ones and zeros. If you use a graphics program, you’re not actually moving pixels around; you’re operating on ones and zeroes. This idea that ones and zeros could stand in for anything, not just numbers, kind of started with Ada Lovelace.

But, as I said, the Difference Engine and the Analytical Engine, they never got finished, and this was kind of a dead-end. It turns out, they weren’t an influence.

Later on, for example, this genius who was definitely responsible for the first working computers, Alan Turing, he wasn’t aware of the work of Babbage and Lovelace, which is a shame. He was kind of working in isolation.

He came up with the idea of the universal machine, the Turing Machine. Give it an infinitely long tape and enough state, enough time, you could calculate literally anything, which is pretty much what computers are.

He was working at Bletchley Park breaking the code for the Enigma machines, and that leads to the creation of what I think would be the first programmable computer. This is Colossus at Bletchley Park. This was created by a colleague of Turing, Tommy Flowers.

It is programmable. It’s using valves, but it’s absolutely programmable. It was top secret, so even for years after the war, this was not known about. In the history books, even to this day, you’ll often see ENIAC listed as the first programmable computer, but I think that honor goes to Tommy Flowers and Colossus.

By the way, Alan Turing, after the war, after 1945, he did go on to work and keep on working in the field of computing. In fact, he worked as a consultant at Ferranti. He was working on the Ferranti Mark 1, the same computer where Tim Berners-Lee’s parents met when they were programmers.

As I say, that was after the war ended in 1945. Now, we can’t say that the work at Bletchley Park was responsible for winning the war, but we could probably say that it’s certainly responsible for shortening the war. If it weren’t for the work done by the codebreakers at Bletchley Park, the war might not have finished in 1945.

1945 is the year that this gentleman wrote a piece that was certainly influential on Tim Berners-Lee. This is Vannevar Bush, a scientist, a thinker. In 1945, he published a piece in the Atlantic Monthly under the heading, “A Scientist Looks at Tomorrow.” The piece was called “As We May Think.”

In this piece, he describes an imaginary device. It’s a mechanical device inside a desk, and the operator is allowed to work on reams and reams of microfilm and to connect ideas together, make these associative trails. This is kind of like hypertext before the word hypertext has been coined. Vannevar Bush calls this device the Memex. That’s published in 1945.

Memex

Also, in 1945, this young man has been drafted into the U.S. Navy and he’s shipping out to the Pacific. His name is Douglas Engelbart. Literally as the ship is leaving the harbor to head to the Pacific, word comes through that the war is over.

Now, he still gets shipped out to the Pacific. He’s in the Philippines. But now, instead of fighting against the Japanese, he’s lounging around in a hut on stilts reading magazines and that’s where he reads “As We May Think,” by Vannevar Bush.

Fast-forward years later; he’s trying to decide what to do with his life other than settle down, get married, have a job, you know, that kind of thing. He thinks, “No, no, I want to make the world a better place.” He realizes that computers could be the way to do this if they could implement something very much like the Memex. Instead of a mechanical device, what if computers could create the Memex, this hypertext system? He devotes his life to this and effectively invents the field of Human Computer Interaction.

On December 9th, 1968, he demonstrates what he’s been working on. This is in San Francisco, and he demonstrates bitmap screens. He demonstrates real-time collaboration on documents, working hypertext …and also he invents the mouse for the demo.

We have a pointing device called a mouse, a standard keyboard, and a special key set we have here. And we are going to go for a picture down on our laboratory in Menlo Park and pipe it up. It’ll show you, from another point of view, more about how that mouse works.

Come in, Menlo Park. Okay, there’s Don Anders’ hand in Menlo Park. In a second, we’ll see the screen that he’s working and the way the tracking spot moves in conjunction with movements of that mouse. I don’t know why we call it a mouse sometimes. I apologize; it started that way and we never did change it.

—Douglas Engelbart

This was ground-breaking. The mother of all demos, it came to be known as. This was a big influence on Tim Berners-Lee.

At this point, we’ve entered the time cone of those 30 years before the proposal that Tim Berners-Lee made, which is good because this is the moment where I like to branch off from this timeline and sort of turn it around.

The question I’m sure nobody is asking—because you saw there was a video link-up there; Douglas Engelbart is in San Francisco, and he has a video link-up with Menlo Park to demonstrate real-time collaboration with computers—the question nobody is asking is, who is operating the video camera in Menlo Park?

Well, I’ll tell you the answer to that question that nobody is asking. The man operating the video camera in Menlo Park is this man. His name is Stuart Brand. Now, Stuart Brand has spent most of the ‘60s doing what you would do in the ‘60s; he was dropping acid. This was all kosher. This was before it was illegal.

He was on the Merry Pranksters bus with Ken Kesey and, on one particular acid trip, he literally saw the Earth curving away and realized that, yeah, we’re all on one planet, man! And he started a campaign with badges called, “Why haven’t we seen a photograph of the whole Earth yet?” I like the “yet” part in there like it’s a conspiracy that we haven’t seen a photograph of the whole Earth.

He was kind of onto something here, realizing that seeing our planet as a whole planet from space could be a consciousness-changing thing much like LSD is a consciousness-changing thing. Sure enough, people did talk about the effect it had when we got photographs like Earthrise from Apollo 8, and he used those pictures when he published the Whole Earth Catalog, which was a series of books.

The Whole Earth Catalog was basically like Wikipedia before the internet. It was this big manual of how to do everything. The idea was, if you were running a commune, living in a commune, you needed to know about technology, and agriculture, and weather, and all the stuff, and you could find that in the Whole Earth Catalog.

He was quite an influential guy, Stuart Brand. You’ve probably heard the Steve Jobs commencement speech where he quotes Stuart Brand, “Stay hungry, stay foolish,” all that stuff.

Stuart Brand also did a lot of writing. After Douglas Engelbart’s demo, he started to see that this computer thing was something else. He literally said computers are the new LSD, so he starts really investigating computing and computers.

He writes this great article in “Rolling Stone” magazine in 1972 about Spacewar, one of the first games you could play on the screen. But he has a wide range of interests. He kind of kicked off the environmental movement in some ways.

At one point, he writes a book about architecture. He writes a book called “How Buildings Learn.” There’s a television series that goes with it as well. This is a classic book (the definition of a classic book being a book that everyone has heard of and nobody has read).

In this book, he starts looking at the work of a British architect called Frank Duffy. Frank Duffy has this idea about architecture he calls shearing layers. The way that Frank Duffy puts it is that a building, properly conceived, consists of several layers of longevity, so kind of different rates of change.

He diagrams this out in terms of a building, and you see that you’ve got the site that the building is on that’s moving at a geological timescale, right? That should be around for thousands of years, we would hope.

Then you’ve got the actual structure that could stand for centuries.

Then you get into the infrastructure inside. You know, the plumbing and all that, you probably want to swap out every few decades.

Basically, until you get down to the stuff inside a room, the furniture that you can move around on a daily basis. You’ve got all these timescales moving from fast to slow as you move inwards into the house.

What I find fascinating about this idea of these different layers as well is the way that each layer depends on the layer below. Like, you can’t have the structure of a building without first having a site to put it on. You can’t move furniture around inside a room until you’ve made the room using the walls and the doors, right? This idea of shearing layers is kind of fascinating, and we’re going to get back to it.

Something else that Stuart Brand went on to do; he was one of the co-founders of the Long Now Foundation. Anybody here part of the Long Now Foundation? Any members of the Long Now Foundation? Ah… It’s a great organization. It’s literally dedicated to long-term thinking. It was founded by Stuart Brand and Danny Hillis, the computer scientist, and Brian Eno, the musician and producer. Like I said, dedicated to long-term thinking. This is my membership card made out of a durable metal because it’s got to last for thousands of years.

If you go on the website of the Long Now Foundation, you’ll notice that the years are made up of five digits, so instead of 2019, it will be 02019. Well, you know, you’ve got to solve the Y10K problem. They’re dedicated to long-term thinking, to trying to think in the longer now.

Admiring the Clock Of The Long Now

One of the most famous projects is the clock of the Long Now. This is a clock that will tell time for 10,000 years. Brian Eno has done the chimes. They’re generative. It’ll never chime the same way twice. It chimes once a century. This is a scale model that’s in the Science Museum in London along with half of Charles Babbage’s brain and the original NeXT machine that Tim Berners-Lee created World Wide Web on.

This is just a scale model. The full-sized clock is going to be inside a mountain in west Texas. You’ll be able to visit it. It’ll be like a pilgrimage. Construction is underway. I hope to visit the clock one day.

Stuart Brand collected his thoughts. It’s a really fascinating project when you think about, how do you design something to last 10,000 years? How do you communicate over 10,000 years? One of those tricky design problems almost like the Voyager Golden Record or the Yucca Mountain waste disposal. How do you communicate to future generations? You can’t rely on language. You can’t rely on semiotics. Anyway, he collected a lot of his thoughts into this book called “The Clock of the Long Now,” subtitled “Time and Responsibility: The Ideas Behind the World’s Slowest Computer.” He’s thinking about time. That’s when he comes back to shearing layers and these different layers of rates of change; different layers of time.

Stuart Brand abstracts the idea of shearing layers into something called “pace layers.” What if it’s not just architecture? What if any kind of system has these different rates of change, these layers?

He diagrams this out in terms of the human species, so think of humans. We have these different layers that we operate at.

At the lowest, slowest level, there is our nature, literally, like what makes us human in terms of our DNA. That doesn’t change for tens of thousands of years. Physiologically, there’s no difference between a caveman and an astronaut, right?

Then you’ve got culture, which accumulates over centuries, and the tribal identities we have around things like nations, language, and things like that.

Governance, models of governance, so not governments but governance, as in the way we choose to run things, whether that’s a feudal society, or a monarchy, or representative democracy, right? Those things do change, but not too fast, hopefully.

Infrastructure: you’ve got to keep up with the times, you know? This needs to move at a faster pace, again.

Commerce: much more fast-moving. Commerce needs to — you’re getting into the faster timescales there.

Then he puts fashion at the top. By fashion, he means anything that is supposed to be new and exciting, so that includes pop music, for example. The whole idea with fashion is that it’s there to try stuff out and discard it very quickly.

“What about this?” “No.” “What about that?” “Try this.” “No, try that.”

The good stuff, the stuff that kind of sticks to the wall, will maybe find its way down to the longer-lasting layers. Maybe a really good pop song from fashion ends up becoming part of culture, over time.

Here’s the way that Stuart Brand describes pace layers. He says:

Fast learns; slow remembers. Fast proposes; slow disposes. Fast is discontinuous; slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution, but slow and big controls small and fast by constraint and constancy.

He says:

Fast gets all of our attention but slow has all the power.

Pace layers is one of those ideas that, once you see it, you can’t unsee it. You know when you want to make someone’s life a misery, you just teach them about typography. Now they can’t unsee all the terrible kerning in the world. I can’t unsee pace layers. I see them wherever I look.

Does anyone remember this book, UX designers in the room, “The Elements of User Experience,” by Jesse James Garrett? It’s old now. We’re going back a ways but, in it, he’s got this diagram about the different layers of a user experience. You’ve got strategy at the bottom, finally ending up with the interface at the top.

I look at this, and I go, “Oh, right. It’s pace layers. It’s literally pace layers.” Each layer depending on the layer below, the slower layers at the bottom, the faster-moving things at the top.

With this mindset that pace layers are everywhere, I thought, “Can I map out the web in terms of pace layers, the technology stack of the web?” I’m going to give it a go.

At the lowest layer of the stack, the slowest moving, I would say there’s the internet itself, as in TCP/IP, the Transmission Control Protocol and Internet Protocol created by Bob Kahn and Vint Cerf in the ‘70s and pretty much unchanged since then, deliberately dumb, deliberately simple. All it does is move packets around. Pretty much unchanged.

On top of that, you get the other protocols that use TCP/IP, like in the case of the web, the hypertext transfer protocol. Now, this has changed over time. We now have HTTP/2. But it hasn’t been rapid change. It’s been gradual. Again, that kind of feels right. It feels good that HTTP isn’t constantly changing underneath us too much.

Then what we serve up over HTTP are URLs. I wish that URLs were down here. I wish that URLs were everlasting, never changing. But, unfortunately, I must acknowledge that that’s not true. Links die. We have to really work hard to keep them alive. I think we should work hard to keep them alive.

What do you put at those URLs? At the simplest level, it’s supposed to be plain text. But this is the web, so let’s say structured text. This is going to be HTML, the hypertext markup language, which Tim Berners-Lee came up with when he created the World Wide Web. I say, “Came up with.” He basically stole it from SGML that scientists at CERN were already using and sprinkled in one or two new tags, as they were called back then.

There were maybe like 20-something tags in HTML when Tim Berners-Lee created the web. Now we’ve got over 100 elements, as we call them. But I feel like I’ve been able to keep up with the pace of change. I mean, the last big kind of growth spurt with HTML was probably HTML5. That’s been a while back now. It’s definitely change that I can keep on top of.

Then we have CSS, the presentation layer. That feels like it’s been moving at a nice clip lately. I feel like we’ve been getting a lot of cool stuff in CSS, like Flexbox and Grid, and all this new stuff that browsers are shipping. Still, I feel like, yeah, yeah, this is good. It’s right that we get lots of CSS pretty rapidly. It’s not completely overwhelming.

Then there’s the JavaScript ecosystem. I specifically say the “JavaScript ecosystem” as opposed to the “JavaScript language” because the JavaScript language is being developed at a nice pace. I feel like it’s going at a good speed of standardization. But the ecosystem, the frameworks, the libraries, the build tools, all of that stuff, that feels like, “You know what? Try this. No, try that. What about this? What about that? Oh, you’re still using that framework? No, no, we stopped using that last week. Oh, you’re still using that build tool? No, no, no, that’s so … we’ve moved on.”

I find this very overwhelming. Can I get a show of hands of anybody else who feels overwhelmed by this rate of change? All right. Keep your hands up. Keep your hands up and just look around. I want you to see you are not alone. You are not alone.

But I tell you what; after mapping these layers out into the pace layer diagram, I realized, wait a minute. The JavaScript layer, the fashion layer, if you will, it’s supposed to be like that. It’s supposed to be trying stuff out. Throw this at the wall. No, throw that at the wall. How about this? How about that?

It’s true that the good stuff does stick. Like if I think back to the first uses of JavaScript—okay, I’m showing my age, but—when JavaScript first came along, we’d use it for things like image rollovers or form validation, right? These days, if I wanted to do an image rollover—you mouseover something and it changes its appearance—I wouldn’t use JavaScript. I’d use CSS because we’ve got :hover.

If I was doing a form validation, like, “Oh, has that field actually been filled in?” because it’s required and, “Does that field actually look like an email address?” because it’s supposed to be an email address, I wouldn’t even use JavaScript. I would use HTML; input type="email" required. Again, the good stuff moves down into the sort of slower layers. Fast learns; slow remembers.

The other thing I realized when I diagrammed this out was that, “Huh, this kind of maps to how I approach building on the web.” I pretty much take it for granted that it’s going to be on the Internet. There’s not much I can do about that. Then I start thinking about URLs: URL-first design, the information architecture of a site. I think it’s underrated; I think more people should design URL-first. URL design, in general, is a really good place to start if you’re building a product or a service.

Then I think about the content in terms of structure. What is the most important thing on this page? That should be an H1. Is this a paragraph? Is it a list? Thinking about the structure first and then going on to think about the appearance, which is definitely the way you want to go if you’re making something responsive. Think about the structure first and then the appearance across all these different form factors.

Then finally, add in behavior with JavaScript. Whatever HTML and CSS can’t do, that’s what I will use JavaScript for to kind of enhance it from there.

This maps really nicely to how I personally approach building things on the web. But, it is a testament to the flexibility of the World Wide Web that, if you don’t want to build in this way, you don’t have to.

If you want, you could build like this. JavaScript is a really powerful language. If you wanted to do navigations and routing in JavaScript, you can. If you want to inject all your content into the page using JavaScript, you can. CSS in JS? You can. Right? I mean, this is pretty much the architecture of a single-page app. It’s on the internet and everything is in JavaScript. The internet is a delivery mechanism for a chunk of JavaScript that does everything: the markup, the CSS, the routing.

This isn’t how I approach building on the web. I kind of ask myself why this doesn’t feel quite right to me. I think it’s because of the way it kind of turns everything into a single point of failure, which is the JavaScript, rather than spreading out those points of failure. We’re on the Internet and, as long as the JavaScript runs okay, the user gets everything. It turns what you’re building into a binary proposition that either it doesn’t work at all or it works great. Those are your only two options.

Now I’ll point out that, in another medium, this would make complete sense. Like if you’re building a native app. If you build an iOS app and I’ve got an iOS device, I get 100% of what you’ve designed and built. But if you’re building an iOS app and I have an Android device, I get zero percent of what you’ve designed and built because you can’t install an iOS app on an Android device. Either it works great or it doesn’t work at all; 100% or zero percent.

The web doesn’t have to be like that. If you build in that layered way on the web, then maybe I don’t get 100% of what you’ve designed and built but I don’t get zero percent, either. I’ll get something somewhere along the way, hopefully closer to working great. It goes from not working at all, to just about working, to working fine, to working well, to working great.

You’re building up these layers of experience, the idea being that nobody gets left behind. Everybody gets something regardless of their device, their network, their browser. Everyone is not going to get the same experience, but everybody gets something. That feels very true to the original sort of founding ideas of the web and it maps so nicely to our technical stack on the web, the fact that you can start to think about things like URLs-first and think about the structure, then the presentation, and then the behavior.

I’m not the only one who likes thinking in this layered kind of way when it comes to the web. I’m going to quote my friend Ethan Marcotte. He says:

I like designing in layers. I love looking at the design of a page, a pattern, whatever, and thinking about how it will change if, say, fonts aren’t available, or JavaScript doesn’t work, or someone doesn’t see the design as you or I might and is having the page read aloud to them.

That’s a really good point that when you build in the layered way, you’re building in the resilience that something can fall back to a layer a little further down.

This brings up something I’ve mentioned here before at Beyond Tellerrand, which is that, when we’re evaluating technologies, the question we tend to ask is, how well does it work? That’s an absolutely valid question. You’re about to try a new tool, a new framework, a new standard. You ask yourself; how well does it work?

I think the more important question to ask is:

How well does it fail?

What happens if that piece of technology fails? That’s why I like this layered approach because this fails really well. JavaScript’s no longer a single point of failure. Neither is CSS, frankly. If the CSS never loaded, the user still gets something.

Now, this brings up an idea, a principle that definitely influenced Tim Berners-Lee. It was at the heart of his design principles for the World Wide Web. It’s called the Principle of Least Power that states, “Choose the least powerful language suitable for a given purpose,” which sounds really counterintuitive. Why would I choose the least powerful language to do something?

It’s kind of down to the fact that there’s a trade-off. With power, you get a fragility, right? Maybe something that is really powerful isn’t as universal as something simpler. It makes sense to figure out the simplest technology you can use to achieve a task.

I’ll give an example from my friend Derek Featherstone. He says:

In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

Again, he’s talking about the resilience you get by building in a layered way and choosing the least powerful technology.

A classic example is ARIA. The first rule of ARIA is: don’t use ARIA if you don’t have to. Rather than using a div and then adding the event handlers and the ARIA roles to make it look like a button, just use a button. Use the simpler technology lower in the stack.
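To make that concrete, here’s a rough sketch of the two approaches in JavaScript; the save function is a hypothetical handler, and even this div version is still missing things a real button gives you for free.

```js
// A hypothetical action the control triggers.
const save = () => { /* save something */ };

// The fragile way: a div dressed up as a button.
const fakeButton = document.createElement('div');
fakeButton.setAttribute('role', 'button'); // ARIA role so it announces as a button
fakeButton.tabIndex = 0;                   // divs aren't focusable by default
fakeButton.textContent = 'Save';
fakeButton.addEventListener('click', save);
fakeButton.addEventListener('keydown', (event) => {
  // real buttons respond to Enter and Space; divs don't, so we have to fake it
  if (event.key === 'Enter' || event.key === ' ') save();
});

// The simpler technology lower in the stack: just use a button.
const realButton = document.createElement('button');
realButton.textContent = 'Save';
realButton.addEventListener('click', save);
```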

Now, I get pushback on this because people will tell me, “Well, that’s fine if you’re building something simple, but I’m not building something simple. I’m building something complex.” Everyone likes to think they’re building something complex, right? Everyone is convinced they’re working on really hard things, which makes sense. That’s human nature.

If you’re at a cocktail party and someone says, “What do you do?” and you describe your work and they say, “Oh, okay. That sounds really easy,” you’d be offended, right?

If you’re at a party and someone says, “What do you do?” you describe your work and they go, “Wow, God, that sounds hard,” you’d be like, “Yeah! Yeah, it is hard. What I do is hard.”

I think we gravitate to this, especially when someone markets it as, “This is a serious tool for serious, complex sites.” I’m like, “That’s me. I’m working on a serious, complex site.”

I don’t think the reality is quite like that. Reality is just messier. There’s nothing quite that simple. Very few things are really that complex.

Everything kind of exists on this continuum somewhere along the way. Even the simplest website has some form of interaction, something appy about it.

Those are the other two terms people use when talking about simple and complex: website and web app, as if you could divide the entirety of the whole World Wide Web into two categories: websites and web apps.

Again, that just doesn’t make sense to me. I think the truth is, things are messier and schmooshier along this continuum of websites and web apps. I don’t get why we even need the separate words. It’s all web stuff.

Though, there is this newer term, “Progressive Web App,” that I kind of like.

Who has heard of Progressive Web Apps? All right.

Who thinks they have a good handle on what a Progressive Web App is?

Okay. See, that’s a lot fewer hands, which is totally understandable because, if you start googling, “What is a Progressive Web App?” you get these Zen-like articles. “It’s a state of mind.” “It’s about rich, native-like interactions, man.”

No. No, it’s not. Worse still, you read, “Oh, a Progressive Web App is a Single Page App.” I was like, no, you’ve lost me there. No, it’s not. Or at least it can be, but any website can be a Progressive Web App. You can elevate a website to be a Progressive Web App.

I don’t mean in some sort of Zen-like fashion. I mean using technologies, three particular technologies.

  1. You make sure that website is running HTTPS,
  2. you have a web app manifest that’s a JSON file with metadata, and
  3. you have a service worker that gets installed on the user’s device.

That’s it. These three technologies turn a website into a Progressive Web App — no mystery about it.
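To give a sense of how little glue code is involved, here’s a rough sketch; the file names are my own placeholders, not anything prescribed.

```js
// The manifest is just a JSON file (e.g. /manifest.json) linked from the HTML head
// with <link rel="manifest" href="/manifest.json">.
// The service worker is registered from your page's JavaScript, if supported:
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
```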

The tricky bit is that service worker part. It’s kind of a weird thing because it’s JavaScript but it’s JavaScript that gets installed on the user’s device and acts like a proxy. It intercepts network requests and can do different things like grab things from the cache instead of going to the network.

I’m not going to go into how it works because I’ve written plenty about that in this book “Going Offline,” so if you want to know the code, you can go read the book.
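Just to give a flavour, though, the proxying part can be surprisingly small. This is a minimal sketch of the idea, not the code from the book:

```js
// serviceworker.js — intercept requests and answer from the cache when we can.
addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      // If we have a cached copy, use it; otherwise fall back to the network.
      return cached || fetch(event.request);
    })
  );
});
```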

I will say that when I first came across service workers, it totally did my head in because this is my mental model of the web. We’ve got the stack of technologies that we’re building on top of, each layer depending on the layer below. Then service workers come along and say, “Well, actually, you could have a website like this,” where the lowest layer, the network, the Internet goes away and the website still works. Mind is blown!

It took me a while to get my head around that. The service worker file is on the user’s device and, if they’ve got no internet connection, it can still make decisions and serve up something like a custom offline page.

Here’s a website I run called huffduffer.com. It’s for making your own podcast out of found sounds. If you’re offline or the website is down, which happens, and you visit Huffduffer, you get this offline page saying, “Sorry, you’re offline.” Not very useful, but it’s branded like the site, okay? It’s almost like the way you have a custom 404 page. Now you can have a custom offline page that matches your site. It’s a small thing, but it can be handy.
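The pattern behind a page like that is small. Here’s a sketch, assuming the offline page lives at /offline.html (a placeholder path, not Huffduffer’s actual setup):

```js
// At install time, put the offline page into a cache.
addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('static').then((cache) => cache.add('/offline.html'))
  );
});

// At fetch time, if a page request fails, serve the offline page instead.
addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request).catch(() => caches.match('/offline.html'))
    );
  }
});
```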

We ran this conference in Brighton for two years, Ampersand. It’s a web typography conference. That also has a very simple offline page that just says, “Sorry. You’re offline,” but then it has the bare minimum information you need about the conference like where is this conference happening; what time does it start?

You can imagine a restaurant website having this, an offline page that tells you, “Here is the address. Here are the opening hours.” I would like it if restaurant websites had that information when you’re online as well, but…

You can also have fun with this, like Trivago. Their site relies on search, so there’s not much you can do when you’re offline, so they give you a game to play, the offline maze to keep you entertained.

That’s kind of at the simplest level of what you could do, a custom offline page.

Then at the other level, I’ve written this book called “Resilient Web Design.” A lot of the ideas I’m talking about here are in this book. The book is a website. You go to the website and you read the book. That’s it. It’s free. You just go to resilientwebdesign.com. I mean free. I don’t ask for your email address and I’m not tracking any information at all.

This is how it looks when you’re online, and then this is how it looks when you’re offline. It is exactly the same. In fact, the moment you visit the website, it basically downloads the whole book.

Now, that’s the extreme example. Most websites, you wouldn’t want to do that because you kind of want the HTML to be fresh. This is never going to get updated. I’m done with this, so I’m totally fine with going straight to the cache and never even going to the network. It’s absolutely offline first. You’re probably going to want something in between those two extremes.

On my own website, adactio.com, if you’re browsing around the website and you’re reading things, that’s all fine. Then what if you lose your internet connection? You get the custom offline page that says, “Sorry, you’re offline,” but then it also shows you the things you’ve previously visited.
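Here’s a sketch of how those previously visited pages become available offline: keep a copy of each page as it’s successfully fetched, and fall back to the cache when the network fails. The cache name is a placeholder.

```js
addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request)
        .then((response) => {
          // Tuck a copy of the page into the cache for later.
          const copy = response.clone();
          caches.open('pages').then((cache) => cache.put(event.request, copy));
          return response;
        })
        .catch(() => caches.match(event.request)) // offline: serve the cached copy
    );
  }
});
```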

You can revisit any of these pages. These have all been cached, so you can cache things as people are browsing around the site. That’s a nice little pattern that a lot of websites could benefit from. It only suffers from the fact that all I can show you is stuff you’ve already seen. You have to have already visited these pages for them to show up in this list. Another pattern that I think is maybe better from a user experience point of view is when you put the control in the hands of the user.

This website, archive.dconstruct.org, this is what it sounds like. It’s an archive. It’s conference talks.

We ran a conference called dConstruct for 10 years from 2005 to 2015. Breaking news; we’re bringing it back for a one-off event next year, September 2020.

Anyway, all the talks from ten years are online here as audio files. You can browse around and listen to these talks.

You’ll also see that there’s this option to save for offline, exposed on the interface. What that does is it doesn’t just save the page offline; it also saves the audio offline. Then, when you’re on an airplane or at the bottom of the ocean or whatever, you can then listen to the things you explicitly asked to be saved offline. It’s effectively a podcast player in the browser.
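Under the hood, a save-for-offline button can be as simple as explicitly adding the page and its audio to a cache when the user asks. A sketch, with a placeholder selector and audio URL:

```js
// In the page: when the user clicks "save for offline", cache the page and its audio.
const saveButton = document.querySelector('.save-for-offline');
saveButton.addEventListener('click', () => {
  caches.open('saved-talks').then((cache) => {
    cache.addAll([
      window.location.pathname,   // the talk's page
      '/audio/example-talk.mp3'   // placeholder URL for the talk's audio file
    ]);
  });
});
```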

You see there’s a lot of things you can do. There are kind of a lot of layers you can build upon once you have a service worker.

At the very least, you can do caching because that’s the stuff we do anyway, like put this file in the cache, your CSS, your JavaScript, your icons, whatever.

Then think about, well, maybe I should have a custom offline page, even if it’s just for the branding reasons of having that nice page, just like we have a custom 404 page.

Then you start thinking, well, I want the adding to home screen experience to be good, so you’ve got the web app manifest.

You implement one of those patterns there allowing the users to save things offline, maybe.

Also, push notifications are now possible thanks to service workers. It used to be, if you wanted to make someone’s life a misery, you had to build a native app to give them push notifications all day long. Now you can make someone’s life a misery on the web too.

There’s even more advanced APIs like background sync where the website can talk to the web server even when that website isn’t open in the browser and sync up information — super powerful stuff.
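As a rough sketch of what background sync looks like (the tag name and the sendOutbox function are hypothetical, and support varies, so feature-detect):

```js
// In the page: ask for a one-off sync, e.g. after queueing something while offline.
navigator.serviceWorker.ready.then((registration) => {
  if ('sync' in registration) {
    registration.sync.register('send-outbox');
  }
});

// In the service worker: the sync event fires when connectivity is back.
addEventListener('sync', (event) => {
  if (event.tag === 'send-outbox') {
    event.waitUntil(sendOutbox()); // sendOutbox() is a hypothetical function
  }
});
```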

Now, the support for something like service workers and the Cache API is almost universal at this point. The support for stuff like background sync and notifications is spottier, not universal, and that’s okay because, as long as you’re adding these things in layers, it’s fine if something doesn’t have universal support, right? It makes something work great but, if someone doesn’t get that, it still works fine. It’s all about building in that layered way.

Now you’re probably thinking, “Ah-ha! I’ve hoisted him by his own petard because service workers use JavaScript. That means they rely on JavaScript. You’ve made JavaScript a single point of failure!” Exactly what I was complaining about with Single Page Apps, right?

There’s a difference. With a Single Page App, you’re relying on JavaScript. The user gets absolutely nothing if JavaScript doesn’t work.

In the case of service workers, you literally cannot make a website that relies on a service worker. You have to make a website that works first without a service worker and then add the service worker on top, because, think about it; the first time anybody visits the website, even if their browser supports service workers, the service worker is not installed. So you have to build in layers.

I think this is why it appeals to me so much. The design of service workers is a layered design. You have to have something that works first, and then you elevate it. You improve the user experience using these technologies but you don’t rely on it. It’s not a single point of failure. It’s an enhancement.

That means you can take any website. Somebody’s homepage; a book that’s online; this archived stuff; something that is more appy, sure, and make it work pretty much like a native app. It can appear full screen, add to the home screen, be indistinguishable from native apps so that the latest and greatest browsers and devices get the best experience. They’re making full use of the newest technologies.

But, as well as these things working in the latest and greatest browsers, they still work in the first web browser ever created. You can still look at these things in that very first web browser that Tim Berners-Lee created at worldwideweb.cern.ch.

WorldWideWeb

It’s like it is an unbroken line over 30 years on the web. We’re not talking about the Long Now when we’re talking about 30 years but, in terms of technology, that does feel special.

You can also look at the world’s first webpage in the first-ever web browser but, almost more amazingly, you can look at the world’s first web page at its original URL in a modern web browser and it still works.

We managed to make the web so much better with new APIs, new technologies, without breaking it, without breaking that backward compatibility. There’s something special about that. There’s something special about the web if you build in layers.

I’m encouraging you to think in terms of layers and use the layers of the web.

Thank you.

Friday, December 13th, 2019

The Origin Story of Container Queries—zachleat.com

Everyone wants it, but it sure seems like no one is actively working on it.

Zach traces the earliest inklings of container queries to an old blog post of Andy’s—back when he was at Clearleft—called Responsive Containers:

For fun, here’s some made-up syntax (which Jeremy has dubbed ‘selector queries’)…

Wednesday, December 11th, 2019

Saron Yitbarek and Jeremy Keith - Command Line Heroes Live Podcast - View Source 2019 - YouTube

Here’s the live podcast recording I was on at the View Source conference in Amsterdam a while back, all about the history of JavaScript.

My contribution starts about ten minutes in. I really, really enjoyed our closing chat around the 25 minute mark.

It was such a pleasure and an honour to watch Saron at work—she did an amazing job!

Monday, December 2nd, 2019

Saturday, November 16th, 2019

Morphosis: Goliath, David, Adam

A biblical short story from Adam Roberts.

The Layers Of The Web

Here are the slides from my opening keynote at Beyond Tellerrand on Thursday. They don’t make much sense out of context.

The Layers Of The Web - Jeremy Keith on Vimeo

Thanks to the quick work of Marc and his team, the talk I gave at Beyond Tellerrand on Thursday was online within hours!

I’m really pleased with how this turned out. I wasn’t sure if anybody was going to be interested in the deep dive into history that I took for the first 15 or 20 minutes, but lots of people told me that they really enjoyed that part, so that makes me happy.

Tuesday, November 12th, 2019

Third party

The web turned 30 this year. When I was back at CERN to mark this anniversary, there was a lot of introspection and questioning the direction that the web has taken. Everyone I know that uses the web is in agreement that tracking and surveillance are out of control. It seems only right to question whether the web has lost its way.

But here’s the thing: the technologies that enable tracking and surveillance didn’t exist in the early years of the web—JavaScript and cookies.

Without cookies, the web was stateless. This was by design. Now, I totally understand why cookies—or something like cookies—were needed. Without some way of keeping track of state, there’s no good way for a website to “remember” what’s in your shopping cart, or whether you’ve authenticated yourself.

But why would cookies ever need to work across domains? Authentication, shopping carts and all that good stuff can happen on the same domain. Third-party cookies, on the other hand, seem custom made for tracking and frankly, not much else.

Browsers allow you to disable third-party cookies, though it’s not yet the default. If enough people do it—and complain about the sites that stop working when third-party cookies are disabled—then maybe it can become the default.

Firefox is taking steps in this direction, automatically disabling some third-party cookies—the ones set by known trackers. Safari is also taking steps to prevent cross-site tracking. It’s not too late to change the tide of third-party cookies.

Then there’s third-party JavaScript.

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default. But I guess the precedent had been set with images and style sheets: they could be embedded regardless of whether their domain names matched yours. Still, this is executable code we’re talking about here: that’s quite a footgun that the web has given site owners. And boy, oh boy, has it been used by the worst people to do the most damage.

Again, as with cookies, if we were to imagine what the web would be like if JavaScript was restricted by a same-domain policy, there are certainly things that would be trickier to do.

  • Embedding video, audio, and maps would get a lot finickier.
  • Analytics would need to be self-hosted. I don’t think that would bother any site owners. An analytics platform like Google Analytics that tracks people across domains is doing it for its own benefit rather than that of site owners.
  • Advertising wouldn’t be creepy and annoying. Instead of what’s so euphemistically called “personalisation”, advertisers would have to rely on serving relevant ads based on the content of the site rather than an invasive psychological profile of the user. (I honestly think that advertisers would benefit from this kind of targeting.)

It’s harder to imagine putting the genie back in the bottle when it comes to third-party JavaScript than it is with third-party cookies. All the same, I wish that browsers made it easier to experiment with it. Just as I can choose to accept all cookies, reject all cookies, or only accept same-origin cookies, I wish I could accept all JavaScript, reject all JavaScript, or only accept same-origin JavaScript.

As it is, browsers are making it harder and harder to exercise any control over JavaScript at all. So we reach for third-party tools. We don’t call them JavaScript managers though. We call them ad blockers. But honestly, most of the ad-blocker users I know—myself included—are not bothered by the advertising; we’re bothered by the tracking. We should really call them surveillance blockers.

If third-party JavaScript weren’t the norm, not only would it make the web more secure, it would make it way more performant. Read the chapter on third parties in this year’s newly-released Web Almanac. The figures are staggering.

93% of pages include at least one third-party resource, 76% of pages issue a request to an analytics domain, the median page requests content from at least 9 unique third-party domains that represent 35% of their total network activity, and the most active 10% of pages issue a whopping 175 third-party requests or more.

I don’t think all the web’s performance ills are due to third-party scripts; developers are doing a bang-up job of making their sites big and bloated with their own self-hosted frameworks and code. But as long as third-party JavaScript is allowed onto a site, there’s a limit to how much good developers can do to improve the performance of their sites.

I go to performance-related conferences and you know who I’ve never seen at those events? The people who write the JavaScript for third-party tracking scripts. Those developers are wielding an outsized influence on the health of the web.

I’m very happy to see the work being done by Mozilla and Apple to normalise the idea of rejecting third-party cookies. I’d love to see the rejection of third-party JavaScript normalised in the same way. I know that it would make my life as a developer harder. But that’s of lesser importance. It would be better for the web.

Thursday, November 7th, 2019

Information mesh

Timelines of people, interfaces, technologies and more:

30 years of facts about the World Wide Web.

Friday, October 18th, 2019

Web talk

At the start of this month I was in Amsterdam for a series of back-to-back events: Indie Web Camp Amsterdam, View Source, and Fronteers. That last one was where Remy and I debuted a talk we’d been working on.

The Fronteers folk have been quick off the mark so the video is already available. I’ve also published the text of the talk here:

How We Built The World Wide Web In Five Days

This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about half way through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.

At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.

But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.

I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.

There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.

We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:

It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.

In the meantime, you can read the text, watch the video, or look at the slides (but the slides really don’t make much sense in isolation).

Wednesday, October 16th, 2019

How We Built The World Wide Web In Five Days

This talk about recreating the first ever web browser was a joint presentation with Remy Sharp, delivered at the Fronteers conference in Amsterdam in October 2019.

Jeremy

Our story begins with the Big Bang.

13.8 billion years ago

This sets a chain of events in motion that gives us elementary particles, then more complex particles like atoms, which form stars and planets, including our own, on which life evolves, which brings us to the recent past when this whole process results in the universe generating a way of looking at itself: physicists.

A physicist is the atom’s way of knowing about atoms.

—George Wald

By the end of World War Two, physicists in Europe were in short supply. If they hadn’t already fled during Hitler’s rise to power, they were now being actively wooed away to the United States.

64 years ago

To counteract this brain drain, a coalition of countries forms the European Organization for Nuclear Research, or to use its French acronym, CERN.

They get some land in a suburb of Geneva on the border between Switzerland and France, where they set about smashing particles together and recreating the conditions that existed at the birth of the universe.

The Synchrocyclotron.

Every year, CERN is host to thousands of scientists who come to run their experiments.

Remy

Fast forward to February 2019, a group of 9 of us were invited to CERN as an elite group of hackers to recreate a different experiment.

The group.

We are there to recreate a piece of software first published 30 years ago. Given this goal, we need to answer some important questions first:

  • How does this software look and feel?
  • How does it work?
  • How do you interact with it?
  • How does it behave?

The software is so old that it doesn’t run on any modern machines, so we have a NeXT machine specially shipped from the nearby museum. This is no ordinary machine. It was one of only two NeXT machines that existed at CERN in the late 80s.

Now we have the machine to run this special software.

By some fluke the good people of the web have captured several different versions of this software and published them on GitHub.

So we selected the oldest version we could find. We download it from GitHub to our computers. Now we have to transfer it to the NeXT machine.

Except there’s no USB drive. It didn’t exist. CD ROM? Floppy drive? The NeXT computer had a “floptical drive”—bespoke to NeXT computers—all very well, but in 2019 we don’t have those drives.

To transfer the software from our machines to the NeXT machine, we needed to use the network.

Jeremy

62 years ago

In 1957, J.C.R. Licklider was the first person to publicly demonstrate the idea of time sharing: linking one computer to another.

56 years ago

Six years later, he expanded on the idea in a memo that described an Intergalactic Computer Network.

By this time, he was working at ARPA: the Department of Defense’s Advanced Research Projects Agency. They were very interested in the idea of linking computers together, for very practical reasons.

America’s military communications had a top-down command-and-control structure. That was a single point of failure. One pre-emptive strike and it’s game over.

The solution was to create a decentralised network of computers that used Paul Baran’s brilliant idea of packet switching to move information around the network without any central authority.

This idea led to the creation of the ARPANET. Initially it connected a few universities. The ARPANET grew until it wasn’t just computers at each endpoint; it was entire networks. It was turning into a network of networks …an internetwork, or internet, for short. In order for these networks to play nicely with one another, they needed to agree on using the same set of protocols for packet switching.

Bob Kahn and Vint Cerf crafted the simplest possible set of low-level protocols: the Transmission Control Protocol and the Internet Protocol. TCP/IP.

TCP/IP is deliberately dumb. It doesn’t care about the contents of the packets of data being passed around the internet. People were then free to create more task-specific protocols to sit on top of TCP/IP.

There are protocols specifically for email, for example. Gopher is another example of a bespoke protocol. And there’s the File Transfer Protocol, or FTP.

Remy

Back in our war room in 2019, we finally work out that we can use FTP to get the software across. FTP is an arcane protocol, but it’s one that will work across the two eras.

We do have to manually install FTP servers onto our machines, though. FTP doesn’t ship with new machines because it’s generally considered insecure.

Now we finally have the software installed on the NeXT computer and we’re able to run the application.

We double-click the shaded-looking, partly hand-drawn icon with a lightning bolt on it, and we wait…

Once the software’s finally running, we’re able to see that it looks a bit like an ancient word processor. We can read, edit and open documents. There are some basic styles and lots of heavy margins. There’s some super weird menu navigation in place.

But there’s something different about this software. Something that makes this more than just a word processor.

These documents, they have links…

Jeremy

Ted Nelson is fond of coining neologisms. You can thank him for words like “intertwingled” and “teledildonics”.

56 years ago

He also coined the word “hypertext” in 1963. It is defined by what it is not.

Hypertext is text which is not constrained to be linear.

Ever played a “choose your own adventure” book? That’s hypertext. You can jump from one point in the book to a different point that has its own unique identifier.

The idea of hypertext predates the word. In 1945, Vannevar Bush published a visionary article in The Atlantic Monthly called As We May Think.

He imagines a mechanical device built into a desk that can summon reams of information stored on microfilm, allowing the user to create “associative trails” as they make connections between different concepts. He calls it the Memex.

Memex

Also in 1945, a young American named Douglas Engelbart has been drafted into the navy and is shipping out to the Pacific to fight against Japan. Literally as the ship is leaving the harbour, word comes through that the war is over. He still gets shipped out to the Philippines, but now he’s spending his time lounging in a hut reading magazines. That’s how he comes to read Vannevar Bush’s Memex article, which lodges in his brain.

51 years ago

Douglas Engelbart decides to dedicate his life to building the computer equivalent of the Memex.

On December 9th, 1968, he unveils his oNLine System—NLS—in a public demonstration. Not only does he have a working implementation of hypertext, he also shows collaborative real-time editing, windows, graphics, and oh yeah—for this demo, he invents the mouse.

It truly is The Mother of All Demos.

Douglas Engelbart has a posse.

39 years ago

There were a number of other attempts at creating hypertext systems. In 1980, a young computer scientist named Tim Berners-Lee found himself working at CERN, where scientists were having a heck of a time just keeping track of information.

He created a system somewhat like Apple’s HyperCard, but with clickable links. He named it ENQUIRE, after a Victorian book of manners called Enquire Within Upon Everything.

ENQUIRE didn’t work out, but Tim Berners-Lee didn’t give up on the problem of managing information at CERN. He thinks about all the work done before: Vannevar Bush’s Memex; Ted Nelson’s Xanadu project; Douglas Engelbart’s oNLine System.

A lot of hypertext ideas really are similar to a choose-your-own-adventure: jumping around from point to point within a book. But what if, instead of imagining a hypertext book, we could have a hypertext library? Then you could jump from one point in a book to a different point in a different book in a completely different part of the library.

The Library Of Babel

In other words, what if you took the world of hypertext and the world of networks, and you smashed them together?

30 years ago

On the 12th of March, 1989, Tim Berners-Lee circulates the first draft of a document titled Information Management: A Proposal.

The diagrams are incomprehensible. But his supervisor at CERN, Mike Sendall, sees the potential. He reads the proposal and scrawls these words across the top: “vague, but exciting.”

Tim Berners-Lee gets the go-ahead to spend some time on this project. And he gets the budget for a nice shiny NeXT machine. With the support of his colleague Robert Cailliau, Berners-Lee sets about making his theoretical project a reality. They kick around a few ideas for the name.

They thought of calling it The Mesh. They thought of calling it The Information Mine, but Tim rejected that, knowing that whatever they called it, the words would be abbreviated to letters, and The Information Mine would’ve seemed quite egotistical.

So, even though it’s only going to exist on one single computer to begin with, and even though the letters of the abbreviation take longer to say than the words being abbreviated, they call it …the World Wide Web.

As Robert Cailliau told us, they were thinking “Well, we can always change it later.”

Tim Berners-Lee brainstorms a new protocol for hypertext called the HyperText Transfer Protocol—HTTP.

He thinks about a format for hypertext called the Hypertext Markup Language—HTML.

He comes up with an addressing scheme that uses Universal Document Identifiers—UDIs, later renamed to URIs, and later renamed again to URLs.

But he needs to put it all together into running code. And so Tim Berners-Lee sets about writing a piece of software…

Remy

At this point, 30 years ago, Tim Berners-Lee’s document is just a proposal. It’s just theory. So he needs to build a prototype to actually demonstrate how the World Wide Web would work.

The NeXT computer is the perfect ground for rapid software development because the NeXT operating system ships with a program called NSBuilder.

NeXT

NSBuilder is software to build software. In fact, the “NS” (meaning NeXTSTEP) can be found in existing software today: you’ll find references to NSText in Safari and Mac developer documentation.

Using NSBuilder, Tim Berners-Lee was able to create a working prototype of this software in just six weeks.

He called it: WorldWideWeb.

We finally have the software working the way it ran 30 years ago.

But our project is to replicate this browser so that you can try it out, and see how web pages look through the lens of 1990.

So we enter some of our blog URLs: https://remysharp.com, https://adactio.com

But HTTPS doesn’t work. There was no HTTPS. There was no HTTP/2. HTTP/1.0 hadn’t even been invented.

So I make a proxy: effectively a monster-in-the-middle attack on all web requests, stripping the SSL layer and then returning the HTML over the HTTP 0.9 protocol.
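To give a sense of the trick, here is a minimal sketch in Node.js (not the project’s actual proxy code; the host is hard-coded purely for illustration). The proxy listens for a raw HTTP 0.9 request, fetches the same path over HTTPS, and writes back nothing but the HTML: no status line, no headers, because HTTP 0.9 has neither.

// Minimal sketch, not the real proxy: accept a raw HTTP 0.9 request,
// fetch the page over HTTPS, and return only the HTML body.
const net = require('net');
const https = require('https');

net.createServer((socket) => {
  socket.once('data', (chunk) => {
    // An HTTP 0.9 request is a single line, e.g. "GET /about/"
    const path = chunk.toString().trim().split(' ')[1] || '/';

    // Host hard-coded purely for illustration
    https.get({ host: 'remysharp.com', path }, (response) => {
      response.pipe(socket); // no status line, no headers: just the body
    }).on('error', () => socket.end());
  });
}).listen(8080);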

And finally, we see…

We see junk.

We can see the text content of the website, but there are a lot of HTML junk tags being spat out onto the screen, particularly at the start of the document.

Jeremy

<h1> <h2> <h3> <h4> <h5> <h6> 
<ol> <ul> <li> <p>

These tags are probably very familiar to you. You recognise this language, right?

That’s right. It’s SGML.

SGML is the successor to GML, which supposedly stands for Generalised Markup Language. But that may well be a backronym. The format was created by Goldfarb, Mosher, and Lorie: G, M, L.

SGML is supposed to be short for Standard Generalised Markup Language.

A flavour of SGML was already being used at CERN when Tim Berners-Lee was working on his World Wide Web project. Rather than create a whole new format from scratch, he repurposed what people were already familiar with. This was his HyperText Markup Language, HTML.

One thing he did add was a tag called A for anchor.

Its href attribute is short for “hypertext reference”. Plop a URL in there and you’ve got a link.

The hypertext community thought this was a terrible way to make links.

They believed that two-way linking was vital. With two-way linking, the linked resource connects back to where the link originates. So if the linked resource moved, the link would stay intact.

That’s not the case with the World Wide Web. If the linked resource moves, the link is broken.

Perhaps you’ve experienced broken links?

When Tim Berners-Lee wrote the code for his WorldWideWeb browser, there was a grand total of 26 tags in HTML. I know that we’d refer to them as elements today, but that term wasn’t being used back then.

Now there are well over 100 elements in HTML. The reason why the language has been able to expand so much is down to the way web browsers today treat unknown elements: ignore any opening and closing tags you don’t recognise and only render the text in between them.
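You can watch that rule in action in any modern browser. A quick sketch (the futuretag element is made up for illustration):

// An unknown tag doesn't break the page: it becomes an HTMLUnknownElement
// in the document tree, and the text inside it still renders.
const doc = new DOMParser().parseFromString(
  '<p>Hello <futuretag>brave new</futuretag> world</p>',
  'text/html'
);

console.log(doc.body.textContent);
// → "Hello brave new world"
console.log(doc.querySelector('futuretag') instanceof HTMLUnknownElement);
// → true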

Remy

The parsing algorithm was brittle (when compared to modern parsers). There’s no DOM tree being built up. Indeed, the DOM didn’t exist.

Remember that WorldWideWeb was a browser that effectively smooshed together a word processor and network requests; the styling method was based (mostly) around adding margins as the tags were parsed.

Kimberly Blessing was digging through the original 7344 lines of code for the WorldWideWeb source. She found the code that could explain why we were seeing junk.

<link rel="..."

In this case, when the parser encountered <link rel="…" it would see the <.

<

“Yes, a tag; let’s slurp it up”.

<li

Then it reads li and the parser is thinking, “This looks like a list item, good stuff.”

<lin

Then it encounters the n (of link) and, excusing the parsing algorithm because it was the first, it aborts the style it was about to apply and promptly spits out the rest of the content on screen, having already swallowed up the first four characters: <lin.

k rel="stylesheet" href="...">
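Here’s a toy version of that failure mode in JavaScript (the original source is Objective-C, and this is only an illustration of the behaviour, not the real code): keep slurping characters after a < for as long as they could still be the start of a known tag, and the moment they can’t, give up and dump the rest of the document as text.

// Toy illustration only: not the original WorldWideWeb parsing code.
// A handful of the tags the first browser knew about, for illustration.
const KNOWN_TAGS = ['li', 'ol', 'ul', 'p', 'h1', 'h2', 'h3'];

function naiveParse(input) {
  let output = '';
  let i = 0;
  while (i < input.length) {
    if (input[i] !== '<') { output += input[i++]; continue; }
    i++; // "Yes, a tag; let's slurp it up"
    let name = '';
    while (i < input.length &&
           KNOWN_TAGS.some(tag => tag.startsWith(name + input[i]))) {
      name += input[i++]; // "li"… looks like a list item, good stuff
    }
    if (KNOWN_TAGS.includes(name) && (input[i] === '>' || input[i] === ' ')) {
      i = input.indexOf('>', i) + 1; // a genuine known tag: skip past it
    } else {
      output += input.slice(i + 1); // the "n" of "link": abort, spit out the rest
      break;
    }
  }
  return output;
}

naiveParse('<link rel="stylesheet" href="style.css">');
// → 'k rel="stylesheet" href="style.css">'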

With that, we made the executive design decision to strip out any elements that were unknown to the original WorldWideWeb browser — link, script, video and img — since of course there was no image support in the world’s first browser.

This is the first little cheat we applied, so that the page would be more pleasing to you, the visitor of our emulator. Otherwise you’d be presented with a lot of scary looking junk.
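The cheat itself can be as blunt as a few replacements applied to the HTML before it reaches the emulator. A rough sketch of the shape of it (not the project’s actual code):

// Drop elements the 1990 parser never knew about before rendering.
const MODERN_TAGS = ['link', 'script', 'video', 'img'];

function stripModernElements(html) {
  return MODERN_TAGS.reduce((out, tag) => out
    // remove paired tags along with everything inside them…
    .replace(new RegExp(`<${tag}[^>]*>[\\s\\S]*?</${tag}>`, 'gi'), '')
    // …and any remaining void or unclosed uses of the same tag
    .replace(new RegExp(`<${tag}[^>]*>`, 'gi'), ''), html);
}

stripModernElements('<link rel="stylesheet" href="style.css"><p>Hello</p>');
// → '<p>Hello</p>'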

So now we have all the reference material we need to be able to replicate this browser:

  • The machine running the original operating system, which gives us colours, fonts, menus and so on.
  • The browser itself, how windows behave, what’s in the menus, what makes the experience unique to that period of time.
  • And finally how it looks when we visit URLs.

So off we go.

🤯

Jeremy

While Remy set about recreating the functionality of the WorldWideWeb browser, Angela was recreating the user interface using CSS.

Inputs. Buttons. Icons. Menus. All with the exact borders, highlights and shadows used in the UI of the NeXT operating system, including having the scrollbar on the left side of windows.

Meanwhile the rest of us were putting together an explanatory website to give some backstory to what we were doing. I spent most of my time working on a timeline showing thirty years before and thirty years after the original proposal for the web.

Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

The WorldWideWeb browser inherited fonts from the NeXTSTEP operating system. It mostly used Helvetica and a font called Ohlfs (created by Keith Ohlfs). Helvetica is ubiquitous but Ohlfs was never seen outside of a NeXT machine.

Our teammates Mark and Brian were obsessed with accurately recreating the typography. We couldn’t use modern fonts, which are vector based. We needed pixeliness.

So Mark and Brian took a screenshot of the NeXT machine’s alphabet. With help from afar from font designer David Jonathan Ross, they traced each square pixel in a vector program and then exported that as a web font. Now we’ve got a web font that deliberately isn’t anti-aliased. It’s a vector format that recreates the look of a bitmap.

Put the pixelly font together with the CSS interface elements and you’ve got something that really looks like the old WorldWideWeb programme.

Remy

This is the final product of our work at CERN that week: a fully working WorldWideWeb emulator giving a reasonably close experience of what it was like to surf the web 30 years ago.

This is entirely in the browser and was written using:

  • React,
  • React Draggable for the windows and menus,
  • React Hotkeys for keyboard combo shortcuts (we replicated the original OS as much as we could),
  • idb-keyval for some local storage,
  • Parcel for bundling.

These tools weren’t chosen particularly because they were the best tools for the job, but rather because they were the tools I knew well enough to speed up my development process.
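As a taste of how a couple of those pieces fit together, here’s a simplified sketch (not the emulator’s real components; the component name and storage key are made up): react-draggable moves a window around by its title bar, and idb-keyval remembers a value between visits.

// Simplified sketch, not the emulator's actual source.
import React from 'react';
import Draggable from 'react-draggable';
import { get, set } from 'idb-keyval';

// A NeXT-style window you can drag around by its title bar.
function BrowserWindow({ title, children }) {
  return (
    <Draggable handle=".titlebar">
      <div className="window">
        <div className="titlebar">{title}</div>
        <div className="content">{children}</div>
      </div>
    </Draggable>
  );
}

// Remember the last document reference opened, so it survives a "reboot".
const rememberAddress = (url) => set('lastAddress', url);
const restoreAddress = () => get('lastAddress');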

We worked hard to replicate the look and feel as much as we could. We even replicated typos found throughout the WorldWideWeb app:

An excercise in global information availability

Why don’t we see how it looks…

Jeremy

There’s a kind of irony in this, in that it relies heavily on JavaScript. In fact, there’s nothing there other than JavaScript. But of course the WorldWideWeb browser couldn’t deal with JavaScript—JavaScript hadn’t been invented yet. So the one URL that definitely wouldn’t work in this emulator is …the emulator itself.

Remy

(Which Jeremy was blaming me for.)

This is what you see when you visit the WorldWideWeb browser for the first time. We can see we are welcomed by the universe of hypertext. We’ve got these menus over here that you can drag off and open as panels (I always thought this was an ordering bug, but the operating system actually works like this).

We’ll go ahead and open the Fronteers website. I go to “Document” and then I go to “Open from full document reference” (because the word URL didn’t exist). I’m going to pop the Fronteers URL in here. And there it is. We’ve got the Fronteers website. Looks pretty good. (One of my favourite UI bits is this scrollbar on the left hand side instead of the right.)

We can follow the links. Actually one of my favourite features that was in this original browser that we replicated was this “Navigate” menu. I’ve just opened the first link in the document, but I can click on “Next”, and “Next” a bunch of times and it will cycle through each one of the links on the page that I launched from and let me read all the pages that the Fronteers site links to (which I really like). I can go backwards and forwards, and so on.

One thing you might have already noticed is that there are no URLs here. In fact, view source was considered a kind of diagnostic option and was very, very tucked away. The reason for this is that URLs—and the source HTML or SGML—were considered ugly and potentially a bad user experience.

But there’s one thing about navigating here that’s different. To open this link, I had to double-click.

Jeremy

The WorldWideWeb browser was more of a prototype than anything else. It demonstrated the potential of the World Wide Web project, but it only worked on NeXT machines.

To show how the World Wide Web could work on any computer, the second ever web browser was the Line Mode Browser, coded by Nicola Pellow. It had a very basic text interface—no clicking on links—but it could be installed anywhere.

Lots of other geeks and nerds were working on their own web browsers, but it was Marc Andreessen’s Mosaic browser that really blew the doors open for the web. It had a nice usable interface, and it (unilaterally) introduced the innovation of images on the web.

Andreessen went on to found Netscape. The World Wide Web took off at an unprecedented rate. Microsoft brought out their Internet Explorer browser and started trying to catch up with Netscape. We had the browser wars. Later we got even more browsers, like Safari and Chrome, while Netscape morphed into Firefox and Internet Explorer morphed into Edge. And the rest is history.

But all of these browsers were missing something that was in the original WorldWideWeb browser.

Remy

The reason I have to double-click on these links is that, when I do a single click, it actually places the cursor. The cursor is blinking there on “Fronteers.” And the reason I can place the cursor is because I can edit the document.

I see Fronteers here is missing a heading. We want to welcome you all:

Welkom

We want to make that a heading. Let’s style that. It’s a heading.

So the browser was meant to edit documents. Let’s put a bit of text here:

Great talks from Remy and Jeremy

(forget about everyone else). Now if I want to create a link, I’ll go ahead and navigate to Jeremy’s site, https://adactio.com. I’m going to do “Link”, then “Mark all”, which is a way of copying the URL to that window. Then I go back to the Fronteers website, select “Jeremy”, and then do “Link to marked.” I can double-click on Jeremy’s name and it will open up his website.

I can save this document as well. I’m going to call it fronteers.html.

Let’s do a hard reboot—a browser refresh. I come back to my machine a couple of days later: “Ah, the Fronteers page!” I’m going to open that again, and it links to that really handsome guy in the sprite shirt. And yes, the links still work.

In fact, this documentation that you see when the WorldWideWeb browser launches was written, styled, and linked using the WorldWideWeb browser. The WorldWideWeb browser was for a web that you could read and write.

But this didn’t survive. It was a hurdle that was too tricky to propose or implement across the different types of servers that existed and for the upcoming browsers that were on the horizon.

And so it wasn’t standardised and doesn’t exist today.

But this is an important lesson from the time: reducing complexity increases the chances of mass adoption.

In the end, simplicity wins.

Jeremy

I think that’s a pattern we see over and over again, not just in the history of the web, but before the web. Simplicity wins.

Ted Nelson famously thinks, to this day, that the World Wide Web is weak sauce. It didn’t try to solve complex problems right out of the gate, like handling micropayments.

As we saw, the hypertext community thought that one-way linking was ridiculous. But simplicity does win out.

Unfortunately that’s why browsers ended up just being browsers. We got some of the functionality back with wikis, content management systems, and social media to a certain extent. But I think it’s still a bit of a shame that when I want to browse a web page, I’m using one piece of software—the browser—but when I want to make a web page, I’m using another piece of software (or multiple pieces of software) to get something on to the web.

I feel like we lost something.

Remy

We head home after a week of hacking.

We were all invited back in March earlier this year for the Web@30 event, celebrating the web and also Sir Tim Berners-Lee.

A NeXT machine from 1989 running the WorldWideWeb browser and my laptop in 2019 running https://worldwideweb.cern.ch

A few of us, Jeremy, Martin, and myself, went back to CERN for the first leg of the event. There was even a video showing off our work as part of the main conference. Jeremy and I even chased Tim Berners-Lee back to London, to the Science Museum, like obsessive web fanboys. It was a lot of fun!

The night before, I got a message from Jean-François Groff, pictured here on the right. JF Groff joined Tim Berners-Lee 30 years ago and created libwww (a precursor to libcurl).

The message read:

Sitting with Tim right now. He loves your browser!

Crushed it.

It’s amazing that we were able to pull this off in a week, just with text editors and information that’s freely available. It’s mind-boggling how much we can do today and how far it can reach. And it all started on that NeXTSTEP machine 30 years ago.

What I really loved about this project was working with this brilliantly old technology, digging around at the birth of browsers and the web.

I wouldn’t be stood here today, if it weren’t for the web.

I wouldn’t even know Jeremy, if it weren’t for the web.

I wouldn’t have a career, if it weren’t for the web.

I loved seeing how such old technology, the original WorldWideWeb browser, was still able to render my blog. Because I put content first and delivered markup from the server, the page rendered. HTML really is backward compatible.

HTML and HTTP are just text. Nothing terribly fancy. Dare I say, beautifully simple, and as we said before, simplicity wins the day.

This same simplicity is what allows us all to have the chance for an equal voice. The web allows us to freely publish our thoughts and experiences. We have to fight to protect that kind of web.

And we’ve got to work at keeping it simple.

Jeremy

When we returned to CERN for the 30th anniversary celebrations, one of the other people there was the journalist Zeynep Tüfekçi.

When @Zeynep met NeXT.

Lou, Zeynep, Tim, Robert, and Jean-François. #Web30

She was on a panel along with Tim Berners-Lee, Robert Cailliau, Jean-François Groff, and Lou Montulli. At the end of the panel discussion, she was asked:

What would you tell the next generation about how to use this wonderful tool?

She replied:

If you have something wonderful, if you do not defend it, you will lose it.

If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.

Defend the simplicity and resilience that’s so central to the web.

I don’t know about you, but I often feel that just trying to make a web page has become far too complicated. But this is complexity that we have chosen with our tools, processes, and assumptions. We’ve buried the magic. The magic of linking web pages together. The magic of a working global hypertext system, where nobody needs to ask for permission to publish.

Tim Berners-Lee prototyped the first web browser, but the subsequent world wide web wasn’t created by any one person. It was created by everyone. That. Is. Magical.

I don’t want the web to become a place where only an elite priesthood get to experience the magic of creation. I’m going to fight to defend the openness of the world wide web. This is for everyone. Not just for everyone to use; it’s for everyone to create.

The lines of code that changed everything.

We construct top-10 lists for movies, games, TV—pieces of work that shape our souls. But we don’t sit around compiling lists of the world’s most consequential bits of code, even though they arguably inform the zeitgeist just as much.

This is a fascinating way to look at the history of computing, by focusing in on culturally significant pieces of code. The whole list is excellent, but if I had to pick a favourite …well, see if you can guess what it is.

Tuesday, October 15th, 2019

Jeremy Keith & Remy Sharp - How We Built the World Wide Web in Five Days on Vimeo

Here’s the talk that Remy and I gave at Fronteers in Amsterdam, all about our hack week at CERN. We’re both really pleased with how this turned out and we’d love to give it again!

Tuesday, October 8th, 2019

Southern Mosaic

A beautiful audio and visual history of the Lomaxes’ journey across the southern United States:

On March 31 1939, when John and Ruby Lomax left their vacation home on Port Aransas, Texas, they already had some idea of what they would encounter on their three-month, 6,502 mile journey through the southern United States collecting folk songs.

Saturday, October 5th, 2019

do you know your tags?

Test your knowledge of the original version of HTML—how many elements can you name?