Tags: transcript

Wednesday, January 22nd, 2020

Building

The opening presentation from the New Adventures conference held in Nottingham in January 2019.

Good morning, everybody. It is a real honour to be here. As Simon said, I was here six, seven, eight years ago attending this conference because it’s such a great conference. I’m kind of feeling the pressure now that I’m up here on the stage speaking at this conference. I’m just glad I’m on first so I can get it over with and then listen to all these great talks.

I’m here today to talk to you …which is kind of weird when you think about it. I mean, first, the fact that it’s me up here on the stage through some clerical error.

But also, I’m going to talk to you. I’m going to vibrate air over my vocal cords and move this big meaty piece of flesh inside my jaw up and down vibrating the airwaves and you’re going to listen to me doing that. It seems like a crazy thing to do except for the fact that, of course, I’ll be using language.

Language

Maybe the great distinguishing feature of our species, language. The great leap forward that happened—who knows—50,000, 100,000 years ago when we, as a species, developed language. With language, by moving those vocal cords and that big piece of flesh in my jaw, we can tell stories. I can recount something that happened in the past.

Perhaps more amazingly, we can imagine things that might come to be. I could tell you something that might happen in the future. So language is a kind of time travel.

It’s all possible because we’re speaking the same codebase. The particular language I’m talking now is English. As long as you can decode English then all these noises I’m making will make sense to you even if there isn’t actually any information in the words. I can say Chomsky’s famous one.

Colourless green ideas sleep furiously.

You can parse that. It doesn’t make any sense, but you can parse it.

Most of the time, the sentences we use also convey some kind of information. Language is not just time travel. Language is also communication.

There can be an idea that’s sitting in my head and I’ll, you know, vibrate the air and vocal cords, flap this big fleshy thing in my jaw around, and transfer the idea from my head to your head. Language is almost like a virus. You can’t help but take the idea in.

I can say to you, “Don’t think of an elephant,” right? Now you’ve just thought of an elephant. It’s the language equivalent of the chicken game which, if you haven’t played before, sorry. You’ve just lost.

Chicken game. Don’t look at this chicken. Game over.

This sentence, “Don’t think of an elephant,” is actually the title of a book by George Lakoff. George Lakoff is a linguist. He’s written many books. He wrote Women, Fire, and Dangerous Things. He wrote this, Metaphors We Live By, because he’s kind of obsessed with metaphors.

We use metaphor all the time in language. We use conceptual metaphors: we take one idea and use the language of that idea to talk about a different idea. The classic example is something intangible.

Let’s say time. How do we talk about time when we can’t touch it, we can’t feel it, it’s intangible? Well, we use metaphor.

We talk about time as though it’s a physical object moving through space. We say time flies or time drags or we talk about time as though it’s a resource. We talk about saving time, wasting time.

You can’t do any of those things with time. That’s not how time works. But the metaphor is very helpful.

The other kind of metaphor is the cognitive metaphor. This is what George Lakoff is interested in, particularly in things like political language. How we frame a debate can tip the scales of how that debate would unfold. If we were about to have a debate about tax relief, well, before the debate has even begun, we’ve framed taxation as something you need relief from and the scales have been tipped.

I’m very interested in this idea of metaphor, analogy, and simile and how we talk about the work we do. It’s such a young industry. What we do is we borrow from other industries. We’re not the first to do this. There’s a great book called Understanding Comics by Scott McCloud. Who’s read Understanding Comics? It’s great.

It’s about comics but, really, it’s just a fantastic book. It’s written as a comic. In it, Scott McCloud makes the point that this new medium, comics, had to kind of borrow from the existing mediums that came before. He points out that this isn’t new. He says:

Each new medium begins its life by imitating its predecessors. Many early movies were like filmed stage plays. Much early television was like radio with pictures.

Right? That it takes time.

Now, this idea of a new medium having to borrow the tropes and the language of the medium that came before, this idea pops up again on the web in this article published in the year 2000 by John Allsopp on A List Apart, A Dao of Web Design. Can I get a show of hands of who’s read A Dao of Web Design? Awesome. You are my people. The rest of you, please read it. It’s such a wonderful article.

It’s crazy that I’m standing up here recommending, “Oh, yeah, you should totally read this article from the year 2000,” but it is relevant. It’s amazingly relevant still today. It’s maybe more relevant today than when it was written. 
In the article, John says:

When a new medium borrows from an existing one, some of what it borrows makes sense, but much of the borrowing is thoughtless, it’s ritual, and it often constrains the new medium. Over time, the new medium develops its own conventions, throwing off existing conventions that don’t make sense.

Now, at the time John was writing this, 2000, of course, we were borrowing from what had come before in the previous medium and that was print. We were trying to figure out how do we get the same level of control that we were used to in the world of print on the web. We did that using clever techniques thanks to David Siegel who wrote this book, Creating Killer Websites. David Siegel, if you don’t know the name, you’re certainly familiar with his work because he’s the guy who came up with the idea of using tables for layout or having a one-pixel by one-pixel spacer GIF.

Hey, listen. That was the only way we could do it back then. They were hacks, yes, but they were necessary hacks. He did actually recant. Years later, he wrote a piece that said, the web is ruined and I ruined it. This may be overstating the case, but you know.

He was pointing out we could use these techniques, these hacks, to constrain the web and make it work like print. We could get pixel-perfect control. John Allsopp, in his article, is kind of pushing against that, going, no, no, no:

The web is a new medium. It has emerged from the medium of printing whose skills and design language and convention strongly influence it. It is too often shaped by that from which it sprang. Killer websites are usually those which tame the wildness of the web, constraining pages as if they were made of paper. Desktop publishing for the web.

So, I mean, John totally acknowledges that there is a lot to learn from this rich, rich history of print and, before print, just writing. This is clearly the second great leap of our species. We had language where we could communicate ideas, tell stories, imagine the future—as long as we’re in the same physical space—and then we came up with writing. Now we can communicate ideas, talk about the future and the past, and we don’t even have to be in the same physical place. Someone who died centuries ago can put an idea in your head by putting language onto a medium like vellum or, later, paper.

You can see this evolution over centuries from illuminated manuscripts to the printing press, Gutenberg, until we get to the 20th Century and we really start to refine the design. We got the Swiss School of Design, the fonts, typography, and the grid system. There’s a lot to learn here.

The Book of Kells. Gutenberg’s bible. Grid Systems.

What’s interesting to me, though, is what seems to be this battle of extremes. We’ve got David Siegel talking about desktop publishing for the web, effectively, and John Allsopp talking about, “No, the web is its own medium. It needs to have its own conventions.”

They seem to be at opposite ends of a spectrum. Yet, they actually have a commonality because, on both sides, when they’re talking about this, they’re talking about websites — web sites. Now, that in itself is a metaphor. You don’t have physical sites on the web. It’s intangible like time. Yet, we chose this metaphor: the idea of a site, a place you go to, like a physical place.

Site is actually a pretty good metaphor, with connotations of a building site, a construction site. That was literally the metaphor in the ’90s: the web is like a construction site. It kind of is constantly under construction. Oh, you want the full nostalgic effect?

Under construction.

There we go. We’re back to Geocities. But I feel like then we decided to grow out of this metaphor and use more grownup metaphors. We got professional. We had to borrow from other industries, other mediums, and here’s one that people are very fond of borrowing: architecture—describing what we do as architecture.

Architects

Whether it’s on the design side or the development side, talking about us as architects. It seems like a very appealing industry to borrow from, which is fascinating. If you ever talk to architects, man, it’s a shitty industry. Spec work, awards, and competition, it’s not a great industry.

But we seem to hold it up as, like, “Oh, yeah, we’re like architects because architects are awesome.” I think of Hollywood because every Hollywood movie that has an architect in it, the architects are always really nice people. They’re always like the protagonist, never the antagonist. The architect is never the villain.

It’s fair enough. It’s fair enough to borrow things from something like architecture. For example, I know plenty of designers who would say that this book is the best book about UX that they’ve ever read, 101 Things I Learned in Architecture School by Matthew Frederick. It was published in 2007. It’s not written for UX designers. It’s not written about the web, but there are lessons in there that are directly applicable.

There are other works from the world of architecture that have definitely influenced the work we are doing today like the classic from Christopher Alexander, A Pattern Language. Now this—I say classic rightly—this is a classic book. A classic book is a book everyone has heard of and nobody has read.

That is certainly the case here. Published in 1977, and it influenced lots of people doing things in the digital space. Ward Cunningham, the inventor of the wiki, he said, yeah, he was really influenced by A Pattern Language.

The idea of a pattern language is architectural, but it’s about breaking things down into components whose parameters you could change, components that could be used in public spaces, buildings, things like that. It’s a modular approach. Later on, in the software world, the Gang of Four wrote Design Patterns: Elements of Reusable Object-Oriented Software, and they were directly influenced by Christopher Alexander, this idea of a pattern language, components, patterns, modularity.

What’s interesting is there’s another book by Molly Wright Steenson (you may remember her as a blogger, Girl Wonder) who worked in the world of architecture and has written a book about the influence of architects and designers on the digital space. Richard Saul Wurman and information architecture: there’s a very direct metaphor there. But also Christopher Alexander.

She points out, actually, the funny thing is, he’s had way more of an influence in the digital space than he ever had in architecture. Most architects don’t like him. They think he’s a bit preachy. But his influence in the digital space is massive. Here I am talking about modularity, components, and patterns. Well, I mean, that is so hot right now. Design systems, we’re breaking things down into patterns. 
In fact, I ended up organizing a conference in 2017 purely about design systems, pattern libraries, style guides, all this stuff, called Patterns Day. It was great. We had these wonderful speakers. Jina Anne was there, Rachel Andrew, Alla Kholmatova, Alice Bartlett. It was great.

But, by the end of the day, I was kind of half-joking, saying we should have had a drinking game where, every time someone referenced Christopher Alexander, we had to take a drink, because his spirit loomed large over this. Actually, the full rules of the drinking game I came up with afterward were: any time someone references Christopher Alexander, you take a drink. Any time someone says Lego, you take a drink. Any time someone says that naming things is hard, take a drink. Any time someone says atomic or atomic design, take a drink. Any time someone says Bootstrap, you puke the drink back up.

A Pattern Language is a work of architecture that directly not just influenced but is still influencing our work today; the idea of breaking things down into components to reuse.

Now, there’s another work from the world of architecture that has a big influence on me. It’s a classic book, again, How Buildings Learn. It’s the best book I’ve never read, published in 1994, by Stewart Brand. There was also a TV series that went with this that’s pretty fascinating.

In this, he talks about the work of a British architect named Frank Duffy and Duffy’s idea of something he called shearing layers. What Duffy said was that a building properly conceived is several layers of longevity. He kind of broke these down. You’ve got the site that the building is on. We’re talking about geological time scales.

Then above that, the structure you hope will last for centuries. Then you’ve got the infrastructure inside the building that you might have to swap out every few decades. Change the plumbing. Then you’ve got the walls and the doors. You can change them every so often until you get into the room. You’ve got furniture, which you can move on a daily basis.

The time scales get faster as you move inward. He diagrammed it like this. This is shearing layers diagrammed for the building. I find this really interesting, this idea of different time scales.

Shearing layers.

But there’s another factor here I’m kind of fascinated by, which is that each layer depends on the layer below. You can’t have a structure until you’ve got a site to build on. You can’t have furniture inside a room until you’ve got the room. You need to have the walls there. Each layer is building on top of what’s come before. You can’t jump straight ahead to furniture without first having all those other layers.

Now, this reminds me of another idea that the writer Steven Johnson talks about a lot in his work, for example, this book, Where Good Ideas Come From. This is the idea of the adjacent possible, that certain inventions leap forward that can’t happen until other things have happened before them.

There’s a reason why the microwave oven wasn’t invented in medieval France. Too many other things had to be invented first before something like the microwave oven becomes inevitable.

Everything we do is kind of built on this idea of the adjacent possible because businesses and services on the web are on top of a whole bunch of layers of adjacent possibilities. You can’t have Twitter, Facebook, or Wikipedia until the web exists. The web itself is built on all of these layers that have to happen first.

We have to have the Industrial Revolution. We have to have electricity. Then somebody has to create circuitry. We have to get to the idea of having computers and then networked computers, something like the Internet. Then the web becomes possible. Once the web is possible, then all these businesses on top of the web become possible.

This idea of the adjacent possible, the shearing layers, they kind of fascinate me because I’m seeing a parallel there.

Now, Stewart Brand, who wrote about shearing layers and architecture, he revisited this idea of shearing layers and took them out from the world of architecture in a later work called The Clock of the Long Now. Stewart Brand is one of the founders of the Long Now Foundation. If you haven’t heard of it, it’s an organization dedicated to long-term thinking. I’m a card-carrying member. The card is designed to last for a few thousand years as well.

They’re currently building a clock that will tell time for 10,000 years. Brian Eno has written an algorithm for the chimes so that when it chimes once a century, it will never be quite the same chime. It’s encouraging long now thinking.

In this book, the full title of the book being The Clock of the Long Now: Time and Responsibility: The Ideas Behind the World’s Slowest Computer, he extrapolates shearing layers into something he calls pace layers. If you take the shearing layers model and look around you, it’s everywhere. It’s kind of like systems thinking, the Donella Meadows idea that systems are everywhere.

Pace layers.

It’s kind of true. You look around and these pace layers, shearing layers applied to the real world, are everywhere. The example he gives is our species. If we look at the human race, we have these different time scales. The slowest is our physical nature as in our DNA, our physiological nature. That takes millennia to change. Physiologically, there’s no difference between a caveman and a spaceman.

Above that, you’ve got culture. This takes centuries, maybe longer, to accumulate over time.

Then systems of governance; not governments — governance. How are we going to run our societies?

Then infrastructure: you want that to move faster, but not too fast or it could be very disruptive. 
Then you get into commerce, trading. Very fast-moving.

Then, finally, you’ve got fashion, which is super-fast. By fashion, he means things like popular music, anything that’s supposed to move fast. If fashion moved slowly, that wouldn’t be a good thing. It’s meant to move fast. It’s meant to try things out. “What about this? No, what about this? Try this.” Right? You don’t want that for the things further down.

He’s mapped this onto these layers. From shearing layers, we go to pace layers. They have different timescales.

I’m talking about the difference between these really fast layers at the top, you know, “What about this? Try this? Today, we’re doing that,” compared to the really slow layers at the bottom that move slowly and are resistant to change.

He says:

Fast learns but slow remembers. Fast proposes and slow disposes. Fast is discontinuous but slow is continuous. Fast and small instructs slow and big by accrued innovation and occasional revolution, and slow and big controls small and fast by constraint and constancy. Fast gets all our attention, but slow has all the power.

Now, once I was exposed to this idea and this virus had landed in my head, I found that I couldn’t get it out of my head. I started seeing pace layers everywhere. At Clearleft, where I work, it’s a running joke. On every project we have a kickoff, and it’s like, what’s the time to pace layers? How long will it be before someone makes a pace layer analogy? It’s like my brain has now been rewired to see pace layers everywhere.

It’s like, you know, the first time that someone points out the arrow in the FedEx logo. There was your life before that and there’s your life after that.

You’ve all seen the arrow in the FedEx logo. Yeah.

What about Toblerone? You’ve all seen the bear? Ah, yeah! Right? You will never be able to unsee that.

Consider the duck.

It’s a perfectly normal, ordinary duck. Agreed? But then your brain is exposed to the idea that all ducks are actually wearing dog masks.

All ducks are actually wearing dog masks. Now, when I show you the same picture of the same duck—

—you will never be able to unsee that. That’s how my brain feels when it comes to pace layers. I see them everywhere. It’s like the crazy wall part of the serial killer’s lair in the murder mystery. It’s just pace layers.

I couldn’t help but apply pace layers to the work we do mapping our medium to pace layers. Let’s try it with the World Wide Web.

The layers of the web.

Well, we build on top of the Internet. We can’t have the web before having Internet. At the very bottom layer, you’ve got the protocols of the Internet itself, you know, TCP/IP, which have been pretty much unchanged for decades. They were there from the ARPANET before the Internet. It’s a good thing that they’re unchanged. You would not want to be swapping out that low layer very quickly.

Above that, we have all the different protocols we use: protocols for email, protocols for file transfer, and protocols for the World Wide Web, HTTP, the hypertext transfer protocol. Now, this has evolved over time. We now have HTTP/2, but it’s been a slow process and that feels right. Again, we shouldn’t be swapping it out too quickly, but it’s a bit faster moving than the Internet protocols. 
On top of HTTP, we can put our URLs. Now, I would love it if URLs were right down at the bottom layer and they were permanent and they never changed and they never went away. That is the web I want, but I must acknowledge that, alas, you have to work hard to keep URLs alive. They do change. They do move. They do get destroyed, which is a bit of a shame, but we can work at it, people. We can work on keeping our URLs alive.

What we put at those URLs, at the simplest level, is HTML. It was there from the start. From day one of the web, HTML was there and it’s still there today, but it’s evolved. It’s changed over time. Initially, HTML had 21 elements and now it’s got 121 elements, so it’s evolved.

But it feels like you can keep up with the pace of change. The last big evolution of HTML was HTML5, around 2010. We do get new editions every now and then, but it’s fine. We can keep up with it.

Then CSS. CSS changes maybe more often, definitely more rapidly, than HTML. That feels like a good thing. We kind of want more. Give us some more CSS and now we’ve got Grid and we’ve got Flexbox. We’ve got all these great, new CSS things. Custom properties.

I don’t feel too overwhelmed by that. I still feel like, “Oh, no, this is good. We’ve got new CSS. I’m feeling I can keep on top of this, you know, read the right articles, read the right books, try them out. It’s fine.”

Then there’s the JavaScript ecosystem.

Specifically, the ecosystem, not the language, because the JavaScript language itself doesn’t actually change that often. ES6 or ES2015, whatever we’re calling the evolutions of the language, they’re not so rapid that we’d get overwhelmed. But the language ecosystem, the culture of JavaScript, that feels overwhelming to me. Right? Since I’ve been speaking up here, two new JavaScript frameworks have been released.

The pace, I constantly feel like I’m falling behind like, “Oh, I haven’t even heard of this new thing that apparently everybody is using.”

Does anyone else feel overwhelmed by this pace of change? Okay, good. Keep your hands up for a sec and just look around. All right? You are not alone. This turns out to be normal.

But here’s the thing. By mapping these different rates onto this model of pace layers, I actually start to feel better about this because let’s say the JavaScript ecosystem is fashion: “It’s going to do this. No, no, today we’re doing that. Try this. Try that.”

Whereas, “Oh, okay. It’s supposed to move fast. It would be bad if it moved slow. It’s meant to be trying stuff out. We see what sticks.”

With fashion, the best of pop music will probably last and find its way down the layers into culture, a slower pace layer. With the JavaScript, the patterns that work in JavaScript may find their way down into the slower moving layers.

To give you an example, when JavaScript was first invented—I’m showing my age here—I remember the common use cases were image rollovers and form validation. Mousing over something and changing how it looks: we’d use JavaScript for that. If someone is filling in a form and there’s a required field, we’d use JavaScript to make sure that required field was filled in.

These days, we wouldn’t even use JavaScript for either of those. We’d use CSS to do rollovers. We’d use HTML to add just one required attribute. The pattern, it stuck. The spaghetti stuck to the wall and it moved down the layers into something more stable.

That’s what JavaScript is kind of supposed to do. When we were trying to do responsive images, we had JavaScript solutions until we got something further down the stack in HTML.
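
To make that migration concrete, here’s a rough sketch (none of this is from the talk’s slides; the class names and image file names are invented) of those same patterns living lower down the stack today:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Patterns that moved down the layers</title>
  <style>
    /* Rollovers: once a JavaScript image-swap, now one CSS declaration. */
    a.nav-link:hover {
      background-color: #005a9c;
      color: #fff;
    }
  </style>
</head>
<body>
  <a class="nav-link" href="/about/">About</a>

  <!-- Form validation: once a JavaScript job, now a single HTML attribute. -->
  <form action="/subscribe" method="post">
    <label for="email">Email</label>
    <input type="email" id="email" name="email" required>
    <button>Subscribe</button>
  </form>

  <!-- Responsive images: once a JavaScript hack, now srcset in HTML. -->
  <img src="photo-small.jpg"
       srcset="photo-small.jpg 480w, photo-large.jpg 1200w"
       sizes="(min-width: 40em) 50vw, 100vw"
       alt="A photo that adapts to the width of the viewport">
</body>
</html>
```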

I do feel overwhelmed by the pace of change. But I’m starting to feel a little better about feeling overwhelmed, that it’s okay. JavaScript is meant to feel overwhelming. It’s where we try stuff out. It’s where stuff moves fast.

Now, the other thing I realized by mapping our technology stack of the web onto this pace layer model is that this is how I build. When I’m building a website, I pretty much start at the third layer. I don’t worry about whether the Internet is on.

I start with URLs. I think URL design is a really good place to start designing. It is a design discipline, a neglected one, but it is design. Then I think about the content and then structure that content using the best available markup in HTML. I think about the presentation, maybe on a small screen first, and then the presentation on larger screens, using CSS. Then I start thinking about extra behaviors that I can’t get with HTML and CSS, so I reach for JavaScript to add those extra behaviors.
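
As a minimal sketch of building in that order (the markup, class name, and script file here are invented for illustration, not taken from the talk): structured content comes first, small-screen presentation is the default, larger-screen presentation is layered on with a media query, and behaviour is added last.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Building in layers</title>
  <style>
    /* Presentation for small screens first: a single column by default. */
    .articles { display: block; }

    /* Then layer on presentation for larger screens. */
    @media (min-width: 40em) {
      .articles {
        display: grid;
        grid-template-columns: repeat(3, 1fr);
        gap: 1rem;
      }
    }
  </style>
</head>
<body>
  <!-- Structured content comes before any styling or scripting. -->
  <main class="articles">
    <article><h2><a href="/posts/one/">Article one</a></h2></article>
    <article><h2><a href="/posts/two/">Article two</a></h2></article>
    <article><h2><a href="/posts/three/">Article three</a></h2></article>
  </main>

  <!-- Extra behaviour last: the things HTML and CSS can't do on their own. -->
  <script src="/js/enhancements.js" defer></script>
</body>
</html>
```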

This seems to me to make sense as a way of building on the web because it maps to the structure of the pace layers of the web. But it’s also a testament to the flexibility of the web that you don’t have to build this way. If you don’t want to build in this layered way, you don’t have to.

In fact, you can build like this. You can put something that’s on the Internet, but you just do everything in JavaScript. URL routing, let’s do that in the browser in JavaScript. The Document Object Model, let’s generate that in the browser in JavaScript. CSS, apparently we’re doing it in JS now.

Everything in JavaScript. This is an absolutely legitimate choice. You can choose to build things on the web like this. The web allows this. Again, it’s a testament to the flexibility of the web.

Now, personally, I don’t build like this and this doesn’t feel quite right to me. It doesn’t feel like it maps to the web too well. It kind of turns it into this all or nothing situation where, as long as we’ve got JavaScript, everything is going to be great. But if we don’t, there’s nothing.

You end up with this situation where we’ve turned what we’re building on the web into a binary situation. Either it works great or it just doesn’t work at all. There’s this kind of single point of failure there with the JavaScript.

Now, this model makes complete sense in other mediums. I think other mediums have influenced our thinking on the web. Maybe we’ve borrowed the metaphors of these other mediums.

For example, if you’re building a native app, this makes complete sense. If you’re building an iOS app and I have an iOS device, it works great. I get 100% of what you designed. But if you build an iOS app and I have, say, an Android device, it doesn’t work at all. You can’t install an iOS app onto an Android device. Those are your options: either it works great or it doesn’t work at all. This mental model makes complete sense in that field.

On the web, because we can have this layered approach, that means we can build like this. We can go from something that doesn’t work at all to something that just about works—maybe it’s just text on a screen—to something that works fine—maybe it’s missing a bunch of behaviors, but the user can accomplish what they want to do—to something that works well, but maybe the latest and greatest browser APIs aren’t supported by a particular browser—and then to something that works great like the latest browser running the best device, great network.

Building in layers.

Most people are going to be somewhere on this continuum. Maybe nobody is going to get 100% of what you hope they get, but no one is going to get zero percent either as long as you’re building in this way, as long as you’re building with the grain of the web, building in layers, one thing on top of the other.
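
Here’s one hedged sketch of what that continuum can look like in practice (the “load more” link and its markup are invented for illustration): the link in the HTML works for everyone, and browsers that support the newer APIs get an enhanced experience layered on top.

```javascript
// The underlying HTML is something like:
// <a class="load-more" href="/page/2/">More articles</a>
const link = document.querySelector('a.load-more');

// Only enhance if the link exists and the browser supports fetch.
if (link && 'fetch' in window) {
  link.addEventListener('click', async (event) => {
    event.preventDefault();
    try {
      // Fetch the next page (a real site might request just a fragment)
      // and insert its content before the link.
      const response = await fetch(link.href);
      link.insertAdjacentHTML('beforebegin', await response.text());
    } catch (error) {
      // If the enhancement fails, fall back to ordinary navigation.
      window.location.href = link.href;
    }
  });
}

// Browsers that never run this script simply follow the link:
// not 100% of the experience, but never zero percent either.
```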

I’m going to quote Ethan here. Hi, Ethan. Ethan said:

I like designing in layers. I love looking at the design of a page, a pattern, whatever, and thinking about how it would change if, say, fonts aren’t available or JavaScript doesn’t work or if someone doesn’t see the design as you and I might and is having the page read aloud to them.

In a way, this is a way of busting assumptions, the what-ifs. What if something isn’t supported? By building in a layered way, it’s okay. Everything will fall back to the layer below, the adjacent possible.

Now, Ethan, of course, we all know from this article, Responsive Web Design, published on A List Apart. When was that? 2010. My God, nine years ago. Ten years after John Allsopp published A Dao of Web Design on A List Apart. One of the first things Ethan does in this article is to reference A Dao of Web Design. You could say that Ethan was building on top of that foundational layer that was set by John Allsopp.

Architecture again. Responsive web design. The reason why Ethan chose that term was because there was this idea in architecture called responsive architecture about buildings that could respond to the conditions of the people in the buildings. That made a really good metaphor for talking about the web on large screens, small screens, and everything in between.

This architecture thing, as a metaphor, it’s not bad. We can learn from it. I think, just be careful not to take it too far.

It’s not the only metaphor we use. Here’s another one. When we don’t talk about ourselves as architects, we’re engineers. Yeah.

Engineers

It sounds good. This one predates the web. We’ve been talking about the idea of software engineering for a long time. I’m very partial to this term: software engineering. Not so much for the term itself, not that I think it’s a particularly good metaphor, but for where it comes from, which is fricken’ awesome.

Margaret Hamilton.

The term “software engineering” comes from Margaret Hamilton. Margaret Hamilton was in charge of the onboard flight software on the Apollo moon landing. This is engineering. That is the code base she’s standing next to there, which would then literally be woven into the computers onboard Apollo.

But as a metaphor, engineering, well, there’s a whole bunch of different kinds of it. What kind of engineer are we talking about here? Is it material engineering, structural engineering, chemical engineering, aeronautical engineering? They all have commonalities. One being, as an engineer, you’ve got to know two things. There’s the materials you’re going to be working with and the tools you’re going to use to shape those materials.

Now, I think that we can use that metaphor and apply it to the web. I would say the materials on the web are HTML, CSS, and JavaScript, hopefully in that order. Then we’ve got the tools we use to design for the materials of the web. 
Now, the most obvious tools we could think of are graphic design tools. We started using Photoshop even though that was never intended for Web design. Since then, we’ve evolved and we’ve got tools that are much more focused on the web, things like Sketch, Figma, and all this kind of stuff.

These are obvious tools we use to build the web, but there are less obvious tools. If you were working on a Web project, these tools also get used. You’re going to be talking over email. You’re going to be communicating over Slack, organizing spreadsheets, spreadsheets people.

We talk about these as productivity tools, though sometimes I know it feels like they are reducing productivity rather than increasing it. But it’s kind of a misnomer when you think about productivity tools. All tools are productivity tools. That’s literally what tools are for: to make you more productive.

I think we should acknowledge that these are legitimate design tools. You can’t launch a project without putting in some time in some kind of communication tool.

Then when it comes to the actual welding of these materials, we’ve got a whole bunch of tools that sit on our machines or sit on our web servers. Now I feel like I’m back up at that top layer of the pace layers and I’m getting overwhelmed with the task runners, the build tools, the toolchains, the transpilers, and the preprocessors. Apparently, it changes every week. Oh, you’re still using Grunt? No, we’re using Gulp. No, Webpack. That’s what’s so overwhelming.

It also feels like it’s quite complicated. This is complicated stuff, but it’s like we’ve chosen it. We’ve chosen to make our lives complicated, in a way.

I’ll tell you what it reminds me of. Do you remember that startup, Juicero?

Where they sold a big, expensive, complicated machine to make juice, but you had to buy exactly the right juice packets to put in the big, expensive machine to make the juice. It works. It works great. The big, expensive, complicated machine does its job but somebody noticed that you could actually just take the packets and squeeze them by hand and it still produces juice. I’m just saying that squeezing by hand is still an option. You can build websites by squeezing by hand. (I think this metaphor has been stretched just about as far as it can go, so I will leave it there.)

There’s this other kind of spectrum, I guess, between the materials and the tools and then the people that will be exposed to the materials and the tools. They kind of fall into two categories: the engineers themselves and the end-users.

When we’re evaluating our tools and asking, “Is this the right tool to use?” we should evaluate it from our perspective, yes, “Is this going to be a helpful tool to me as an engineer?” if we’re using that metaphor. But I strongly feel we should also ask, “Is this going to be useful for the end-user?”

If those two things come into conflict, what then? Do we privilege our own experience over the user experience? I would hope not, but I worry that, in a lot of tool choices, we do exactly that, particularly with stuff that gets sent down to the browser. “Oh, I’m going to use a CSS framework.” Great. Good for you. That’s helping you out, but now the user has to pay the cost of the benefit that you get from that CSS framework because they have to download the whole CSS framework.

Sometimes these things come into conflict, and I feel like maybe we privilege the developer experience over the user experience, and that worries me. Other times, they don’t come into conflict. All those tools like preprocessors and task runners that just sit on your own computer have no direct effect on the end-user experience. Frankly, use whatever you like; it makes no direct difference to the end-user experience.

When we’re evaluating tools, there are all these questions to ask. Who benefits from the tool? If I choose to use this tool, will it benefit the users? Will it benefit the engineers? Neither? Both?

There are other questions we ask like, well, just how good is this tool? To evaluate that, we ask: how well does it work? Does this tool do what it says it will do, and do it well?

This, of course, is a completely valid question to ask but there’s a corollary that I think is more valid and that’s to ask not just how well does it work but how well does it fail?

What happens when something goes wrong?

This is exactly why I think this layered approach makes sense because, if you build in this layered way, each one of these layers can fail well. If you build like this, then JavaScript can fail well. What if something goes wrong and you’ve got an error in your JavaScript? You fall back to something that still works. Not as great as it worked before, but it still works. It fails well.

These technologies on the web fail well by design. CSS fails well. Use a CSS property or a CSS value the browser doesn’t understand and the browser just ignores it. It fails well.

HTML: Make up an HTML element. Throw it into a webpage. The browser doesn’t throw an error. The browser doesn’t stop parsing the webpage. It just ignores it and moves on. It fails well.
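
If you want to see that failing well for yourself, here’s a tiny sketch you could paste into a webpage; the CSS property and the element are deliberately made up, and the paragraph still renders fine:

```html
<style>
  p {
    color: darkslategray;
    text-sparkle: rainbow; /* Unknown property: the browser drops this one
                              declaration and carries on with the rest. */
  }
</style>

<sparkle-text>
  <p>The browser doesn't recognise the sparkle-text element, so it ignores
  the tag, keeps parsing, and still renders this paragraph.</p>
</sparkle-text>

<!-- Contrast that with JavaScript, where a single uncaught error stops the
     rest of that script from running. -->
```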

It actually makes sense to not jump ahead to the powerful stuff, to the top of the pace layers, but to try and build in layers and stay low for as long as possible. This is actually a principle, a principle that underlies the architecture of the web itself called the Principle of Least Power. You should choose the least powerful language for a given purpose, which seems really counterintuitive.

Why would I choose the least powerful language to do something? Surely, I want more power. The idea here is that power comes at an expense. Power comes at the expense of complexity and fragility. A more powerful technology is maybe more likely to fail badly.

Derek Featherstone put it well. He said:

In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof. It just works.

The example there was rollovers. How are you going to do rollovers? Do it in JavaScript? No, do it in CSS. :hover - done. Right? Oh, you need to make an interactive button? Use the button element. Be lazy.
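
As a small sketch of being lazy in that sense (the class name is invented for illustration), compare the two ways of making an interactive control and a rollover:

```html
<!-- The powerful, fragile way: a div that needs JavaScript, ARIA, and key
     handling just to behave like a button, and does nothing if the script
     never arrives. -->
<div class="menu-toggle" role="button" tabindex="0">Menu</div>

<!-- The lazy, robust way: solve it lower in the stack. Focusable, keyboard
     operable, and announced correctly, with no script required. -->
<button type="button" class="menu-toggle">Menu</button>

<style>
  /* And the rollover, lower in the stack again: CSS instead of JavaScript. */
  .menu-toggle:hover,
  .menu-toggle:focus {
    background-color: #005a9c;
    color: #fff;
  }
</style>
```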

This makes a lot of sense, the Principle of Least Power. It makes a lot of sense to me on the web, especially when you combine it with a universal law that definitely applies on the web, and that’s Murphy’s Law:

Anything that can possibly go wrong will go wrong.

This comes directly from the world of engineering. Edward Aloysius Murphy Jr. was an aerospace engineer. It’s because he had this attitude that he never lost anybody on his watch.

I think we tend to dismiss things going wrong as edge cases. We kind of assume the average output. Other industries, when they’re making cars, they test them. They strap crash test dummies in. They smack them into walls at high speed.

To be fair, a lot of the reason why they have to do that is because of regulation. They didn’t necessarily choose to do it, but still. Can you imagine if they went, well, actually, we realize that most people are going to drive cars on roads and people driving into walls is an edge case, so we’re not going to worry too much about that?

Now, obviously, you want to hope for the best but you should prepare for the worst. Trent Walton said:

Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability.

The web’s inherent variability, that gets to the heart of it.

What David Siegel was battling with his pixel-perfect tables was the web’s inherent variability. What John Allsopp was calling for was for us to embrace the web’s inherent variability. It’s a feature, not a bug.

Are we engineers? Can we call ourselves engineers? Well, let me tell you something from the world of structural engineering.

This is the plan for the Quebec Bridge in Canada, a cantilever bridge. Construction started at the start of the 20th Century. There was a competition to see who would get to design and build the bridge because that’s the way the industry works.

The engineer in charge was named Theodore Cooper. Now, originally, the bridge was meant to be 490 meters long, but Theodore Cooper changed the specification to make it 550 meters long, mostly because, up in Scotland, the Firth of Forth Bridge was the longest cantilever bridge in the world at the time. He wanted this bridge to exceed that, so he made the bridge longer, but he did not recalculate the already high stresses being placed on the material of the bridge.

Oh, also, Theodore Cooper refused to work on site. He was down in New York, supposedly overseeing construction from there. And when it was proposed that somebody should check his calculations, he took that as a personal affront and said, “No, no, no. No, no, that won’t work,” so there were no code reviews happening on this project.

Now, someone was on site: a young engineer named Norman McLure. By August 6th, 1907, he had started to notice that the steel was bending, coming under a lot of stress. Then again, on August 27th, it had got worse.

Cooper was notified down in New York. He did send a telegram back to Quebec. He said, “Place no more load on Quebec bridge until all facts considered - stop.” But he was only implying that the work should stop. He never explicitly said, “Stop the work right now,” so the telegram was ignored and work continued.

On August 29th, 1907, the bridge collapsed. It was shortly before the end of the day. The whistle was just about to blow to signal the end of the working day. There were 86 workers on the bridge and 75 of them died.

Now, something started happening in Canada a few years after this, by 1925. Engineering schools in Canada started holding private ceremonies around graduation time. This was a ceremony that was separate from qualifications. This wasn’t about whether you were qualified to be an engineer. This was called The Ritual of the Calling of the Engineer. You would speak an obligation penned by Rudyard Kipling, which I won’t repeat here because it’s meant to stay within the confines of this ritual.

You would also receive an iron ring. This iron ring would be a symbol of pride of being an engineer, but also a symbol of humility. For the longest time, the myth persisted that the iron itself was made from the steel in the Quebec Bridge. It’s not true, but the Quebec Bridge certainly looms over the idea of the iron ring. You’d wear it on the little finger of your working hand, so it would brush against the paper or the computer keyboard during your working day as a constant reminder of your responsibility as an engineer.

The iron ring.

When we call ourselves engineers, I do have to ask, have we earned it? Do we take our responsibility seriously?

Maybe we don’t call ourselves engineers, but then what do we call ourselves? Does it even matter?

Builders

Well, we could go back to that original metaphor from the ’90s, under construction. Maybe we’re builders. We build things. The web is under construction. We’re the ones constructing it. It’s not so bad, you know, to be the ones literally building the web. It’s kind of awesome when you think about it.

Christopher Alexander, when he was talking about his reason for coming up with A Pattern Language, was because he said:

Most of the wonderful places in the world were not made by architects but by the people.

Maybe we’re at the bottom of the layer stack here as workers just building the web, but maybe we also have all the power — more power than we realize. Our collective power is greater than anything any architect could wield.

Yeah, maybe we’re builders. Maybe we’re bricklayers. I know Simon comes from a long line of bricklayers. It is a noble profession. Think about what our building blocks are, the building blocks of the World Wide Web.

The World Wide Web, I think, is the next great leap forward. We had language, writing, the printing press, and now hypertext in the form of the World Wide Web. Who gets to build it? We do, with this kind of building block: the URL, a link. What an amazing building block that is.

I can make a webpage and put two links on it linking to two different things. That combination of those two links has never existed before in the history of the web. We’ve created something new, link by link, building block by building block, building in layers.

I’m reminded of an apocryphal story, maybe from medieval times—who knows—about a traveler coming across three workers. All three workers are doing the same thing. They’re building. They’re moving stones. They’re putting stones one on top of the other.

The traveler says to the first builder, “What are you doing?”

He says, “Oh, I’m moving stones.”

He says to the second builder, “What are you doing?” 
He says, “I’m building a wall.”

He says to the third builder, “What are you doing?”

He says, “I’m building a cathedral.”

They’re all doing the same task but thinking about it in different ways. Maybe that’s what we need to do. Forget about labels, metaphors, architecture, engineer, building, whatever. Just think about what a privilege it is to be doing this, to embrace the fact that we are the builders. We are the bricklayers.

Today, for example, we’re going to hear from quite an amazing collection of bricklayers that I’m really looking forward to hearing from. I want to hear what they’re building. I want to hear their stories of how they built it, why they built it.

But to do that, I need to stop moving air over these vocal cords and flapping this fleshy piece of meat around in my mouth and just stop talking. Thank you for listening.

Sunday, December 29th, 2019

Move Fast & Don’t Break Things | Filament Group, Inc.

This is the transcript of a brilliant presentation by Scott—read the whole thing! It starts with a much-needed history lesson that gets to where we are now with the dismal state of performance on the web, and then gives a whole truckload of handy tips and tricks for improving performance when it comes to styles, scripts, images, fonts, and just about everything on the front end.

Essential!

Monday, October 14th, 2019

The World-Wide Work. — Ethan Marcotte

Here’s the transcript of Ethan’s magnificent closing talk from New Adventures. I’m pretty sure this is the best conference talk I’ve ever had the honour of seeing.

Tuesday, October 8th, 2019

You really don’t need all that JavaScript, I promise

The transcript of a fantastic talk by Stuart. The latter half is a demo of Portals, but in the early part of the talk, he absolutely nails the rise in popularity of complex front-end frameworks:

I think the reason people started inventing client-side frameworks is this: that you lose control when you load another page. You click on a link, you say to the browser: navigate to here. And that’s it; it’s now out of your hands and in the browser’s hands. And then the browser gives you back control when the new page loads.

Wednesday, September 18th, 2019

Keeping it simple with CSS that scales - Andy Bell

The transcript of Andy’s talk from this year’s State Of The Browser conference.

I don’t think using scale as an excuse for over-engineering stuff—especially CSS—is acceptable, even for huge teams that work on huge products.

Saturday, June 22nd, 2019

Apollo 11 in Real-time

What a magnificent website! You can watch, read, and listen to the entire Apollo 11 mission! Do it now, or wait until July 16th when you can follow along in real time …time-shifted by half a century.

Thursday, February 14th, 2019

Talk at Bush Symposium: Notes

On the 50th anniversary of Vannevar Bush’s As We May Think, Tim Berners-Lee delivered this address in 1995.

To a large part we have MEMEXes on our desks today. We have not yet seen the wide scale deployment of easy human interfaces for editing hypertext and making links. (I find this constantly frustrating, but always assume will be cured by cheap commercial products within the year.)

Thursday, February 7th, 2019

Transcript of Tim Berners-Lee’s talk to the LCS 35th Anniversary celebrations, Cambridge Massachusetts, 1999/April/14

Twenty years ago—when the web was just a decade old—Tim Berners-Lee gave this talk, looking backwards and forwards.

For me the fundamental Web is the Web of people. It’s not the Web of machines talking to each other; it’s not the network of machines talking to each other. It’s not the Web of documents. Remember when machines talked to each other over some protocol, two machines are talking on behalf of two people.

Monday, May 7th, 2018

Pair Programming

Amber gave a lightning talk about pair programming at the Beyond Tellerrand Düsseldorf side event. Here is the transcript of that presentation.

The fact that everyone has different personalities means pairing with others shouldn’t be forced upon anyone, and even if people do pair, there is no set time limit or a set way to do so.

So, there’s no roadmap. There’s no step-by-step guide in a readme file to successfully install pair programming

Tuesday, April 10th, 2018

Fantasies of the Future: Design in a World Being Eaten by Software / Paul Robert Lloyd

The transcript of a terrific talk by Paul, calling for a more thoughtful, questioning approach to digital design. It covers the issues I’ve raised about Booking.com’s dark patterns and a post I linked to a while back about the shifting priorities of designers working at scale.

Drawing inspiration from architectural practice, its successes and failures, I question the role of design in a world being eaten by software. When the prevailing technocratic culture permits the creation of products that undermine and exploit users, who will protect citizens within the digital spaces they now inhabit?

Thursday, April 5th, 2018

Dear Developer, The Web Isn’t About You | sonniesedge.co.uk

This is absolutely brilliant!

Forgive my excitement, but this transcript of Charlie’s talk is so, so good—an equal mix of history and practical advice. Once you’ve read it, share it. I want everyone to have the pleasure of reading this inspiring piece!

It is this flirty declarative nature that makes HTML so incredibly robust. Just look at this video. It shows me pulling chunks out of the Amazon homepage as I browse it, while the page continues to run.

Let’s just stop and think about that, because we take it for granted. I’m pulling chunks of code out of a running computer application, AND IT IS STILL WORKING.

Just how… INCREDIBLE is that? Can you imagine pulling random chunks of code out of the memory of your iPhone or Windows laptop, and still expecting it to work? Of course not! But with HTML, it’s a given.

Friday, February 9th, 2018

Everything Easy is Hard Again – Frank Chimero

I wonder if I have twenty years of experience making websites, or if it is really five years of experience, repeated four times.

I saw Frank give this talk at Mirror Conf last year and it resonated with me so so much. I’ve been looking forward to him publishing the transcript ever since. If you’re anything like me, this will read as though it’s coming from directly inside your head.

In one way, it is easier to be inexperienced: you don’t have to learn what is no longer relevant. Experience, on the other hand, creates two distinct struggles: the first is to identify and unlearn what is no longer necessary (that’s work, too). The second is to remain open-minded, patient, and willing to engage with what’s new, even if it resembles a new take on something you decided against a long time ago.

I could just keep quoting the whole thing, because it’s all brilliant, but I’ll stop with one more bit about the increasing complexity of build processes and the decreasing availability of a simple view source:

Illegibility comes from complexity without clarity. I believe that the legibility of the source is one of the most important properties of the web. It’s the main thing that keeps the door open to independent, unmediated contributions to the network. If you can write markup, you don’t need Medium or Twitter or Instagram (though they’re nice to have). And the best way to help someone write markup is to make sure they can read markup.

Thursday, January 18th, 2018

Dude, you broke the future! - Charlie’s Diary

The transcript of a talk by Charles Stross on the perils of prediction and the lessons of the past. It echoes Ted Chiang’s observation that runaway AIs are already here, and they’re called corporations.

History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries—is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.

I’m talking about the very old, very slow AIs we call corporations, of course.

Saturday, January 13th, 2018

The Human Computer’s Dreams Of The Future by Ida Rhodes (PDF)

From the proceedings of the Electronic Computer Symposium in 1952, the remarkable Ida Rhodes describes a vision of the future…

My crystal ball reveals Mrs. Mary Jones in the living room of her home, most of the walls doubling as screens for projected art or information. She has just dialed her visiophone. On the wall panel facing her, the full colored image of a rare orchid fades, to be replaced by the figure of Mr. Brown seated at his desk. Mrs. Jones states her business: she wishes her valuable collection of orchid plants insured. Mr. Brown consults a small code book and dials a string of figures. A green light appears on his wall. He asks Mrs. Jones a few pertinent questions and types out her replies. He then pushes the start button. Mr. Brown fades from view. Instead, Mrs. Jones has now in front of her a set of figures relating to the policy in which she is interested. The premium rate and benefits are acceptable and she agrees to take out the policy. Here is Brown again. From a pocket in his wall emerges a sealed, addressed, and postage-metered envelope which drops into the mailing chute. It contains, says Brown, an application form completely filled out by the automatic computer and ready for her signature.

Wednesday, January 10th, 2018

Legends of the Ancient Web

An absolutely fantastic talk (as always) from Maciej, this time looking at the history of radio and its parallels with the internet (something that Tom Standage touched on in his book, Writing On The Wall). It starts as a hobbyist, fun medium. Then it gets regulated. Then it gets used to reinforce existing power structures.

It is hard to accept that good people, working on technology that benefits so many, with nothing but good intentions, could end up building a powerful tool for the wicked.

Thursday, December 21st, 2017

the bullet hole misconception

The transcript of a terrific talk on the humane use of technology.

Instead of using technology to replace people, we should use it to augment ourselves to do things that were previously impossible, to help us make our lives better. That is the sweet spot of our technology. We have to accept human behaviour the way it is, not the way we would wish it to be.

Monday, November 20th, 2017

Trolleys, veils and prisoners: the case for accessibility from philosophical ethics

The transcript of a presentation on the intersection of ethics and accessibility.

Sunday, November 19th, 2017

SpeechBoard | Home | The easiest way to edit podcasts

This is quite impressive—you edit the audio file by editing the transcript!

Tuesday, September 19th, 2017

Evaluating Technology

A presentation from the Beyond Tellerrand conference held in Düsseldorf in May 2017. I also presented a version of this talk at An Event Apart, Smashing Conference, Render, Frontend United, and From The Front.

I’m going to show you some code. Who wants to see some code?

All right, I’ll show you some code. This is code. This is a picture of code.

Photograph 51

The code base in this case is the deoxyribonucleic acid. This is literally a photograph of code. It’s the famous Photograph 51, which was taken by Rosalind Franklin, the X-ray crystallographer. And it was thanks to her work that we were able to decode the very structure of DNA.

Rosalind Franklin

It’s base-4, unlike the binary base that we work with in computers: A-C-G-T, Adenine, Cytosine, Guanine, Thymine. From those four simple ingredients we get DNA, and from DNA we get every single life form on our planet: mammals, birds, fish, plants. Everything is made of DNA. This huge variety from such simple building blocks.

Apollo 11 Mission Image - Earth view over Central and North America

What’s interesting, though, is if you look at this massive variety of life on our planet, you start to see some trends over time as life evolves through the process of natural selection. You see a trend towards specialisation, a species becoming really, really good at something as the environment selects for fitness. A trend towards ubiquity as life attempts to spread as far as possible. And, interestingly, a trend towards cooperation, that a group could be more powerful than an individual.

Now we’re no different to any other life form, and this is how we have evolved over time from simpler beginnings. I mean, we like to think of ourselves as being a more highly evolved species than other species, but the truth is that every species of life on this planet is the most highly evolved species of life on this planet because they’re still here. Every species is fit for its environment. Otherwise they wouldn’t be here.

This is the process, this long, slow process of natural selection. It’s messy. It takes a long time. It relies on errors in the code to get selected for. This is the process that we human beings have gone through, same as every other species on the planet.

But then we figured out a way to hack the process. We figured out a way to get a jumpstart on evolution, and that’s through technology. Through technology we can bypass the process of natural selection and augment ourselves, extend our capabilities like this.

Acheulean hand ax

This is a very early example of technology. It existed for millions of years in this form, ubiquitous, across the planet. This is the Acheulean hand ax. We didn’t need to evolve a sharp cutting tool at the end of our limb because, through technology, we were able to create a sharp cutting tool at the end of our limb. Then through that we were able to extend our capabilities and shape our environment.

We shape our tools and, thereafter, the tools shape us.

And we have other tools. This is a modern tool, the pencil. I’m sure you’re all familiar with it. You use it all the time. I think it’s a great piece of technology, great affordance on there. Built-in progress bar, and it’s got an undo at the end.

I, Pencil

What’s interesting is if you look at the evolution of technology and you compare it to the evolution of biology, you start to see some of the same trends; trends towards specialisation, ubiquity, and cooperation.

The pencil does one thing really, really well. The Acheulean hand ax does one thing really, really well.

All over the world you found Acheulean hand axes, and all over the world you will find the pencil in pretty much the same form.

And, most importantly of all, cooperation. No human being can make a pencil. Not by themselves. It requires cooperation.

There’s a famous book by Leonard Read called I, Pencil, and it’s told from the point of view of a pencil and describing how it requires cooperation. It requires human beings to come together to fell the trees to get the wood, to get the graphite, to put it all together. No single human being can do that by themselves. We have to cooperate to create technology.

You can try to create technology by yourself, but you’re probably going to have a hard time. Like Thomas Thwaites, he’s an artist in the U.K. You might have seen his most recent project. He tried to live as a goat for a year.

The toaster project

This is from a while back where he attempted to make a toaster from scratch. When I say from scratch, I mean from scratch. He wanted to mine his own metals. He wanted to smelt the steel. He wanted to create the plastic, wire it all up, and do it all by himself. It was a very interesting process. It didn’t really work out. I mean it worked for like a second or two when he plugged it in and then completely burned out, and it was prohibitively expensive.

When it comes to technology, cooperation is built in, along with those other trends: specialisation, ubiquity.

It’s easy to think when we compare these trends in biology and technology and we see the overlap, to fall into the trap of thinking they’re basically the same process, but they’re not. Underneath the hood the process is very different.

In biology it’s natural selection, this long, messy, slow process. But kind of like DNA, it’s very simple building blocks that result in amazing complexity. With technology it’s kind of the other way around. Nature doesn’t imagine the end result of a species and then work towards that end result. Nature doesn’t imagine an elephant or an ostrich. That’s just the end result of evolution. Whereas with technology, we can imagine things, design things, and then build them. We can picture something in our mind that we want to exist in the world and then work together to build it.

Now one of my favourite examples of imagining technology and then creating it is a design school called Chindogu, created by Kenji Kawakami. He started the International Chindogu Society in 1995. There are goals and principles behind Chindogu, and the main one is that these things, these pieces of technology, must be not exactly useful, but somehow not altogether useless.

Noodle cooler

I’ll show you what I mean and you get the idea. You look at these things and you think, uh, that’s crazy. But actually, is it crazy or is it brilliant? Like this, I think, well, that’s ridiculous. Well— actually, not entirely useless, not exactly useful, but, you know, keeping your shoes dry in the rain. That seems sort of useful.

Butter stick Shoe umbrellas

They’re described as being un-useless. These are un-useless objects. But why not? I mean why not harvest the kinetic energy of your child to clean the floors? If you don’t have a child, that’s fine. It works other ways.

Toddler mop Cat mop

These things, I mean they’re fun to imagine and to create, but you couldn’t imagine them actually in the world being used. You couldn’t imagine mass adoption. Like, I found this thing from the book of Chindogu from 1995, and it describes this device where you kind of put a camera on the end of a stick so you can take self portraits, but you couldn’t really imagine anyone actually using something like this out in the world, right?

Selfie stick

These are all examples of what we see in the history of technology. From Acheulean hand axes to pencils to Chindogu, there are bits of hardware. When we think of technology, that’s what we tend to think of: bits of hardware. And the hardware is augmenting the human. The human is using the hardware to gain benefit.

Something interesting happened in the 20th Century when we started to get another layer in between the human and the hardware, and that’s software. Then the human can interact with the software, and the software can interact with the hardware. I would say the best example of this, looking back through the history of technology of the last 100 years or so, would be the Apollo Program, the perfect mixture of human, software, and hardware.

Apollo 11 Mission Image - View of Moon limb and Lunar Module during ascent, Mare Smythii, Earth on horizon

By the way, seeing as we were just talking about selfies and selfie sticks, I just want to point out that this picture is one of the very few examples of an everyone-elsie. This picture was taken by Michael Collins in the Command Module, and Neil Armstrong and Buzz Aldrin are in that spaceship, and every human being alive on planet earth is also in this picture with one exception, Michael Collins, the person taking the picture. It’s an everyone-elsie.

I think the Apollo program is the pinnacle of human achievement so far, I would say, and a perfect example of this mixture: amazing humans required to do it, amazing hardware to get them there, and amazing software. It’s hard to imagine how it would have been possible to send people to the moon without the work of Margaret Hamilton, who wrote the onboard flight software and also helped create entire schools of thought in software engineering.

Margaret Hamilton

Since then, and looking through the trend of technology from then onwards, what you start to notice is that the hardware becomes less and less important, and the software is what really starts to count with Moore’s law and everything like that, that we can put more and more complexity into the software. Maybe the end goal of technology is eventually that the hardware becomes completely irrelevant, fades away. This idea of design dissolving in behaviour.

WWW

This idea of the hardware becoming irrelevant was, in a way, at the heart of the World Wide Web project created by Tim Berners-Lee when he was at CERN. CERN is an amazing place, but everybody just kind of does whatever they want. It’s crazy. There’s almost no hierarchy, which means everybody uses whatever kind of computer they want. You can’t dictate to people at CERN that they all must use this operating system. That was at the heart of the World Wide Web project: the idea to make the hardware irrelevant. It shouldn’t matter what kind of computer you’ve got. You should still be able to access information.

Tim Berners-Lee

We kind of take that for granted today, but it is quite a revolutionary thought. We don’t worry about it today. You make a website, of course you can look at it on a Windows device or a Mac or a Linux machine or an iPhone, an iOS device, or an Android device. Of course. But it wasn’t clear at the time. You know back at the time you would make software for specific operating systems, so this idea of making hardware irrelevant was kind of revolutionary.

The World Wide Web project is a classic example of a piece of technology that didn’t come out of nowhere. It built on what came before. Like every other piece of technology, it built on what was already there. You can’t have Twitter or Facebook without the World Wide Web, and you can’t have the World Wide Web without the Internet. You can’t have the Internet without computers. You can’t have computers without electricity. You can’t have electricity without the Industrial Revolution. Building on the shoulders of giants all the way up.

There’s also this idea of the adjacent possible. It’s about when these things become possible. You couldn’t have had the World Wide Web right after the Industrial Revolution because these other steps hadn’t yet taken place. It’s something that the author Steven Johnson talks about: the adjacent possible. It was impossible to invent the microwave oven in 16th Century Holland because there were too many other things that needed to be invented first.

It’s easy to see this as an inevitable process: of course electricity follows industrialisation; of course computers come, and of course the Internet comes. And there is a certain amount of inevitability. It happens all the time in the history of technology that there are simultaneous inventions, with people beating one another to the patent office by hours, whether it’s radio or the telephone or any of these devices that seemed inevitable.

I don’t think the specifics are inevitable. Something like the World Wide Web was inevitable, but the World Wide Web we got was not. Something like the Internet was inevitable, but not the Internet that we got.

The World Wide Web project itself has these building blocks: HTTP as the protocol, URLs as identifiers, and HTML as a simple format. Again, these formats are built upon what came before. Because it turns out that making the technology, creating a format or a protocol or a spec for identifying things, is actually not the hard part (not to belittle the work). The hard part is convincing people to use the protocol, convincing people to use the format.

Grace Hopper

That’s where you butt up against humans. How do you convince humans? Which always reminds me of Grace Hopper, an amazing computer scientist, rear admiral Grace Hopper, co-inventor of COBOL and the inventor of the compiler, without which we wouldn’t have computing as we know it today. She bumped up against this all the time, that people were reluctant to try new things. She had this phrase. She said, “Humans are allergic to change.” Now, she used to try and fight that. In fact, she used to have a clock on her wall that went backwards to simply demonstrate that it’s an arbitrary convention. You could change the convention.

She said the most dangerous phrase in the English language is, “We’ve always done it that way.” So she was right to notice that humans are allergic to change. I think we could all agree on that.

But her tactic was, “I try to change that,” whereas with Tim Berners-Lee and the World Wide Web, he sort of embraced it. He sort of went with it. He said, “Okay. I’ve got these things I want to convince people to use, but humans are allergic to change,” and that’s why he built on top of what was already there.

He didn’t create these things from scratch. HTTP, the protocol, is built on top of TCP/IP, the work of Bob Kahn and Vint Cerf. The URLs work on top of the Domain Name System and the work of Jon Postel. And HTML, this very simple format, was built on top of a format, a flavour of SGML, that everybody at CERN was already using. So it wasn’t a hard sell to get people to use HTML because it was very familiar.

In fact, if you were to look at SGML back then in use at CERN, you would see these elements.

<body> <title> <p> <h1> <h2> <h3> <ol> <ul> <li> <dl> <dt> <dd>

These are SGML elements used in CERN SGML. You could literally take a CERN SGML document, change the file extension to .htm, and it was an HTML document.

It’s true. Humans are allergic to change, so go with that. Don’t make it hard for them.

Now of course, we got these elements in HTML. This is where they came from. They were just taken wholesale from SGML. Over time, we got a whole bunch more elements. We got more semantic richness added to HTML, so we can structure our documents more clearly.

<article> <section> <aside> <figure> <main> <header> <footer>

Where it gets really interesting is that we also got more behavioural elements added to HTML, the elements that browsers recognise and do quite advanced things with like video and audio and canvas.

<canvas> <video> <audio> <picture> <datalist>

What’s interesting, and what I find fascinating, is that we can evolve a format like this. We can just keep adding things to the format. The reason we can do that is because these elements were designed with backwards compatibility built in. If you have an opening video tag and a closing video tag, you can put content in between for the browsers that don’t understand the video tag.

The same with canvas. You can put fallback content in there, so you don’t have to wait for every browser to support one of these elements. You can start using it straight away and still provide something for older browsers. That’s very deliberate.
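
Just to make that fallback pattern concrete, here’s a rough sketch; the file name and wording are placeholders:

<video src="talk.mp4" controls>
  <p>This browser doesn't understand the video element, so here's a
  <a href="talk.mp4">link to download the video</a> instead.</p>
</video>

Browsers that understand video ignore that paragraph; browsers that don’t simply render it.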

The canvas element was actually a proprietary element created by Apple and other browsers saw it and said, “Oh, yeah, we like that. We’re going to take that,” and they started standardising on it. To begin with, it was a standalone element like img. You put a closing slash there or whatever. But when it got standardised, they deliberately added a closing tag so that people could put fallback content in there. What I’m saying is it wasn’t an accident. It was designed.

Now Chris yesterday mentioned the HTML design principles, and this is one of them—that when you’re creating new elements, new attributes, you should design them in such a way that “the content can degrade gracefully in older or less capable user agents even when making use of these new elements, attributes, APIs, content models.” It is a design decision. There are HTML design principles. They’re very good.

I like design principles. I like design principles a lot. I actually collect them. I’m a bit of a nerd for design principles, and I collect them at this URL:

principles.adactio.com

There you will find design principles for software, for organisations, for people, for schools of thought. There’s Chindogu design principles I’ve collected there.

I guess why I’m fascinated by principles is where they sit. Jina talked about this yesterday in relation to a design system, in that you begin with the goals. This is like the vision, what you’re trying to achieve, and then the principles define how you’re going to achieve that. Then the patterns are the result of the principles. The principles are based on the goals, which result in the patterns.

In the case of the World Wide Web, the goal is to make hardware irrelevant. Access to information regardless of hardware. The principles are encoded in the HTML design principles, and then the patterns are those elements that we get, those elements that are designed with backwards compatibility in mind.

Now when we look at new things added to HTML, new features, new browser APIs, what we tend to ask, of course, is: how well does it work?

How well does this thing do what it claims it’s going to do? That’s an excellent question to ask whenever you’re evaluating a new technology or tool. But I don’t think it’s the most important question. I think it’s just as important to ask: how well does it fail?

How well does it fail?

If you look at those HTML elements, which have been designed that way, they fail well. They fail well in older browsers. You can have that fallback content. I think this is a good lens to look at technology through because what we tend to do, when there’s a new browser API, we go to Can I Use, and we see, well, what’s the support like? We see some green, and we see some red. But the red doesn’t tell you how well it fails.

Here’s an example: CSS shapes. If you go to caniuse.com and you look at the support, there’s some green, and there’s some red. You might think there’s not enough green, so I’m not going to use it. But what you should really be asking is, how well does it fail?

In the case of CSS shapes, here’s an example of CSS shapes in action. I’ve got a border radius on this image, and on this text here I’ve said, shape-outside: circle on the image, so the text is wrapping around that circle. How well does it fail? Well, let’s look at it in a browser that doesn’t support CSS shapes, and we see the text goes in a straight line.
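
The CSS for that example is tiny; a minimal sketch, with a made-up class name:

.circular-image {
  float: left;
  border-radius: 50%;
  /* Browsers without shape support ignore this line and wrap text around the box */
  shape-outside: circle(50%);
}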

I’d say it fails pretty well because this is what would have happened anyway, and the text wrapping around the circle was kind of an enhancement on top of what would have happened anyway. Actually, it fails really well, so you might as well go ahead and use it. You might as well go ahead and use it even if it was only supported in one browser or two browsers because it fails well.

Let’s use that lens of asking how well does it work and how well does it fail to look at some of the technologies that you’ve probably been hearing about—some of the buzzwords in the world of front-end development. Let’s start with this. This is a big buzzword these days: service workers.

Service Workers

Who has heard of service workers? Okay. Quite a few.

Who is using service workers? Not so many. Interesting.

The rest of you, you’ve heard of it, and you’re currently probably in the state of evaluating the technology, trying to decide whether you should use this technology.

I’m not going to explain how service workers work. I guess I’ll just describe what it can do. It’s an amazing piece of technology that you kind of install on the user’s machine and then it sits there like a virus intercepting requests, which sounds scary, but actually is really powerful because you can really improve performance. You can serve things from a cache. You get access to the cache API. You can make things work offline, which is kind of amazing, because you’ve got access to those requests.

I was trying to describe it the other day and the best way I could think of describing it was a service worker is like doing a man-in-the-middle attack on your own website, but in a good way—in a good way. There’s endless possibilities of what you can do with this technology. It’s very powerful. And, at the very least, you can make a nice, custom, offline page instead of the dinosaur game or whatever people would normally get when they’re offline. You can have a custom offline page in the same way you could have a custom 404 page.
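
As a very minimal sketch of that custom offline page idea (the file names are placeholders, and a real service worker would do rather more than this):

// sw.js
addEventListener('install', event => {
  // Cache the offline page when the service worker is installed
  event.waitUntil(
    caches.open('static').then(cache => cache.add('/offline.html'))
  );
});

addEventListener('fetch', event => {
  // Try the network; if the request fails, fall back to the cached offline page
  event.respondWith(
    fetch(event.request).catch(() => caches.match('/offline.html'))
  );
});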

The Guardian have a service worker on their site, and they do a crossword puzzle. You’re on the train, you’re trying to read that article, but there’s no internet connection. Well, you can play the crossword puzzle. Little things like that, so it can be used for real delight. It’s a great technology.

How well does it work? It does what it says…. You don’t get anything for free with service workers, though. A service worker file is JavaScript, which can actually be quite confusing because you’ll be tempted to treat it like your other JavaScript files and do what you would do to other JavaScript files, but don’t do that. It’s almost like service worker scripts happen to be written in JavaScript, but they require this whole new mindset. So it’s kind of hard to get your head around. It’s a new technology to learn, but it’s powerful.

Well, let’s see what the support is like on Can I Use. Not bad. Not bad at all. Some good green there, but there’s quite a bit of red. If this is the reason why you haven’t used service workers yet because you see the support and you think, “Not enough support. I’m not going to invest my time,” I think you haven’t asked the question, “how well does it fail?” This is where I think the absolute genius of service worker comes in.

Service workers fail superbly because here’s what happens with a service worker. The first time someone visits your site there is, of course, no service worker installed on the client. They must first visit your site, download the service worker script, and have that service worker installed, which means that, effectively, no browser supports service workers on the first visit.

Then, on subsequent visits, you can use the service worker for the browsers that support it as this enhancement. Provide the custom offline page. Cache those assets. Do offline first stuff. But you’re not going to harm any of those browsers that are in the red on Can I Use, and that’s deliberate in the design of service workers. It’s been designed that way. I think service workers fail really well.
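
You can see that fail-well behaviour in the registration itself; a minimal sketch, assuming the script lives at /sw.js:

// Somewhere in your page's JavaScript
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}
// Browsers in the red on Can I Use never enter this block,
// so they carry on getting the site exactly as before.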

Let’s look at another hot topic.

Web Components

Who has heard of web components? Who is using web components—the real thing now? Okay. Wow. Brave. Brave person.

Web components actually aren’t a specific technology. Web components is an umbrella term. I mean, in a way, service workers is kind of an umbrella term because it’s what you get access to through service workers that counts. You get access to the fetch API and the cache API and even notifications through a service worker.

With web components, it’s this term for a combination of specs, a combination of APIs like custom elements, the very sinister sounding shadow DOM, which is not as scary as it sounds, and there’s other things in there too like HTML imports and template. All of this stuff together is given the label web components. The idea is we’ve already got these very powerful elements in HTML, and it’s great when they get added to HTML, but it takes a long time. The standards process is slow. What if we could just make our own elements? That’s what you get to do with custom elements. You get to make shit up.

<mega-menu> <slippy-map> <image-gallery> <modal-lightbox> <off-canvas>

These are common patterns; you keep having to reinvent the wheel, so let’s make an element for that. The only requirement with a custom element is that you have to have a hyphen in there. This is kind of a long-term agreement with the spec makers that they will never make an HTML element with a hyphen in it. Therefore, it’s kind of a safe space to use a hyphen in a made-up element.

Okay, but if you just make up an element like this, it’s effectively the same as having a span in your document. It doesn’t do anything. It’s the other specs that make it come to life, like having HTML imports that link off to a file that describes what the browser is supposed to do with this new element that you’ve created.

Then in that file you could have your HTML. You could have your CSS. You could have JavaScript. And, crucially, it is modular. It doesn’t leak through. Those styles won’t leak through to the rest of the page. This is the dream we’ve been chasing: encapsulation. This is kind of the problem that React is solving. This is the reason why we have design systems, to try and be modular and try and encapsulate styles, behaviours, semantics, meaning.
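
As a rough sketch of the kind of definition that could live in that file, here’s the custom elements API in action; the element name and styles are made up:

class ImageGallery extends HTMLElement {
  connectedCallback() {
    // Project the element's images into an encapsulated shadow root;
    // styles in here don't leak out to the rest of the page
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>::slotted(img) { max-width: 100%; }</style>
      <slot></slot>
    `;
  }
}
customElements.define('image-gallery', ImageGallery);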

Web components are intended as a solution to this, so it sounds pretty great. How well does it work? Well, let’s see what the browser support is like for some parts of web components. Let’s take custom elements. Yeah, some green, but there’s an awful lot of red. Never mind: as we’ve learned from looking at things like CSS shapes and service workers, the red doesn’t tell us anything, because the lack of support in a browser doesn’t answer the question, how well does it fail? How well do web components fail?

This is where it gets interesting because the answer to the question, “How well do web components fail?” is …it depends.

It depends on how you use the web components. It depends on if you applied the same kind of design principles that the creators of HTML applied when they’re making new elements.

Let’s say you make an image-gallery element, and you make it so that the content of the image gallery is inside the open and closing tag.

<image-gallery>
  <img src="..." alt="...">
  <img src="..." alt="...">
  <img src="..." alt="...">
</image-gallery>

Now in a non-supporting browser this is actually acceptable because they won’t understand what this image-gallery thing is. They won’t throw an error because HTML is very tolerant of stuff it doesn’t understand. They’ll just display the images as images. That’s acceptable.

Now in a browser that supports web components, all those different specs, you can take these images, and you can whiz-bang them up into a swishy carousel with all sorts of cool stuff going on that’s encapsulated; that you can share with other people; that people can just drop into their site. If you do this, web components fail very well. However, what I tend to see when I see web components in use is more like this where it’s literally an opening tag, closing tag, and all of the content and all the behaviour and all the styling is away somewhere else being pulled in through JavaScript, creating a kind of single source of failure.

<image-gallery>
</image-gallery>

In fact, there’s demo sites to demonstrate the power of web components that do this. The Polymer Project, there’s a whole collection of web components, and they created an entire online shop to demonstrate how cool web components are, and this is the HTML of that shop.

<body>
  <shop-app>
  </shop-app>
  <script>...</script>
</body>

The body element simply contains a shop-app custom element and then a script, and all the power is in the script. Here the web component fails really badly because you get absolutely nothing. That’s what I mean when I say it depends. It depends entirely on how we use them.

Now the good news is, as we saw from looking at Can I Use, it’s very early days with web components. We haven’t figured out yet what the best practices are, so we can set the course of the future here. We can decide that there should be design principles for how we collectively use this powerful technology like web components.

See, the exciting thing about web components is that they give us developers the same power that previously only browser makers had. But the scary thing about web components is that they give us developers the same power that previously only browser makers had. With great power, et cetera, et cetera, and we should rise to the challenge of that responsibility.

What’s interesting about both these things we’re looking at is that, like I said, they’re not really a single technology in themselves. They’re kind of these umbrella terms. With service worker it’s an umbrella term for fetch and cache and notifications, background sync — very cool stuff. With web components it’s an umbrella term for custom elements and HTML imports and shadow DOM and all this stuff.

But they’re both coming from the same place, the same sort of point of view, which is this idea that we, web developers, should be given that power and that responsibility to have access to these low-level APIs rather than just waiting for standards bodies to give us access through new APIs. This is all encapsulated in a school of thought called The Extensible Web, that we should have access to these low-level APIs.

The Extensible Web is effectively just that: a manifesto. There’s literally a manifesto for The Extensible Web. It’s just a phrase. It’s not a technology, just words, but words are very powerful when it comes to technology, when it comes to adopting technology. Words can get you very far. Ajax is just a word. It’s just a word for technologies that already existed at the time, but Jesse James Garrett put a word on it, and it made it easier to talk about it, and it helped the adoption of those technologies.

Responsive Web Design: what Ethan did was put a phrase to a collection of technologies: media queries, fluid layouts, fluid images. He wrapped it all up in a very powerful term, Responsive Web Design, and the web was never the same.

Progressive Web Apps

Here’s a term you’ve probably heard of over the last couple of days: progressive web apps. Anybody who went to the Microsoft talk yesterday at lunchtime would have heard about progressive web apps. It’s just a term. It’s just an umbrella term for other technologies underneath. A progressive web app is the combination of having your site run over HTTPS, so it’s secure (which, by the way, is a requirement for running a service worker), having a service worker, and also having a manifest file, which contains all this metadata. Chris mentioned it yesterday. You point to your icons and metadata about your site. All that adds up to: hey, you’ve got a progressive web app.
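
As a rough sketch, the manifest is just a bit of JSON that you point to from your pages with a link element (rel="manifest"); every name and value here is a placeholder:

{
  "name": "Example Site",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}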

It’s a good-sounding term; I like it. It was created by Frances Berriman and her husband, Alex Russell, to describe this. Again, a little bit of a manifesto in that these sites should be responsive and intuitive and they need to fulfil these criteria. But I worry sometimes about the phrasing. I mean, all the technologies are great. And you will actually get rewarded if you use these technologies. If you use HTTPS, you’ve got a service worker, and you’ve got a manifest file, then on Chrome for Android, if someone visits your site a couple of times, they’ll be prompted to add the site to the home screen just as though it were a native app. It will behave like a native app in the app switcher. You’re getting rewarded for these best practices.

But when I see the poster children for progressive web apps, my heart sinks when I see stuff like this. This is the Washington Post progressive web app, but this is what you get if you visit on the “wrong” device. In this case I’m visiting on a desktop browser, and I’m being told to come back with a mobile browser. Oh, how the tables have turned! It was not that long ago when we were being turned away on our mobile devices, and now we’re turning people away on desktops.

This was a solved problem. We did this with responsive web design. The idea of having a separate site for your progressive web app - no, no, no. We’re going back to the days of m.sites and the “real” website. No. No. I feel this is the wrong direction.

I worry that maybe this progressive web app terminology, and the way that Google are pushing this app shell model, might be hurting it. Anything can be a progressive web app, anything on the web.

I mean I’ve got things that I’ve turned into progressive web apps, and some of them might be, okay, maybe you consider this site, Huffduffer, as an app. I don’t know what a web app is, but people tell me it might be a web app. But I’ve also got like a community website, and it fulfils all the criteria. I guess it’s a progressive web app. My personal site, it’s a blog, but technically it’s a progressive web app. I put a book online. A book is an app now because it fulfils all the criteria. Even a single page collecting design principles is technically a progressive web app.

I worry about the phrasing, potentially limiting people when they come to evaluate the technology. “Oh, progressive web app, well, that’s not for me because I’m not building apps. I’m building some other kind of site.” I think that would be a real shame because literally every site on the web can benefit from those technologies, which brings me to the next question when we’re evaluating technology. Who benefits from the technology?

Who benefits?

Broadly speaking, I would say there’s kind of two schools of who could benefit from a particular technology on the Web. Does the technology benefit the developer or does the technology benefit the user? Much like what Chris was showing yesterday with the Tetris blocks and kind of going on a scale from technologies that benefit users to technologies that benefit developers.

Now I would say that nine times out of ten there is no conflict. Nine times out of ten a piece of technology is beneficial to the developer and beneficial to the user. You could argue that any technology that benefits the developer is de facto a benefit to the user because the developer is working better, working faster, therefore they can get the website out, and that’s good for the user.

Let’s talk about technologies that directly impact users versus the technologies that directly impact developers. Now personally I’m going to generally fall down on the side of technologies that benefit users over technologies that benefit developers. I mean, you look at something like service workers. There isn’t actually a benefit to developers. If anything, there’s a tax because you’ve got to get your head around service workers. You’ve got a new thing to learn. You’ve got to get your head down, learn how it works, write the code. It’s actually not beneficial for developers, but the end result—offline pages, faster performance—hugely beneficial for users. I’ll fall down on that side.

Going back to when I told you I was a nerd for design principles. Well, I actually have a favourite design principle and it’s from the HTML design principles. It’s the one that Chris mentioned yesterday morning. It’s known as the priority of constituencies:

In case of conflict, consider users over authors over specifiers over theoretical purity.

That’s pretty much the way I evaluate technology too. I think of the users first. And the authors, that’s us, we have quite a strong voice in that list, but it is second to users.

Now when we’re considering the tools and we’re evaluating who benefits from this tool, “Is it developers, or is it users, or is it both?” I think we need to stop and make a distinction about the kinds of tools we work with. I’m trying to work out how to phrase this distinction, and I kind of think of it as inward facing tools and outward facing tools: inward facing tools developers use; outward facing tools that directly touch end users.

I’ll show you what I mean. These are like the inward facing tools in that you put them on your computer. They sit on your computer, but the end output is still going to be HTML, CSS, and JavaScript. These are tools to make you work faster: task runners, version control, build tools—all that kind of stuff.

Now when it comes to evaluating these technologies, my attitude is, whatever works for you. Now we can have arguments and say, “Oh, I prefer this tool over that tool”, but it really doesn’t matter. What matters is: does it work for you? Does it make you work faster? Does it make your team work faster? That’s really the only criterion, because none of these directly touch the end user.

That criterion of, “Hey, what works for me”, that’s a good one to apply for these inward facing tools, but I think we need to apply different criteria for the outward facing tools, the tools that directly affect the end user because, yes, we, developers, get benefit from these frameworks and libraries that are written in CSS and JavaScript, but the user pays a tax. The user pays a tax in the download of these things when they’re on the client. It’s actually interesting to see how a lot of these JavaScript frameworks have kind of shifted the pendulum, where it used to be that the user had to pay a tax if you wanted to use React or Angular or Ember. The pendulum is swinging back so that we can get the best of both worlds, where you can use these tools as inward facing tools, use them on the server, and still get the benefit to the user without the user having to pay this tax.

I think we need to evaluate inward facing tools and outward facing tools with different criteria. Now when it comes to evaluating tools, especially tools that directly affect the end user—CSS frameworks, JavaScript libraries, things like that—there’s a whole bunch of questions to ask to evaluate the technology, questions like: what’s the browser support like? What browsers does this tool not work in? What’s the community like? Am I going to get a response to my questions? How big is the file size? How much of a tax is the user going to have to download? All of these are good questions, but they are not the most important question.

The most important question—I’d say this is true of evaluating any technology—is, what are the assumptions?

What are the assumptions?

What are the assumptions that have been baked into the tool you’re about to use, because I guarantee you there are assumptions baked into those tools. I know that because those tools were created by humans. And we humans, we have biases. We have assumptions, and we can’t help but encode those biases and assumptions into what we make. It’s true of anything we make. It’s particularly true of software.

We talk about opinionated software. But in a way, all software is opinionated. You just have to realise where the opinions lie. This is why you can get into this situation where we’re talking about frameworks and libraries, and one person is saying, “Oh, this library rocks”, and the other person is saying, “No, this library sucks!” They’re both right and they’re both wrong because it entirely depends on how well the philosophy of that tool matches your own philosophy.

If you’re using a tool that’s meant to extend your capabilities and that tool matches your own philosophy, you will work with the tool, and you will work faster and better. But if the philosophy of the tool has a mismatch with your own philosophy, you’re going to fight that tool every step of the way. That’s why people can be right and wrong about these frameworks. What works for one person doesn’t work for another. All software is opinionated.

It makes it really hard to try and create un-opinionated software. At Clearleft we’ve got this tool. It’s an open source project now called Fractal for building pattern libraries, working with pattern libraries. The fundamental principle behind it was that it should be as agnostic as possible, completely agnostic to build tools, completely agnostic to templating languages, that it should be able to work just about anywhere. It turns out it’s really, really hard to make agnostic software because you keep having to make decisions that favour one thing over another at every step.

Whether it’s writing the documentation or showing an example, you have to show the example in some templating language. You have to choose a winner in the documentation to demonstrate something. It’s really hard to write agnostic software. Every default you add to a piece of software shows your assumptions because those defaults matter.

But I don’t want to make it sound like these tools have a way of working and there’s no changing it, that the assumptions are baked in and there’s nothing you can do about it; that you can’t fight against those assumptions. Because there are examples of tools being used other than the uses for which they were intended right throughout the history of technology. I mean, when Alexander Graham Bell created the telephone, he thought that people would use it to listen to concerts that were happening far away. When Edison created the gramophone, he thought that people would record their voices so they could have conversations at a distance. Those two technologies ended up being used for the exact opposite purposes to what their inventors intended.

Hedy Lamarr

Here’s an example from the history of technology from Hedy Lamarr, a star of the silver screen in the first half of the 20th Century here in Europe. She ended up married to an Austrian industrialist and arms manufacturer. After the Anschluss, she would sit in on his meetings taking notes. Nobody paid much attention to her, but she was paying attention to the technical details.

She managed to get out of Nazi-occupied Europe, which was a whole adventure in itself. She made her way to America, and she wanted to do something for the war effort, particularly after an incident where a refugee ship was sunk by a torpedo. A whole bunch of children lost their lives, and she wanted to do something to make it easier to get the U-boats. She worked on a system for torpedoes. It was basically a guidance system for radio-controlled torpedoes.

The problem is, if you have a radio frequency you’re using to control the torpedo to guide it towards its target, if the enemy figure out what the frequency is, they can jam the signal and now you can no longer control the torpedo. Together with a composer named George Antheil, Hedy Lamarr came up with this system for constantly switching the frequency, so both the torpedo and the person controlling it are constantly switching the radio frequency to the same place, and now it’s much, much harder to jam that transmission.

Okay. But what’s that got to do with us, some technology for guided missiles in World War II? In this room, I’m guessing you’ve got devices that have WiFi and Bluetooth and GPS, and all of those technologies depend on frequency hopping. That wasn’t the use for which it was created, but that’s the use we got out of it.

We can kind of bend technology to our will, and yet there seems to be a lot of times this inevitability to technology. I don’t mean on the front-end where it’s like, “I guess I have to learn this JavaScript framework” because it seems inevitable that everyone must learn this JavaScript framework. Does anyone else feel disempowered by that, that feeling of, “uh, I guess I have to learn that technology because it’s inevitable?”

I get that out in the real world as well: “I guess this technology is coming”, you know, with self-driving cars, machine learning, whatever it happens to be. I guess we’ve just got to accept it. There’s even this idea of technological determinism that technology is the driving force of human history. We’re just along for the ride. It’s the future. Take it.

The ultimate extreme of this attitude of technological determinism is the idea of the technological singularity, kind of like the rapture for the nerds. It’s an idea borrowed from cosmology, where you have a singularity at the heart of a black hole. A star collapses down until it’s as dense as possible, and it creates a singularity. Nothing can escape, not even light.

The point is there’s an event horizon around a black hole, and it’s impossible from outside the event horizon to get any information from what’s happening beyond the event horizon. With a technological singularity, the idea is that technology will advance so quickly and so rapidly there will be an event horizon, and it’s literally impossible for us to imagine what’s beyond that event horizon. That’s the technological singularity. It makes me uncomfortable.

But looking back over the history of technology and the history of civilisation, I think we’ve had singularities already. I think the Agricultural Revolution was a singularity because, if you tried to describe to nomadic human beings before the Agricultural Revolution what life would be like when you settle down and work on farms, it would be impossible to imagine. The Industrial Revolution was kind of a singularity because it was such a huge change from agriculture. And we’re probably living through a third singularity now, an information age singularity.

But the interesting thing is, looking back at those previous singularities, they didn’t wipe away what came before. Those things live alongside. We still have agriculture at the same time as having industry. We still have nomadic peoples, so it’s not like everything gets wiped out by what comes after.

In fact, Kevin Kelly, who is a very interesting character, writes about technology. In one of his books he wrote that no technology has ever gone extinct, which actually sounds like a pretty crazy claim, but try and disprove it. He doesn’t mean a technology sitting in a museum somewhere. He means that somewhere in the world somebody is still using that piece of technology, some ancient piece of farming equipment, some ancient piece of computer equipment.

He writes these very provocative sorts of books with titles like What Technology Wants and The Inevitable, which makes it sound like he’s on the side of technological determinism, but actually his point is a bit more subtle. He’s trying to point out that there is an inevitability to what’s coming down the pipe with these technologies, but we shouldn’t confuse that with not being able to control it and not being able to steer the direction of those technologies.

Like I was saying, something like the World Wide Web was inevitable, but the World Wide Web we got was not. I think it’s true of any technology. We can steer it. We can choose how we use the technologies.

Looking at Kevin Kelly and his impressive facial hair, you might be forgiven for thinking that he’s Amish. He isn’t Amish, but he would describe himself as Amish-ish in that he’s lived with the Amish, and he thinks we can learn a lot from the Amish.

It turns out they get a very bad reputation. People think that the Amish reject technology. It’s not true. What they do is they take their time.

The Amish are steadily adopting technology at their pace. They are slow geeks.

I think we could all be slow geeks. We could all be a bit more Amish-ish. I don’t mean in our dress sense or facial hair. I mean in the way that we are slow geeks and we ask questions of our technology. We ask questions like, “How well does it work?” but also, “How well does it fail?” That we ask, “Who benefits from this technology?” And perhaps most importantly that we ask, “What are the assumptions of those technologies?”

Because when I look back at the history of human civilisation and the history of technology, I don’t see technology as the driving force; that it was inevitable that we got to where we are today. What I see as the driving force are people, remarkable people, it’s true, but people nonetheless.

Rosalind Franklin Margaret Hamilton Grace Hopper Hedy Lamarr

And you know who else is remarkable? You’re remarkable. And your attitude shouldn’t be, “It’s the future. Take it.” It should be, “It’s the future. Make it.” And I’m looking forward to seeing the future you make. Thank you.