Here are the slides from my opening keynote at Beyond Tellerrand on Thursday. They don’t make much sense out of context.
Saturday, November 16th, 2019
I’m really pleased with how this turned out. I wasn’t sure if anybody was going to be interested in the deep dive into history that I took for the first 15 or 20 minutes, but lots of people told me that they really enjoyed that part, so that makes me happy.
Monday, November 11th, 2019
The slides from Laura’s excellent talk at FF Conf on Friday.
FF Conf 2019
Friday was FF Conf day here in Brighton. This was the eleventh(!) time that Remy and Julie have put on the event. It was, as ever, excellent.
It’s a conference that ticks all the boxes for me. For starters, it’s a single-track event. The more I attend conferences, the more convinced I am that multi-track events are a terrible waste of time for attendees (and a financially bad model for organisers). I know that sounds like a sweeping generalisation, but ask me about it next time we meet and I’ll go into more detail. For now, I just want to talk about this mercifully single-track conference.
FF Conf has built up a rock-solid reputation over the years. I think that’s down to how Remy curates it. He thinks about what he wants to know and learn more about, and then thinks about who to invite to speak on those topics. So every year is like a snapshot of Remy’s brain. By happy coincidence, a snapshot of Remy’s brain right now looks a lot like my own.
You could tell that Remy had grouped the talks together in themes. There was a performance-themed chunk right after lunch. There was a people-themed chunk in the morning. There was a creative-coding chunk at the end of the day. Nice work, DJ.
I think it was quite telling what wasn’t on the line-up. There were no talks about specific libraries or frameworks. For me, that was a blessed relief. The only technology-specific talk was Alice’s excellent talk on Git—a tool that’s useful no matter what you’re coding.
One of the reasons why I enjoyed the framework-free nature of the day is that most talks—and conferences—that revolve around libraries and frameworks are invariably focused on the developer experience. Think about it: next time you’re watching a talk about a framework or library, ask yourself how it impacts user experience.
At FF Conf, the focus was firmly on people. In the case of Laura’s barnstorming presentation, those people are end users (I’m constantly impressed by how calm and measured Laura remains even when talking about blood-boilingly bad behaviour from the tech industry). In the case of Amina’s talk, the people are junior developers. And for Sharon’s presentation, the people are everyone.
One of the most useful talks of the day was from Anna who took us on a guided tour of dev tools to identify performance improvements. I found it inspiring in a very literal sense—if I had my laptop with me, I think I would’ve opened it up there and then and started tinkering with my websites.
Harry also talked about performance, but at Remy’s request, it was more business focused. Specifically, it was focused on Harry’s consultancy business. I think this would’ve been the perfect talk for more of an “industry” event, whereas FF Conf is very much a community event: Harry’s semi-serious jibes about keeping his performance secrets under wraps didn’t quite match the generous tone of the rest of the line-up.
(Oh, and Remy, when you start to put together the line-up for next year’s FF Conf, be sure to check out Charlotte Dann—her talk at Material was the perfect mix of code and creativity.)
I don’t think I can take credit for Charlotte being on the line-up, but I will take credit for saying she’d be the perfect fit.
And then Suz Hinton closed out the conference with this rallying cry that resonated perfectly with Laura’s talk:
Less mass-produced surveillance bullshit and more Harry Potter magic (please)!
I think that rallying cry could apply equally well to conferences, and I think FF Conf is a good example of that ethos in action.
Wednesday, October 30th, 2019
Cassie’s excellent talk on SVG animation is well worth your time.
Tuesday, October 22nd, 2019
The IndieWeb Movement: Owning Your Data and Being the Change You Want to See in the Web · Jamie Tanna
A great introduction to indie web building blocks from Jamie.
Sunday, October 20th, 2019
A terrific—and fun!—talk from Zach about site deaths, owning your own content, and the indie web.
Oh, and he really did create MySpaceBook for the talk.
Friday, October 18th, 2019
At the start of this month I was in Amsterdam for a series of back-to-back events: Indie Web Camp Amsterdam, View Source, and Fronteers. That last one was where Remy and I debuted the talk we’d been working on.
The Fronteers folk have been quick off the mark so the video is already available. I’ve also published the text of the talk here:
This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.
Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.
That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about half way through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.
At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.
But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.
I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.
There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.
We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:
It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.
This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.
Wednesday, October 16th, 2019
Our story begins with the Big Bang.
13.8 billion years ago
This sets a chain of events in motion that gives us elementary particles, then more complex particles like atoms, which form stars and planets, including our own, on which life evolves, which brings us to the recent past when this whole process results in the universe generating a way of looking at itself: physicists.
A physicist is the atom’s way of knowing about atoms.
By the end of World War Two, physicists in Europe were in short supply. If they hadn’t already fled during Hitler’s rise to power, they were now being actively wooed away to the United States.
64 years ago
To counteract this brain drain, a coalition of countries forms the European Organization for Nuclear Research, or to use its French acronym, CERN.
They get some land in a suburb of Geneva on the border between Switzerland and France, where they set about smashing particles together and recreating the conditions that existed at the birth of the universe.
Every year, CERN is host to thousands of scientists who come to run their experiments.
Fast forward to February 2019, when a group of nine of us were invited to CERN as an elite group of hackers to recreate a different experiment.
We are there to recreate a piece of software first published 30 years ago. Given this goal, we need to answer some important questions first:
- How does this software look and feel?
- How does it work?
- How do you interact with it?
- How does it behave?
The software is so old that it doesn’t run on any modern machines, so we have a NeXT machine specially shipped from the nearby museum. This is no ordinary machine. It was one of only two NeXT machines that existed at CERN in the late 80s.
Now we have the machine to run this special software.
By some fluke the good people of the web have captured several different versions of this software and published them on GitHub.
So we select the oldest version we can find. We download it from GitHub to our computers. Now we have to transfer it to the NeXT machine.
Except there’s no USB drive. It didn’t exist. CD-ROM? Floppy drive? The NeXT computer had a “floptical drive”—bespoke to NeXT computers—all very well, but in 2019 we don’t have those drives.
To transfer the software from our machines to the NeXT machine, we need to use the network.
62 years ago
In 1957, J.C.R. Licklider was the first person to publicly demonstrate the idea of time sharing: linking one computer to another.
56 years ago
Six years later, he expanded on the idea in a memo that described an Intergalactic Computer Network.
By this time, he was working at ARPA: the Department of Defense’s Advanced Research Projects Agency. They were very interested in the idea of linking computers together, for very practical reasons.
America’s military communications had a top-down command-and-control structure. That was a single point of failure. One pre-emptive strike and it’s game over.
The solution was to create a decentralised network of computers that used Paul Baran’s brilliant idea of packet switching to move information around the network without any central authority.
This idea led to the creation of the ARPANET. Initially it connected a few universities. The ARPANET grew until it wasn’t just computers at each endpoint; it was entire networks. It was turning into a network of networks …an internetwork, or internet, for short. In order for these networks to play nicely with one another, they needed to agree on using the same set of protocols for packet switching.
Bob Kahn and Vint Cerf crafted the simplest possible set of low-level protocols: the Transmission Control Protocol and the Internet Protocol. TCP/IP.
TCP/IP is deliberately dumb. It doesn’t care about the contents of the packets of data being passed around the internet. People were then free to create more task-specific protocols to sit on top of TCP/IP.
There are protocols specifically for email, for example. Gopher is another example of a bespoke protocol. And there’s the File Transfer Protocol, or FTP.
Back in our war room in 2019, we finally work out that we can use FTP to get the software across. FTP is an arcane protocol, but we can agree that it will work across the two eras.
Although we do have to manually install FTP servers onto our machines: FTP doesn’t ship with modern machines because it’s generally considered insecure.
Now we finally have the software installed on the NeXT computer and we’re able to run the application.
We double-click the shaded-looking, partly hand-drawn icon with a lightning bolt on it, and we wait…
Once the software’s finally running, we’re able to see that it looks a bit like an ancient word processor. We can read, edit, and open documents. There are some basic styles and lots of heavy margins. There’s some super-weird menu navigation in place.
But there’s something different about this software. Something that makes this more than just a word processor.
These documents, they have links…
Ted Nelson is fond of coining neologisms. You can thank him for words like “intertwingled” and “teledildonics”.
56 years ago
He also coined the word “hypertext” in 1963. It is defined by what it is not.
Hypertext is text which is not constrained to be linear.
Ever played a “choose your own adventure” book? That’s hypertext. You can jump from one point in the book to a different point that has its own unique identifier.
The idea of hypertext predates the word. In 1945, Vannevar Bush published a visionary article in The Atlantic Monthly called As We May Think.
He imagines a mechanical device built into a desk that can summon reams of information stored on microfilm, allowing the user to create “associative trails” as they make connections between different concepts. He calls it the Memex.
Also in 1945, a young American named Douglas Engelbart has been drafted into the navy and is shipping out to the Pacific to fight against Japan. Literally as the ship is leaving the harbour, word comes through that the war is over. He still gets shipped out to the Philippines, but now he’s spending his time lounging in a hut reading magazines. That’s how he comes to read Vannevar Bush’s Memex article, which lodges in his brain.
51 years ago
Douglas Engelbart decides to dedicate his life to building the computer equivalent of the Memex.
On December 9th, 1968, he unveils his oNLine System—NLS—in a public demonstration. Not only does he have a working implementation of hypertext, he also shows collaborative real-time editing, windows, graphics, and oh yeah—for this demo, he invents the mouse.
It truly is The Mother of All Demos.
39 years ago
There were a number of other attempts at creating hypertext systems. In 1980, a young computer scientist named Tim Berners-Lee found himself working at CERN, where scientists were having a heck of a time just keeping track of information.
He created a system somewhat like Apple’s HyperCard, but with clickable links. He named it ENQUIRE, after a Victorian book of manners called Enquire Within Upon Everything.
ENQUIRE didn’t work out, but Tim Berners-Lee didn’t give up on the problem of managing information at CERN. He thinks about all the work done before: Vannevar Bush’s Memex; Ted Nelson’s Xanadu project; Douglas Engelbart’s oNLine System.
A lot of hypertext ideas really are similar to a choose-your-own-adventure: jumping around from point to point within a book. But what if, instead of imagining a hypertext book, we could have a hypertext library? Then you could jump from one point in a book to a different point in a different book in a completely different part of the library.
In other words, what if you took the world of hypertext and the world of networks, and you smashed them together?
30 years ago
On the 12th of March, 1989, Tim Berners-Lee circulates the first draft of a document titled Information Management: A Proposal.
The diagrams are incomprehensible. But his supervisor at CERN, Mike Sendall, sees the potential. He reads the proposal and scrawls these words across the top: “vague, but exciting.”
Tim Berners-Lee gets the go-ahead to spend some time on this project. And he gets the budget for a nice shiny NeXT machine. With the support of his colleague Robert Cailliau, Berners-Lee sets about making his theoretical project a reality. They kick around a few ideas for the name.
They thought of calling it The Mesh. They thought of calling it The Information Mine, but Tim rejected that, knowing that whatever they called it, the words would be abbreviated to letters, and The Information Mine would’ve seemed quite egotistical.
So, even though it’s only going to exist on one single computer to begin with, and even though the letters of the abbreviation take longer to say than the words being abbreviated, they call it …the World Wide Web.
As Robert Cailliau told us, they were thinking “Well, we can always change it later.”
Tim Berners-Lee brainstorms a new protocol for hypertext called the HyperText Transfer Protocol—HTTP.
He thinks about a format for hypertext called the Hypertext Markup Language—HTML.
He comes up with an addressing scheme that uses Universal Document Identifiers—UDIs, later renamed to URIs, and later renamed again to URLs.
But he needs to put it all together into running code. And so Tim Berners-Lee sets about writing a piece of software…
Tim Berners-Lee’s document is a proposal at that stage 30 years ago. It’s just theory. So he needs to build a prototype to actually demonstrate how the World Wide Web would work.
The NeXT computer is the perfect platform for rapid software development, because the NeXT operating system ships with a program called NSBuilder.
NSBuilder is software to build software. In fact, the “NS” (meaning NeXTSTEP) can still be found in software today: you’ll find references to NSText in Safari and Mac developer documentation.
Using NSBuilder, Tim Berners-Lee was able to create a working prototype of this software in just six weeks.
He called it: WorldWideWeb.
We finally have the software working the way it ran 30 years ago.
But our project is to replicate this browser so that you can try it out, and see how web pages look through the lens of 1990.
But HTTPS doesn’t work. There was no HTTPS back then. There was no HTTP/2. HTTP/1.0 hadn’t even been invented.
So I make a proxy. Effectively a monster-in-the-middle attack on all web requests, stripping the SSL layer and then returning the HTML over the HTTP/0.9 protocol.
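Here’s a rough sketch of that idea in Node.js (illustrative only, not the project’s actual code): accept a plain HTTP request, fetch the real page over HTTPS, and hand back just the HTML with no status line and no headers, the way an HTTP/0.9 response would.

```javascript
// A sketch of the SSL-stripping proxy idea — illustrative, not the
// actual code from the project.
const http = require('http');
const https = require('https');

http.createServer((req, res) => {
  const host = req.headers.host || 'example.com'; // hypothetical default
  // Fetch the real page over HTTPS on the modern web...
  https.get(`https://${host}${req.url}`, (upstream) => {
    let body = '';
    upstream.on('data', (chunk) => (body += chunk));
    upstream.on('end', () => {
      // ...then reply in HTTP/0.9 style: no status line, no headers,
      // just the document, written straight to the socket.
      res.socket.write(body);
      res.socket.end();
    });
  });
}).listen(8080);
```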
And finally, we see…
We see junk.
We can see the text content of the website, but there are a lot of HTML junk tags being spat out onto the screen, particularly at the start of the document.
<h1> <h2> <h3> <h4> <h5> <h6> <ol> <ul> <li> <p>
These tags are probably very familiar to you. You recognise this language, right?
That’s right. It’s SGML.
SGML is the successor to GML, which supposedly stands for Generalised Markup Language. But that may well be a backronym. The format was created by Goldfarb, Mosher, and Lorie: G, M, L.
SGML is supposed to be short for Standard Generalised Markup Language.
A flavour of SGML was already being used at CERN when Tim Berners-Lee was working on his World Wide Web project. Rather than create a whole new format from scratch, he repurposed what people were already familiar with. This was his HyperText Markup Language, HTML.
One thing he did add was a new tag: A, for anchor. Its href attribute is short for “hypertext reference”. Plop a URL in there and you’ve got a link.
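So a link is as simple as this (with a made-up URL):

```html
<p>See the <a href="https://example.com/proposal">original proposal</a>.</p>
```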
The hypertext community thought this was a terrible way to make links.
They believed that two-way linking was vital. With two-way linking, the linked resource connects back to where the link originates. So if the linked resource moved, the link would stay intact.
That’s not the case with the World Wide Web. If the linked resource moves, the link is broken.
Perhaps you’ve experienced broken links?
When Tim Berners-Lee wrote the code for his WorldWideWeb browser, there was a grand total of 26 tags in HTML. I know that we’d refer to them as elements today, but that term wasn’t being used back then.
Now there are well over 100 elements in HTML. The reason why the language has been able to expand so much is down to the way web browsers today treat unknown elements: ignore any opening and closing tags you don’t recognise and only render the text in between them.
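For example, a browser encountering a made-up element simply renders the text inside it:

```html
<!-- A hypothetical unknown element: the tags are ignored,
     but the text between them still renders. -->
<fancy-aside>This text still shows up on the page.</fancy-aside>
```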
The parsing algorithm was brittle (when compared to modern parsers). There was no DOM tree being built up. Indeed, the DOM didn’t exist.
Remember that WorldWideWeb was a browser that effectively smooshed together a word processor and network requests; the styling method was based (mostly) around adding margins as the tags were parsed.
Kimberly Blessing was digging through the original 7344 lines of code for the WorldWideWeb source. She found the code that could explain why we were seeing junk.
In this case, when the parser encountered <link rel="…">, it would see the < and think, “Yes, a tag; let’s slurp it up.” Then it would read li and think, “This looks like a list item, good stuff.” Then it would encounter the n (of link) and, excusing the parsing algorithm because it was the first of its kind, it would abort the style it was about to apply and promptly spit out the rest of the content on screen, having already swallowed up the first four characters:
k rel="stylesheet" href="...">
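As a toy illustration (in JavaScript, nothing like the original Objective-C source), a parser that greedily matches known tags by their first characters trips up in exactly this way:

```javascript
// A toy model of the bug — not the real WorldWideWeb parser, just
// the gist: match a known tag greedily, then bail on surprises.
function parseTag(input) {
  const knownTags = ['li', 'ol', 'ul', 'p']; // a small, fixed set
  for (const tag of knownTags) {
    if (input.startsWith('<' + tag)) {
      const next = input[1 + tag.length]; // character after '<li'
      if (next !== '>' && next !== ' ') {
        // The 'n' of '<link' is unexpected: abort the list-item style,
        // but the characters '<lin' have already been consumed...
        return { aborted: tag, rest: input.slice(2 + tag.length) };
      }
      return { matched: tag, rest: input.slice(2 + tag.length) };
    }
  }
  return { matched: null, rest: input };
}

console.log(parseTag('<link rel="stylesheet" href="...">').rest);
// → 'k rel="stylesheet" href="...">'
```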
With that, we made the executive design decision to strip out any elements that were unknown to the original WorldWideWeb browser, such as img (of course, there was no image support in the world’s first browser).
This is the first little cheat we applied, so that the page would be more pleasing to you, the visitor of our emulator. Otherwise you’d be presented with a lot of scary-looking junk.
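Something along these lines (a sketch; the real list of 1990 tags is longer):

```javascript
// A sketch of the cheat: strip any tags the 1990 browser wouldn't
// recognise before handing the HTML over. Tag list abbreviated.
const KNOWN_1990_TAGS = ['a', 'p', 'ol', 'ul', 'li', 'h1', 'h2', 'h3'];

function stripUnknownTags(html) {
  return html.replace(/<\/?([a-z][a-z0-9-]*)[^>]*>/gi, (match, name) =>
    KNOWN_1990_TAGS.includes(name.toLowerCase()) ? match : ''
  );
}

console.log(stripUnknownTags('<p>Hello <img src="cat.gif"> world</p>'));
// → '<p>Hello  world</p>'
```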
So now we have all the reference material we need to be able to replicate this browser:
- The machine running the original operating system, which gives us colours, fonts, menus and so on.
- The browser itself, how windows behave, what’s in the menus, what makes the experience unique to that period of time.
- And finally how it looks when we visit URLs.
So off we go.
While Remy set about recreating the functionality of the WorldWideWeb browser, Angela was recreating the user interface using CSS.
Inputs. Buttons. Icons. Menus. All with the exact borders, highlights and shadows used in the UI of the NeXT operating system, including having the scrollbar on the left side of windows.
Meanwhile the rest of us were putting together an explanatory website to give some backstory to what we were doing. I spent most of my time working on a timeline showing thirty years before and thirty years after the original proposal for the web.
The WorldWideWeb browser inherited fonts from the NeXTSTEP operating system. It mostly used Helvetica and a font called Ohlfs (created by Keith Ohlfs). Helvetica is ubiquitous but Ohlfs was never seen outside of a NeXT machine.
Our teammates Mark and Brian were obsessed with accurately recreating the typography. We couldn’t use modern fonts, which are vector-based. We needed pixeliness.
So Mark and Brian took a screenshot of the NeXT machine’s alphabet. With help from afar from font designer David Jonathan Ross, they traced each square pixel in a vector program and then exported that as a web font. Now we’ve got a web font that deliberately isn’t anti-aliased. It’s a vector format that recreates the look of a bitmap.
Put the pixelly font together with the CSS interface elements and you’ve got something that really looks like the old WorldWideWeb program.
This is the final product of our work at CERN that week: a fully working WorldWideWeb emulator giving a reasonably close experience of what it was like to surf the web 30 years ago.
This is entirely in the browser and was written using:
- React Draggable for the windows and menus,
- React Hotkeys for keyboard combo shortcuts (we replicated the original OS as much as we could),
- idb-keyval for some local storage,
- Parcel for bundling.
These tools weren’t chosen particularly because they were the best tools for the job, but because they were the tools I knew well enough to speed up my development process.
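For example, the document-saving feature only needs a couple of calls to idb-keyval (the key and value here are hypothetical):

```javascript
import { get, set } from 'idb-keyval';

// Persist an edited document between sessions (hypothetical example).
const documentHtml = '<h1>Welcome, Fronteers</h1>';
await set('fronteers-page', documentHtml);

// ...after a hard reboot of the browser, days later:
const saved = await get('fronteers-page');
```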
We worked hard to replicate the look and feel as much as we could. We even replicated typos found throughout the WorldWideWeb app:
An excercise in global information availability
(Which Jeremy was blaming me for.)
Why don’t we see how it looks…
This is what you see when you visit the WorldWideWeb browser for the first time. We can see we are welcomed by the universe of hypertext. We’ve got these menus over here that you can drag off and open panels (I always thought this was an ordering bug but the operating system actually works like this).
We’ll go ahead and open the Fronteers website. I go to “Document” and then I go to “Open from full document reference” (because the word URL didn’t exist). I’m going to pop the Fronteers URL in here. And there it is. We’ve got the Fronteers website. Looks pretty good. (One of my favourite UI bits is this scrollbar on the left hand side instead of the right.)
We can follow the links. Actually one of my favourite features that was in this original browser that we replicated was this “Navigate” menu. I’ve just opened the first link in the document, but I can click on “Next”, and “Next” a bunch of times and it will cycle through each one of the links on the page that I launched from and let me read all the pages that the Fronteers site links to (which I really like). I can go backwards and forwards, and so on.
One thing you might have already noticed is that there are no URLs here. In fact, view source was considered a kind of diagnostic option, and it was very, very tucked away. The reason for this is that URLs—and the source HTML or SGML—were considered ugly and potentially a bad user experience.
But there’s one thing about navigating here that’s different. To open this link, I had to double-click.
The WorldWideWeb browser was more of a prototype than anything else. It demonstrated the potential of the World Wide Web project, but it only worked on NeXT machines.
To show how the World Wide Web could work on any computer, the second ever web browser was the Line Mode Browser, coded by Nicola Pellow. It had a very basic text interface—no clicking on links—but it could be installed anywhere.
Lots of other geeks and nerds were working on their own web browsers, but it was Marc Andreessen’s Mosaic browser that really blew the doors open for the web. It had a nice usable interface, and it (unilaterally) introduced the innovation of images on the web.
Andreessen went on to found Netscape. The World Wide Web took off at an unprecedented rate. Microsoft brought out their Internet Explorer browser and started trying to catch up with Netscape. We had the browser wars. Later we got even more browsers, like Safari and Chrome, while Netscape morphed into Firefox and Internet Explorer morphed into Edge. And the rest is history.
But all of these browsers were missing something that was in the original WorldWideWeb browser.
The reason I have to double-click on these links is that, when I do a single click, it actually places the cursor. The cursor is blinking there on “Fronteers.” And the reason I can place the cursor is because I can edit the document.
I see Fronteers here is missing a heading. We want to welcome you all:
We want to make that a heading. Let’s style that. It’s a heading.
So the browser was meant to edit documents. Let’s put a bit of text here:
Great talks from Remy and Jeremy
(forget about everyone else). Now if I want to create a link, I’ll go ahead and navigate to Jeremy’s site, https://adactio.com. I’m going to do “Link”, then “Mark all”, which is a way of copying the URL to that window. Then I go back to the Fronteers website, select “Jeremy”, and then do “Link to marked”. I can double-click on Jeremy’s name and it will open up his website.
I can save this document as well. I’m going to call it
Let’s do a hard reboot—a browser refresh. I come back to my machine a couple of days later: “Ah, the Fronteers page!” I’m going to open that again, and it linked to that really handsome guy in the stripy shirt. And yes, the links still work.
In fact, this documentation that you see when the WorldWideWeb browser launches was written, styled, and linked using the WorldWideWeb browser. The WorldWideWeb browser was for a web that you could read and write.
But this didn’t survive. It was a hurdle that was too tricky to propose or implement across the different types of servers that existed, and for the upcoming browsers that were on the horizon.
And so it wasn’t standardised and doesn’t exist today.
But this is an important lesson from the time: reducing complexity increases the chances of mass adoption.
In the end, simplicity wins.
I think that’s a pattern we see over and over again, not just in the history of the web, but before the web. Simplicity wins.
Ted Nelson famously thinks, to this day, that the World Wide Web is weak sauce. It didn’t try to solve complex problems right out of the gate, like handling micro-payments.
As we saw, the hypertext community thought that one-way linking was ridiculous. But simplicity does win out.
Unfortunately that’s why browsers ended up just being browsers. We got some of the functionality back with wikis, content management systems, and social media to a certain extent. But I think it’s still a bit of a shame that when I want to browse a web page, I’m using one piece of software—the browser—but when I want to make a web page, I’m using another piece of software (or multiple pieces of software) to get something on to the web.
I feel like we lost something.
We head home after a week of hacking.
We were all invited back in March earlier this year for the Web@30 event that was taking place to celebrate the web but also Sir Tim Berners-Lee.
A few of us (Jeremy, Martin, and myself) went back to CERN for the first leg of the event. There was even a video showing off our work as part of the main conference. Jeremy and I even chased Tim Berners-Lee back to London, to the Science Museum, like obsessive web fanboys. It was a lot of fun!
The night before, I got a message from Jean-François Groff, pictured here on the right. JF Groff joined Tim Berners-Lee 30 years ago and created libwww (a precursor to …). The message read:
Sitting with Tim right now. He loves your browser!
It’s amazing that we were able to pull this off in a week, just with text editors and information that’s freely available. It’s mind-boggling how much we can do today and how far it can reach. And it all started on that NeXTSTEP machine 30 years ago.
What I really loved about this project was working with this brilliantly old technology, digging around at the birth of browsers and the web.
I wouldn’t be stood here today, if it weren’t for the web.
I wouldn’t even know Jeremy, if it weren’t for the web.
I wouldn’t have a career, if it weren’t for the web.
I loved seeing how such old technology, the original WorldWideWeb browser, was still able to render my blog. Because I put content first and deliver markup from the server, the page rendered: HTML really is backwards compatible.
HTML and HTTP are just text. Nothing terribly fancy. Dare I say, beautifully simple, and as we said before, simplicity wins the day.
This same simplicity is what allows us all to have the chance for an equal voice. The web allows us to freely publish our thoughts and experiences. We have to fight to protect that kind of web.
And we’ve got to work at keeping it simple.
When we returned to CERN for the 30th anniversary celebrations, one of the other people there was the journalist Zeynep Tufekci.
She was on a panel along with Tim Berners-Lee, Robert Cailliau, Jean-François Groff, and Lou Montulli. At the end of the panel discussion, she was asked:
What would you tell the next generation about how to use this wonderful tool?
If you have something wonderful, if you do not defend it, you will lose it.
If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.
Defend the simplicity and resilience that’s so central to the web.
I don’t know about you, but I often feel that just trying to make a web page has become far too complicated. But this is complexity that we have chosen with our tools, processes, and assumptions. We’ve buried the magic. The magic of linking web pages together. The magic of a working global hypertext system, where nobody needs to ask for permission to publish.
Tim Berners-Lee prototyped the first web browser, but the subsequent world wide web wasn’t created by any one person. It was created by everyone. That. Is. Magical.
I don’t want the web to become a place where only an elite priesthood get to experience the magic of creation. I’m going to fight to defend the openness of the world wide web. This is for everyone. Not just for everyone to use; it’s for everyone to create.
Tuesday, October 15th, 2019
Here’s the talk that Remy and I gave at Fronteers in Amsterdam, all about our hack week at CERN. We’re both really pleased with how this turned out and we’d love to give it again!
Monday, October 14th, 2019
I saw Nicholas give this great talk at Paris Web on site deaths, the indie web, and publishing on your own site. That talk was in French, but these slides are (mostly) in English—I was able to follow along surprisingly easily!
Tuesday, October 8th, 2019
The transcript of a fantastic talk by Stuart. The latter half is a demo of Portals, but in the early part of the talk, he absolutely nails the rise in popularity of complex front-end frameworks:
I think the reason people started inventing client-side frameworks is this: that you lose control when you load another page. You click on a link, you say to the browser: navigate to here. And that’s it; it’s now out of your hands and in the browser’s hands. And then the browser gives you back control when the new page loads.
Wednesday, September 18th, 2019
The transcript of Andy’s talk from this year’s State Of The Browser conference.
I don’t think using scale as an excuse for over-engineering stuff—especially CSS—is acceptable, even for huge teams that work on huge products.
Tuesday, September 17th, 2019
Geneva Copenhagen Amsterdam
Back in the late 2000s, I used to go to Copenhagen every year for an event called Reboot. It was a fun, eclectic mix of talks and discussions, but alas, the last one was over a decade ago.
It was organised by Thomas Madsen-Mygdal. I hadn’t seen Thomas in years, but then, earlier this year, our paths crossed when I was back at CERN for the 30th anniversary of the web. He got a real kick out of the browser recreation project I was part of.
A few months ago, I got an email from Thomas about the new event he’s running in Copenhagen called Techfestival. He was wondering if there was some way of making the WorldWideWeb project part of the event. We ended up settling on having a stand—a modern computer running a modern web browser running a recreation of the first ever web browser from almost three decades ago.
So I showed up at Techfestival and found that the computer had been set up in a Shoreditchian shipping container. I wasn’t exactly sure what I was supposed to do, so I just hung around nearby until someone wandering by would pause and start tentatively approaching the stand.
“Would you like to try the time machine?” I asked. Nobody refused the offer. I explained that they were looking at a recreation of the world’s first web browser, and then showed them how they could enter a URL to see how the oldest web browser would render a modern website.
Lots of people entered google.com, but some people had their own websites, either personal or for their business. They enjoyed seeing how well (or not) their pages held up. They’d take photos of the screen.
People asked lots of questions, which I really enjoyed answering. After a while, I was able to spot the themes that came up frequently. Some people were confusing the origin story of the internet with the origin story of the web, so I was more than happy to go into detail on either or both.
The experience helped me clarify in my own mind what was exciting and interesting about the birth of the web—how much has changed, and how much has stayed the same.
All of this is very useful fodder for a conference talk I’m putting together. This will be a joint talk with Remy at the Fronteers conference in Amsterdam in a couple of weeks. We’re calling the talk How We Built the World Wide Web in Five Days:
The World Wide Web turned 30 years old this year. To mark the occasion, a motley group of web nerds gathered at CERN, the birthplace of the web, to build a time machine. The first ever web browser was, confusingly, called WorldWideWeb. What if we could recreate the experience of using it …but within a modern browser! Join (Je)Remy on a journey through time and space and code as they excavate the foundations of Tim Berners-Lee’s gloriously ambitious and hacky hypertext system that went on to conquer the world.
Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.
We’ve been honing the material and doing some run-throughs at the Clearleft HQ at 68 Middle Street this week. The talk has a somewhat unusual structure with two converging timelines. I think it’s going to work really well, but I won’t know until we actually deliver the talk in Amsterdam. I’m excited—and a bit nervous—about it.
Whether it’s in a shipping container in Copenhagen or on a stage in Amsterdam, I’m starting to realise just how much I enjoy talking about web history.
Wednesday, September 11th, 2019
The video of a talk in which Mark discusses pace layers, dogs, and design systems. He concludes:
- Current design systems thinking limits free, playful expression.
- Design systems uncover organisational dysfunction.
- Continual design improvement and delivery is a lie.
- Component-focussed design is siloed thinking.
It’s true many design systems are the blueprints for manufacturing and large scale application. But in almost every instance I can think of, once you move from design to manufacturing the horse has bolted. It’s very difficult to move back into design because the results of the system are in the wild. The more strict the system, the less able you are to change it. That’s why broad principles, just enough governance, and directional examples are far superior to locked-down cookie cutters.
Tuesday, August 27th, 2019
The Weight of the WWWorld is Up to Us by Patty Toland
A few years ago, a good friend of Patty’s had a medical diagnosis that required everyone to pull together. Another friend shared an article about how not to say the wrong thing. This is ring theory. In a moment of crisis, the person involved is in the centre. You need to understand where you are in this ring structure, and only ever help and comfort inwards and dump concerns and problems outwards.
At the same time, Patty spent time with her family at the beach. Everyone read the same books together. There was a book about a platoon leader in Vietnam. 80% of the story was literally a litany of stuff—what everyone was carrying. This was peppered with the psychic and emotional loads that they were carrying.
There was a common assertion that slow networks were a third-world challenge. Remember Facebook’s network challenges? They always talked about new markets in India and Africa. The implication is that this isn’t our problem in, say, Omaha or New York.
Pew Research provided updated data this year. The research shows an increase in those trends. Half of the population access the web primarily on mobile. The cost of a broadband subscription is too expensive for many people. Sometimes broadband access simply isn’t available.
There’s a term called “the homework gap.” Two thirds of teachers assign broadband-dependent homework, while one third of students have no access to broadband.
At most 37% of people have unlimited data. Most people run out of data on a frequent basis.
Speed also varies wildly. 4G doesn’t really mean anything. The data is all over the place.
This shows that network issues are definitely not just a third world challenge.
On the 25th anniversary of the web, Tim Berners-Lee said the web’s potential was only just beginning to be glimpsed. Everyone has a role to play to ensure that the web serves all of humanity. In his contract for the web, Tim outlined what governments, companies, and users need to do. This reminded Patty of ring theory. The user is at the centre. Designers and developers are in the next circle out. Then there’s the circle of companies. Then there are platforms, browsers, and frameworks. Finally there’s the outer circle of governments.
There’s no way for a user to know before clicking a link how big and bloated the page is going to be. Even if they abandon the page load, they’ve still used (and wasted) a lot of data.
Third party scripts—like ads—are really bad at dumping in (to use the ring theory model). The best practices for ads suggest that up to 100 additional HTTP requests is totally acceptable. Unbelievable! It doesn’t matter how performant you’ve made a site when this crap gets piled on top of it.
In 2018, the internet’s data centres alone may already have had the same carbon footprint as all global air travel. This will probably triple in the next seven years. The amount of carbon it takes to train a single AI algorithm is more than the entire life cycle of a car. Then there’s fucking Bitcoin. A single Bitcoin transaction could power 21 US households. It is designed to use—specifically, waste—more and more energy over time.
What should we be doing?
Accessibility should be at the heart of what we build. Plan, test, educate, and advocate. If advocacy doesn’t work, fear can be a motivator. There’s an increase in accessibility lawsuits.
Our websites should be as light as possible. Ask, measure, monitor, and optimise. RequestMap is a great tool for visualising requests. You can see the size and scale of third-party requests. You can also see when images are far, far bigger than they need to be.
Take a critical eye to everything and pare everything down. Set performance budgets—file size budgets, for example. Optimise images, subset custom fonts, lazyload images and videos, get third-party tools out of the critical path (or out completely), and seek out lighter frameworks.
Push the boundaries. See the amazing work that Adrian Holovaty did with Soundslice. He had to make on-the-fly sheet music generation work on old iPads that musicians like to use. He recommends keeping old devices around to see how poorly your product works on them.
If you have some power, then your job is to empower somebody else.
Web Forms: Now You See Them, Now You Don’t! by Jason Grigsby
Jason is on stage at An Event Apart Chicago in a tuxedo. He wants to talk about how we can make web forms magical. Oh, I see. That explains the get-up.
We’re always being told to make web forms shorter. Luke Wroblewski has highlighted the work of companies that have reduced form fields and increased conversion.
But what if we could get rid of forms altogether? Wouldn’t that be magical!
Jason will reveal the secrets to this magic. But first—a volunteer from the audience, please! Please welcome Joe to the stage.
Joe will now log in on a phone. He types in the username. Then the password. The password is a hodge-podge of special characters, numbers, and upper- and lowercase letters. Joe starts typing. Jason takes the phone and logs in without typing anything!
The secret: Jason was holding an NFC security key in his hand. That works with a new web standard called WebAuthn.
Passwords are terrible. People share them across sites, but who can blame them? It’s hard to remember lots of passwords. The only people who love usernames and passwords are hackers. So sites are developing other methods to try to keep people secure. Two factor authentication helps, although it doesn’t help us with phishing attacks. The hacker gets the password from the phished user …and then gets the one-time code from the phished user too.
But a physical device like a security key solves this problem. So why aren’t we all using security keys (apart from the fear of losing the key)? Well, until WebAuthn, there wasn’t a way for websites to use the keys.
A web server generates a challenge—a long string—that gets sent to a website and passed along to the user. The user’s device generates a credential ID and public and private keys for that domain. The web site stores the public key and credential ID. From then on, the credential ID is used by the website in challenges to users logging in.
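In the browser, that registration step boils down to a single API call. A minimal sketch (the challenge and user details here are placeholders that would really come from your server):

```javascript
// A sketch of WebAuthn registration in the browser.
// All values shown are hypothetical placeholders.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: Uint8Array.from('server-generated-challenge', c => c.charCodeAt(0)),
    rp: { name: 'Example Site', id: 'example.com' },
    user: {
      id: Uint8Array.from('user-id-from-server', c => c.charCodeAt(0)),
      name: 'joe@example.com',
      displayName: 'Joe',
    },
    pubKeyCredParams: [{ type: 'public-key', alg: -7 }], // -7 = ES256
  },
});
// Send credential.rawId and credential.response to the server, which
// stores the credential ID and public key for future log-ins.
```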
There were three common ways that we historically proved who we claimed to be.
- Something you know (e.g. a password).
- Something you have (e.g. a security key).
- Something you are (e.g. biometric information).
These are factors of identification. So two-factor authentication is the combination of any two of those. If you use a security key combined with a fingerprint scanner, there’s no need for passwords.
The browser support for the web authentication API (WebAuthn) is a bit patchy right now but you can start playing around with it.
There are a few other options for making logging in faster. There’s the Credential Management API. It allows someone to access passwords stored in their browser’s password manager. But even though it’s newer, there’s actually better browser support for WebAuthn than Credential Management.
Then there’s federated login, or social login. Jason has concerns about handing over log-in to a company like Facebook, Twitter, or Google, but then again, it means fewer passwords. As a site owner, there’s actually a lot of value in not storing log-in information—you won’t be accountable for data breaches. The problem is that you’ve got to decide which providers you’re going to support.
Also keep third-party password managers in mind. These tools—like 1Password—are great. In iOS they’re now nicely integrated at the operating system level, meaning Safari can use them. Finally it’s possible to log in to websites easily on a phone …until you encounter a website that prevents you logging in this way. Some websites get far too clever about detecting autofilled passwords.
Time for another volunteer from the audience. This is Tyler. Tyler will help Jason with a simple checkout form. Shipping information, credit card information, and so on. Jason will fill out this form blindfolded. Tyler will first verify that the dark goggles that Jason will be wearing don’t allow him to see the phone screen. Jason will put the goggles on and Tyler will hand him the phone with the checkout screen open.
Jason dons the goggles. Tyler hands him the phone. Jason does something. The form is filled in and submitted!
What was the secret? The goggles prevented Jason from seeing the phone …but they didn’t prevent the screen from seeing Jason. The goggles block everything but infrared. The iPhone uses infrared for Face ID. To the iPhone, it just looked like Jason was wearing funky sunglasses. Face ID then triggered the Payment Request API.
The Payment Request API allows us to use various payment methods that are built in to the operating system, but without having to make separate implementations for each payment method. The site calls the Payment Request API if it’s supported (using feature detection and progressive enhancement), then triggers the payment UI in the browser. The browser—not the website!—then makes a call to the payment processing provider, e.g. Stripe.
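A minimal sketch of how a site might call it (the payment method, amounts, and fallback function are placeholders; always feature-detect first):

```javascript
// A sketch of the Payment Request API with placeholder values.
async function checkout() {
  if (!window.PaymentRequest) {
    return showCheckoutForm(); // hypothetical fallback to the regular form
  }
  const request = new PaymentRequest(
    [{ supportedMethods: 'basic-card' }], // example payment method
    {
      total: {
        label: 'Order total',
        amount: { currency: 'USD', value: '19.99' },
      },
    }
  );
  const response = await request.show(); // the browser shows its own payment UI
  // ...send response.details to your payment processor, then:
  await response.complete('success');
}
```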
E-commerce sites using the Payment Request API have seen a big drop in abandonment and a big increase in completed payments. The browser support is pretty good, especially on mobile. And remember, you can use it as a progressive enhancement. It’s kind of weird that we don’t encounter it more often—it’s been around for a few years now.
Jason read the fine print for Apple Pay, Google Pay, Microsoft Pay, and Samsung Pay. It doesn’t look like there’s anything onerous in there that would stop you using them.
On some phones, you can now scan credit cards using the camera. This is built in to the operating system, so as a site owner, you’ve just got to make sure not to break it. It’s really an extension of autofill. You should know what values the autocomplete attribute can take. There are 48 different values; it’s not just for checkouts. When users use autofill, they fill out forms 30% faster. So make sure you don’t put obstacles in the way of autofill in your forms.
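A few of those values in action (an illustrative snippet, not a complete checkout form):

```html
<!-- Autofill-friendly checkout fields. The autocomplete values shown
     here are a small sample of the 48 available. -->
<input type="text"  name="name"   autocomplete="cc-name">
<input type="text"  name="card"   autocomplete="cc-number" inputmode="numeric">
<input type="text"  name="expiry" autocomplete="cc-exp">
<input type="email" name="email"  autocomplete="email">
```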
Jason proceeds to relate a long and involved story about buying burritos online from Chipotle. The upshot is: use the pattern attribute correctly on input elements. Test autofill with your forms. Make it part of your QA process.
So, to summarise, here’s how you make your forms disappear:
- Start by reducing the number of form fields.
- Use the correct HTML to support autofill. Support password managers and password-pasting. At least don’t break that behaviour.
- Provide alternate ways of logging in. Federated login or the Credentials API.
- Test autofill and other form features.
- Look for opportunities to replace forms entirely with biometrics.
Any sufficiently advanced technology is indistinguishable from magic.
—Arthur C. Clarke’s Third Law
Don’t our users deserve magical experiences?
Voice User Interface Design by Cheryl Platz
Why make a voice interface?
Successful voice interfaces aren’t necessarily solving new problems. They’re used to solve problems that other devices have already solved. Think about kitchen timers. There are lots of ways to set a timer. Your oven might have one. Your phone has one. Why use a $200 device to solve this mundane problem? Same goes for listening to music, news, and weather.
People are using voice interfaces to solve ordinary problems. Why? Context matters. If you’re carrying a toddler, then setting a kitchen timer can be tricky, so a voice-activated timer is quite appealing. But why is voice happening now?
Humans have been developing the art of conversation for thousands of years. It’s one of the first skills we learn. It’s deeply instinctual. Most humans use speech instinctively every day. You can’t necessarily say that about using a keyboard or a mouse.
Voice-based user interfaces are not new. Not just the idea—which we’ve seen in Star Trek—but the actual implementation. Bell Labs had Audrey back in 1952. It recognised ten words—the digits zero through nine. Why did it take so long to get to Alexa?
In the late 70s, DARPA issued a challenge to create a voice-activated system. Carnegie Mellon came up with Harpy (with a thousand-word grammar). But none of the solutions could respond in real time. In conversation, we expect a break of no more than 200 or 300 milliseconds.
In the 1980s, computing power couldn’t keep up with voice technology, so progress kind of stopped. Time passed. Things finally started to catch up in the 90s with things like Dragon NaturallySpeaking. But that was still about vocabulary, not grammar. By the 2000s, small grammars were starting to show up—starting an Xbox or pausing Netflix. In 2008, Google Voice Search arrived on the iPhone and natural language interaction began to arrive.
What makes natural language interactions so special? It requires minimal training because it uses the conversational muscles we’ve been working for a lifetime. It unlocks the ability to have more forgiving, less robotic conversations with devices. There might be ten different ways to set a timer.
Natural language interactions can also free us from “screen magnetism”—that tendency to stay on a device even when our original task is complete. Voice also enables fast and forgiving searches of huge catalogues without time spent typing or browsing. You can pick a needle straight out of a haystack.
Natural language interactions are excellent for older customers. These interfaces don’t intimidate people without dexterity, vision, or digital experience. Voice input often leads to more inclusive experiences. Many customers with visual or physical disabilities can’t use traditional graphical interfaces. Voice experiences throw open the door of opportunity for some people. However, voice experience can exclude people with speech difficulties.
Making the case for voice interfaces
There’s a misconception that you need to work at Amazon, Google, or Apple to work on a voice interface, or at least that you need to have a big product team. But Cheryl was able to make her first Alexa “skill” in a week. If you’re a web developer, you’re good to go. Your voice “interaction model” is just JSON.
How do you get your product team on board? Find the customers (and situations) you might have excluded with traditional input. Tell the stories of people whose hands are full, or who are vision impaired. You can also point to the adoption rate numbers for smart speakers.
You’ll need to show your scenario in context. Otherwise people will ask, “why can’t we just build an app for this?” Conduct research to demonstrate the appeal of a voice interface. Storyboarding is very useful for visualising the context of use and highlighting existing pain points.
Getting started with voice interfaces
You’ve got to understand how the technology works in order to adapt to how it fails. Here are a few basic concepts.
Utterance. A word, phrase, or sentence spoken by a customer. This is the true form of what the customer provides.
Intent. This is the meaning behind a customer’s request. This is an important distinction because one intent could have thousands of different utterances.
Prompt. The text of a system response that will be provided to a customer. The audio version of a prompt, if needed, is generated separately using text to speech.
Grammar. A finite set of expected utterances. It’s a list. Usually, each entry in a grammar is paired with an intent. Many interfaces start out as being simple grammars before moving on to a machine-learning model later once the concept has been proven.
Here’s the general idea with “artificial intelligence”…
There’s a human with a core intent to do something in the real world, like knowing when the cookies in the oven are done. This is translated into an utterance like, “set a 15 minute timer.” That utterance is translated into a string, but it hasn’t yet been parsed as language. That string is passed into a natural language understanding system. What comes out is a data structure that represents the customer’s goal, e.g. intent=timer; duration=15 minutes. That’s sent to the business logic, where a timer is actually set. For a good voice interface, you also want to send back a response, e.g. “setting timer for 15 minutes starting now.”
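That data structure might look something like this (the field names are illustrative):

```json
{
  "utterance": "set a 15 minute timer",
  "intent": "SetTimer",
  "slots": { "duration": "PT15M" },
  "confidence": 0.92
}
```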
That seems simple enough, right? What’s so hard about designing for voice?
Natural language interfaces are a form of artificial intelligence, so they’re not deterministic. There’s a lot of ruling out of false positives. Unlike graphical interfaces, voice interfaces are driven by probability.
How do you turn a sound wave into an understandable instruction? It’s a lot like teaching a child. You feed a lot of data into a statistical model. That’s how machine learning works. It’s a probability game. That’s where it gets interesting for design—given a bunch of possible options, we need to use context to zero in on the most correct choice. This is where confidence ratings come in: the system will return the probability that a response is correct. Effectively, the system is telling you how sure or not it is about possible results. If the customer makes a request in an unusual or unexpected way, our system is likely to guess incorrectly. That’s because the system is being given something new.
Designing a conversation is relatively straightforward. But 80% of your voice design time will be spent designing for what happens when things go wrong. In voice recognition, edge cases are front and centre.
Here’s another challenge. Interaction with most voice interfaces is part conversation, part performance. Most interactions are not private.
Humans don’t distinguish digital speech from human speech. That means these devices are intrinsically social. Our brains are wired to try to extract social information, even from digital speech. See, for example, why it’s such a big question as to what gender a voice interface has.
Delivering a voice interface
Storyboards help depict the context of use. Sample dialogues are your new wireframes. These are little scripts that cover not only the happy path but also your edge cases. Then you reverse-engineer from there.
Flow diagrams communicate customer states, but don’t use the actual text in them.
Prompt lists are your final deliverable.
Functional prototypes are really important for voice interfaces. You’ll learn the real way that customers will ask for things.
If you build a working prototype, you’ll be building two things: a natural language interaction model (often a JSON file) and custom business logic (in a programming language).
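As a rough idea of scale, an Alexa-style interaction model is a JSON file along these lines (the invocation name, intent name, and sample utterances are made up):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "kitchen timer",
      "intents": [
        {
          "name": "SetTimerIntent",
          "slots": [{ "name": "duration", "type": "AMAZON.DURATION" }],
          "samples": [
            "set a timer for {duration}",
            "start a {duration} timer"
          ]
        }
      ]
    }
  }
}
```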
Eventually voice design will become a core competency, much like mobile, which was once separate.
Ask yourself what tasks your customers complete on your site that feel clunky. Remember that voice design is almost never about new scenarios. Start your journey into voice interfaces by tackling old problems in new, more inclusive ways.
May the voice be with you!