Tags: worldwide

Monday, July 18th, 2022

Fundamentals matter | Go Make Things

I really enjoyed Laurie’s talk in Berlin a few weeks back. I must blog my thoughts on it.

But I must admit that something didn’t sit quite right about the mocking tone he took on the matter of “the fundamentals” (whatever that may mean). Chris shares my misgivings:

Those websites that don’t load on slow connections, or break completely when a JS file fails to load, or don’t work for people with visual or physical impairments?

That’s not an issue of time. It’s an issue of fundamentals.

I think I agree with Laurie that there’s basically no such thing as fundamental technologies (and if there is such a thing, the goalposts are constantly moving). But I agree with Chris that there is such a thing as fundamental concepts. On the web, for example, accessibility is a core principle of its design that should, in my opinion, be fundamental.

This, basically:

Do I wanna see teenagers building frivolous websites? Absolutely. But when people are getting paid well to build our digital world, they have a responsibility to ensure the right to engage with that world for everyone.

Wednesday, June 22nd, 2022

In And Out Of Style

A presentation from An Event Apart Spring Summit held online in April 2022 and the opening presentation at CSS Day held in Amsterdam in June 2022.

Hello, my name is Jeremy and I am speaking to you today from Brighton on the south coast of England.

I want to tell you about something that happened here in Brighton back in 1985 (pretty sure it took place in one of those buildings along the seafront there). In 1985 Brighton was host to the International Information Theory Symposium. Fascinating.

Something exciting happened there. Word began to go around that there was an unexpected guest attending the event. This unexpected guest was this man, Claude Shannon. The way it was described later, it was as if Newton had shown up at a physics conference.

He wasn’t even meant to be there, but he was convinced at the dinner after the event to get up and say a few words. And he did, he got up and he started to talk, but he felt like he was losing the audience. So he proceeded to do some juggling.

That’s so Claude Shannon. He was very much into games. He took games very seriously. He was a very playful kind of person.

For example, he invented this machine, which is called the most beautiful machine, also the most useless machine. But I think it’s just wonderful. I mean, it’s like the perfect encapsulation of cybernetics, the ideal feedback loop.

But the reason why people were excited that Claude Shannon was at this event wasn’t because of the most beautiful machine. And it wasn’t because of his juggling. It was because of information theory.

Because Claude Shannon, it’s not like he just revolutionized the field of information theory; Claude Shannon pretty much invented the field of information theory in one fell swoop, in a paper published in 1948 called A Mathematical Theory of Communication.

Here’s the TLDR. This is the mathematical part. I won’t go into the details of the mathematical part, but what I recall from Claude Shannon’s work is that he was able to effectively boil information down into fundamental particles: the idea that there’s a single bit of information.

There’s this idea of entropy: the idea that for information to travel between the communicator and the receiver, you’ve got the signal that you’re trying to transmit, but there’s also noise. And this noise is unavoidable.

And like I said, this idea of the bit; that any piece of information could be reduced down to a fundamental particle: a one or a zero; on or off, which of course is exactly how computers work. So it’s no exaggeration to say that Claude Shannon is like the father of the digital age.

And one other thing I take from Claude Shannon’s work is that when it comes to communication of information, context matters. In other words, that the expectation between the receiver and the communicator can make a lot of difference.

Shared context

So to give you an example of shared context being very important in information communication, I want to illustrate it with a story from the pre-digital age. This is a story from the age of the electrical telegraph.

Now this story is probably completely apocryphal. In some versions of the story, it involves the novelist Victor Hugo. In other versions, it’s Oscar Wilde. But the point is there’s an author. He’s just published a book and now he’s gone off on holiday after writing the book. But while he’s on holiday, he’s really curious to know, how is the book doing? What are the sales like?

So he sends a telegram to his publisher, but because there’s enough shared context between the publisher and the author, all he sends is a single character. A question mark.

?

And then, because there’s this shared context between the publisher and the author and the publisher wants to let the author know that actually sales are going really, really well, the publisher also sends back a telegram with a single character. An exclamation mark.

!

So this is a classic example of the importance of context. I mean, you’re just sending a single character and yet both parties understand the message being conveyed.

Context matters. Shared context matters.

Now I want to try an experiment with you to test how much shared context there is between me (I’m going to try to transmit a message) and you (the receiver of the message).

So we’re starting with a blank slate. And now I’m going to provide one piece of information. Okay. Here’s the piece of information. Probably doesn’t tell you much. A diagonal line. There’s not enough context here.

All right. Back to the blank slate. I’m now going to provide another piece of information.

Okay. Again, in isolation, this probably doesn’t tell you much just another diagonal line. But if I combine it with the first bit of information, then now maybe it starts to become something you can parse. And if I provide just one more bit of information, now maybe it clicks into place that the piece of information I’m trying to convey is ten minutes past ten.

And yet all I’ve done here is I’ve provided you with two diagonal lines in a circle. Yet somehow two diagonal lines in a circle, when we have the shared context of how to read an analogue clock face, is enough to communicate ten minutes past ten.

(The time, by the way, is completely arbitrary. The only reason I chose that time is just that if you ever look at an advertisement for a watch, it’ll usually be ten past ten because the angles of the hands on the watch nicely frame the logo of the watchmaker.)

But anyway, the point is: with enough shared context, two diagonal lines in a circle are enough for me to communicate the piece of information, “ten minutes past ten o’clock.”

I mean, maybe you’re a digital native born in the 21st century, in which case you’re looking at this and thinking, “I just see two diagonal lines in a circle”, but if you can read an analogue clock face, then we have that shared context.

Where did this context come from? Why is it that clock faces are set up the way they are? Why do clocks go clockwise? It seems like a fairly arbitrary decision.

It is somewhat arbitrary, but one neat explanation is that clocks go in a clockwise direction because that’s the way that a shadow on a sundial would travel …in the Northern hemisphere.

Now if you look at a sundial in the Southern hemisphere, like this one here—this is in Wellington, in New Zealand—the shadow would actually go in a counterclockwise direction.

A sundial in the Southern hemisphere.

So really it’s almost an accident of history that we have clocks that go clockwise. If clocks had been invented in the Southern hemisphere, then they would go in the other direction. It’s pretty arbitrary, but now we’ve decided, we’ve kind of settled on this arbitrary movement of clocks that they go clockwise and we’re stuck with it.

Because inertia is a very powerful force. If you tried to change the way the clocks work you’d have your work cut out for you, even if the reason why clocks work the way they do is arbitrary to begin with.

Inertia

You know, a very wise person once said the most dangerous phrase in the English language is “We’ve always done it that way.” And that very wise person was the brilliant computer scientist, rear admiral Grace Hopper.

Grace Hopper

She used to say:

Humans are allergic to change.

She said:

I try to fight that. And that’s why I have a clock on my wall that runs counterclockwise.

Right? It kind of drives home this idea that, hey, this is an arbitrary decision.

And it’s kind of weird for us to look at a clock that runs counterclockwise. I actually managed to find a watch a few years ago that worked like this, that ran counterclockwise. And I wore it for a while and I was able to train my brain to read the clock this way. And it worked fine, but it completely broke my brain for reading normal clocks. So I kind of had to just stop doing it.

But I’m fascinated by these examples of fairly arbitrary decisions made sometime in the past that you’re then stuck with, because the inertia is very hard to overcome. But only recently did I find out that there’s a term for this phenomenon. It’s called path dependence.

Path dependence

History is full of path dependence. The classic example is if you wanted to make a new train or a new stretch of railway track, you’re gonna have to use the existing gauge of the railway in question. Now it’s not that there’s one gauge of railway that’s better or worse than any other gauge, but if someone’s made that decision in the past, it’s very hard to change. And you really do want to settle on one gauge so that you don’t have to switch trains when you move between different parts of a country (this actually happened down in Australia, where they had different gauges for the railways, it was kind of a mess).

It’s the canonical example of why you need standards. But really the point of standards isn’t necessarily to enshrine the best way of doing something. The point of standards is to enshrine the agreement. “Hey, let’s all agree to do things this way.”

Whether the standard is good, bad, better or worse than other ways of doing things is in some ways less important than the agreement. You just need to have everyone agree on something.

Like, there’s a standard for which side of the road you drive on in your country. And it doesn’t really matter whether the standard is for it to be the left-hand side of the road or the right-hand side of the road. But it really matters that you all agree on the same side of the road.

Agreement and standards bring us very nicely to the World Wide Web. Because I think the World Wide Web is a fantastic example of agreement.

The web is agreement

There’s a friend of mine, Paul Downey, who does these wonderful illustrations. Fantastic. He has this one called “the web is agreement.” Whenever I think about the word agreement, this is what I think of: that the web is agreement. And he does these kind of Hieronymus Bosch and Brueghel-like images of all the different formats and standards that we use on the web.

The Web Is Agreement by Paul Downey.

And if you think about what the World Wide Web is, this combination of HTTP and URLs and HTML. This was, you know, when the web was first created. Yeah, these are just agreements.

I mean, HTTP is a protocol and that word protocol literally means agreement. (Think about diplomatic protocols: they’re diplomatic agreements, right?)

URLs, “Hey, let’s all agree to use this addressing scheme.”

And HTML, “If we all agree to use this format, then we get interoperability.”

So these formats, these protocols, this set of standards or agreements came from Sir Tim Berners-Lee. This was back when he was working at CERN, the nuclear physics laboratory, in the late 1980s and early ’90s.

I’m somewhat fascinated by the birth of the web, which is why it was a huge honour and pleasure for me to be invited to CERN a few years back. This was in the run up to the 30th anniversary of the original proposal that Tim Berners-Lee submitted for what later became the World Wide Web. And we did this project and you can check out the project at worldwideweb30.com.

This wonderful group of people came together for a week to kind of hack on something. And what we were hacking on was this project to recreate the very first web browser …but one that you could run in a modern web browser. This is what it looked like.

The first web browser was, confusingly, also called WorldWideWeb. It was created by Tim Berners-Lee on his NeXT machine. And this was the first demonstration of those three things working together: HTTP, URLs and HTML.

Now I say we were working on this. I didn’t make this part. This was the really smart bit; much cleverer people were working on the smart bit. What I worked on was the website that accompanied the project.

And specifically I worked on creating a timeline.

Timeline

Remember this is coming up on the 30th anniversary of the web, and I thought it’d be fascinating to not just graph out the 30 years that the web has existed, but also look back at the 30 years before the web existed to see what were the influences that fed into the web.

What I was looking at here really was the path dependencies. What were the path dependencies in computing and networks, hypertext and formats that all fed into that creation of the World Wide Web?

So let’s take formats, for example. Tim Berners-Lee creates HTML along with URLs and HTTP.

And if I show you these elements, they should be quite familiar to you. You recognize what this language is, right? Yeah. Clearly this language is…

SGML. Standard Generalized Markup Language.

Specifically, it’s a flavor of SGML that was being used in CERN at the time. And Tim Berners-Lee thought rather than create his own format, he would use what people were already used to.

He kind of had the same insight that Grace Hopper had, that humans are allergic to change. But instead of trying to change that, he sort of went with the flow.

So he took SGML and basically copied it to make HTML, introducing one new element: the A element.

So, you know, there’s a path dependency, even in the name, right? You think Standard Generalized Markup Language. Oh right. And now we have HyperText Markup Language. So even the name HTML seems to have a path dependency to SGML.

But it goes deeper.

SGML was a specialized version of GML. And GML was supposed to stand for Generalized Markup Language. Except the people who created GML were named Goldfarb, Mosher, and Lorie, which is probably the real reason why it was named GML.

And later we got SGML and then we got HTML. So it turns out there’s a path dependency in the phrase “HTML” that goes back to three dudes wanting to get their names into a format many, many years ago.

Styling

What about styling when it came to the early web?

There’s no CSS at this point. But if you look at this first web browser—and this is the very first web page on the web, which is still available at its original URL—you can see that different types of elements are styled differently.

The links are styled differently than the text around them. The heading is styled differently to the body copy. This definition list has formatting going on. You can see the spacing there. So something is doing some styling.

Well, when we were at this hack week at CERN, we had access to the original source code for this project. We found this file in there.

And this is, I guess, the user agent style sheet. This is the bit that tells the first web browser how to style headings, how to style lists, how to style definitions.

It’s not very readable, but you can tell that, you know, there’s a lot of values here, not many property names, but if you squint at it just right, you can imagine, okay, this is some form of a style sheet.
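
If I were to translate what that file is doing into the CSS we’d write today, it might look something like this. To be clear, these rules and values are my guesses for illustration; they’re not the originals:

/* A modern-CSS sketch of a user agent style sheet: default styles
   for headings, lists and definitions. Values are illustrative. */
h1 {
  font-size: 2em;
  font-weight: bold;
}
dt {
  font-weight: bold;
}
dd {
  margin-left: 2.5em;
}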

It became clear though that it wasn’t enough to just allow the user agent to do the styling. Authors—that’s developers and designers—authors also needed a way to provide styling information.

Now for a while there, it looked like the way this was gonna happen was in HTML. People started adding these proprietary elements and attributes to HTML that were all about presentation, all about styling. And that’s not what HTML is for. HTML is about structure, about the semantics, the meaning of a document.

It was really important that there’d be some kind of separation of concerns, that you would use one format—HTML—for your structure, and that there should be a separate format, some other kind of language, for styling.

The question is, what should that other format be?

Well, pretty early on, some proposals started coming in on the early mailing lists to do with the World Wide Web. Pretty much everyone on the web in the first few months was making their own web browser. It was by nerds for nerds.

Rob Raisch

I think the first proposal for some kind of styling for authors came from Rob Raisch, who was at O’Reilly at the time.

He sent an email to the www-talk mailing list in June of 1993, with this proposal as a way of styling.

@BODY fo(fa=he,si=18)

Now, again, looking at this, it’s not CSS, but if you squint just right, you can sort of make sense of it. It’s kind of like looking at a clock running counterclockwise. It’s not what we’re used to, but you feel like you could parse it.

Clearly the priority here was to do with brevity. We’ve got these two character things like FO for font, FA for face, HE for Helvetica, SI for size. You put that all together and you can say the font face should be Helvetica and the font size should be 18 of whatever unit we’re talking about here.

So we’re just about able to parse it. And there is the concept here of some kind of selector, right? The way that we say we’re talking about the BODY, or talking about the paragraph, or talking about B or I.

Pei Wei

The next proposal that came along was by Pei Wei, who was building the Viola web browser. He sent an email to the www-talk mailing list in October of 1993. And he was able to put his entire style sheet in his proposal.

This is what it looks like. Kind of similar to what we saw before. We’ve got the idea of properties and values, but with equal signs rather than the colons that we’re used to now.

But what’s really interesting here is this idea of nesting. We’ve got nesting going on in this proposal which is something that we’re really only just getting in CSS.
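
For comparison, here’s a minimal sketch of the nesting syntax that’s now arriving natively in CSS. The selectors and values are arbitrary examples, not anything from Pei Wei’s proposal:

/* Native CSS nesting in modern browsers: the & refers to the parent selector. */
.card {
  padding: 1em;
  & h2 {
    font-size: 1.5em;
  }
  &:hover {
    background-color: lemonchiffon;
  }
}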

Håkon Wium Lie

Now, the next proposal came from Håkon Wium Lie. This was October of 1994 and he called his proposal Cascading HTML Style Sheets. And it looked like this.

h1.font.size = 24pt 100%

And again, you can kind of parse this, right? It’s not what we’re used to, but you squint at it and you can make sense of it. You can see the way that the selector and the properties are kind of scrunched together with this dot syntax. And again, there’s an equal sign rather than a colon, but we get it. It’s like, okay, the font size of an H1 element should be 24 points. Got it.

But wait, what’s this percentage after it, like this 100% or this 40%?

Influence

Well, this is a really interesting part of this proposal. This was this idea of influence. The idea that an author should be able to effectively say how much they care about a particular style being applied.

So if you really want that heading to be that size, you say 100%, I care about this. But if you only half care, you could say 50%.

And the idea was that users would also be providing styles and users would also specify how much they cared, how much influence they wanted to exert on the styles.

And then there’s kind of a bit of hand-wavy logic where it’s like, “And then the user agent figures out what the final style should be.”

And that last part, it turned out, was really hard to do. So this idea of influence somewhat fell by the wayside. But I think it’s very powerful and it definitely matched the ideas of Tim Berners-Lee with his first web browser, this idea that the web should be a read and write medium.

Because that first web browser—WorldWideWeb—wasn’t just a browser. It was also an editor.

The idea was you would open a document from the web, you’re looking at it and you think “I wanna make changes to this document. I’m gonna create my own copy, put it on my server and make the changes.”

Now it turned out that was really hard. And so that was one of the first things that got dropped from the World Wide Web, which is a bit of a shame because I think it is a very, very powerful idea, a very empowering idea.

We somewhat got back this idea of a read/write web with things like wikis and blogs and even social media to a certain extent.

And the idea that users should have influence over the styles of a website? Well, that survived in web browsers for quite a while, with this concept of user style sheets.

This is different to user agent style sheets. This meant that, literally in your browser, you could specify styles to override what an author had specified.

This got dropped from browsers over time because it turned out to be a real power-user feature. Most people weren’t using this. These days, if you want to apply styles as a user, you have to install a browser extension, some kind of plugin. Or your operating system takes your preferences at the operating system level and translates them for the browser.

I think it’s a real shame that we lost user style sheets. I thought it was a very empowering feature. I get it. It was, you know, somewhat of an edge case. It was a power-user feature, but I think it’s a shame we lost it.

I do see, however, a bit of a resurgence in the idea of giving users control over styling with some of the things we’re seeing in CSS, particularly in Media Queries Level 5: this idea of what are being called preference queries. You can say, you know, prefers-color-scheme (like dark mode), prefers-reduced-motion, prefers-reduced-data.

Now it’s a bit different because it’s still up to the author of the style sheet on that website to honour these preferences, right? You still have to write the styles to do the right thing to respect the colour scheme or reduced motion or reduced data.
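
In practice that looks something like this; a sketch, with arbitrary colour values:

/* Honouring two preference queries. The author still writes the styles. */
@media (prefers-color-scheme: dark) {
  body {
    background-color: #222;
    color: #eee;
  }
}
@media (prefers-reduced-motion: reduce) {
  * {
    animation: none;
    transition: none;
  }
}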

Though, you know, some browsers are looking into applying some of this stuff automatically: inverting colors and reducing motion.

And on the whole, I welcome the idea that users should have more of a say in how websites are styled. I think it’s a good thing.

So we’re seeing a bit of a resurgence of this idea of influence in modern CSS.

And speaking of modern CSS being somewhat foreshadowed in Håkon’s original proposal, here’s something else that was in that original proposal…

font.size = 12pt
h1.font.size = font.size * 3
h2.font.size = font.size * 2.5
h3.font.size = font.size * 2

If you look at this, you can kind of figure out what’s going on. That there’s kind of a declaration at the top to say there’s a variable, if you like, that’s 12 points. And then that variable is used throughout the style sheet. It’s multiplied by different numbers.

And really, we’ve got this now in CSS, thanks to custom properties and calc(), right? The ability to set variables and do calculations on those variables. But it took a long time between this original proposal and this very modern CSS that we have today.
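
Here’s that same idea expressed in today’s CSS with a custom property and calc(). The 12pt base size and the multipliers come straight from Håkon’s example:

/* A custom property as the base size; calc() does the multiplication. */
:root {
  --font-size: 12pt;
}
h1 { font-size: calc(var(--font-size) * 3); }
h2 { font-size: calc(var(--font-size) * 2.5); }
h3 { font-size: calc(var(--font-size) * 2); }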

Bert Bos

The final proposal I want to mention came from Bert Bos. This was also in 1994 and his proposal looked like this.

I think this is the first time we started to see colons rather than equal signs for properties and values. But again, you can see the way that the selectors and the properties are sort of munged together with this dot syntax.

It’s parsable, right? Again, it’s like looking at a clock running counterclockwise, but you can understand what’s going on here.

So Bert’s proposal is interesting. It’s one more proposal. But what I really like about what Bert did was Bert also provided design principles.

In other words, the thinking behind the proposal. Because like I was kind of saying, you know, the standard itself, in some ways, isn’t the important thing. The important thing is the agreement. So let’s all try and agree on what we’re trying to accomplish with some kind of style sheet language. And I will also freely admit I’m just a sucker for design principles.

Design principles

I’m fascinated by design principles. I even collect them. This is like my equivalent of my interesting rock collection. A collection of design principles at principles.adactio.com.

And if you go there, I’ve collected design principles from individuals, from organizations, and I have Bert’s principles there.

And they’re worth reading through, but one of the issues with Bert’s principles is there’s a lot of them. These are all the different things that feed into the design of a style sheet language. And these are all good things, but I think what’s missing here is some kind of prioritization.

Because the hard part about design principles isn’t saying what you value. The hard part about design principles is saying we value one thing over another.

So let’s take two of these. We see simplicity and longevity. Well, do we value simplicity more than longevity? Do we value longevity more than simplicity? That’s actually the hard part, to specify the priorities.

So I think it’s a bit of a shame that there isn’t prioritization here, but I think it’s still fascinating that we can look at all of the things that Bert was imagining we have to balance in some kind of style sheet language.

Well, it became pretty clear that Bert and Håkon were working on the same sort of thing. And so they pooled their resources together and kids, that’s where CSS comes from. Jointly from Bert and Håkon.

CSS

And what they settled on—with all of those different design principles and all of the ideas from the different proposals that came before—this is what we got:

selector {
  property: value;
}

This one pattern. You’ve got a selector, a property and a value. Then we’ve got these special characters for syntax, right? Curly braces, colons, semicolons, but really it’s somewhat arbitrary. The point is that all of CSS pretty much can be boiled down to this one pattern: selector, property, value.

It’s a very simple pattern. And yet it allows for endless complexity. I mean, this is our shared context on the web for styling. If you think about the number of websites out there, right? Billions. And every one of them has a different style sheet and every one of them is different. And yet all of them use the same pattern at its root.

It’s the classic example of how a simple rule can create a complex system. And I think this might also be at the heart of why CSS can be misunderstood. Because this pattern is very simple and because it’s very simple, people might think well, CSS is therefore easy.

But there’s a difference between simplicity and easiness.

Like, you can learn the idea of CSS in an hour, right? Because effectively this pattern is it. You need to get your head around selector, property, value.

But you can then spend a lifetime trying to master CSS because of all the possible combinations of selectors and properties and values, right? It’s a lifetime of learning.

So this is where I think some of the disconnect comes with people thinking, “Oh yeah, I’ll pick up CSS. No problem. It’s easy.” And actually, no. It’s simple, but it’s not easy.

And CSS has grown over time, right? We keep getting more selectors, we keep getting more properties and we keep getting more values. It grows and grows while still maintaining this fundamental pattern.

Hacks

And if we look at where the growth of CSS has come from, you know, a lot of the time it came from hacks. And I don’t mean literal CSS hacks, like the box model hack or the Tan hack, for anybody old enough to remember those.

I mean hacks in the original sense of the word: a clever solution to a problem, but probably not a great long-term solution.

Layout

So the classic example of hacks on the web would’ve been layout. You know, in the early days we were using tables for layout. We had transparent gifs, one pixel by one pixel gifs that we would give width and height to allow us to make all the layouts we wanted. And it worked, but it was a hack.

So then we got CSS and we switched to using floats for layout, which was better. But, you know, it was still a hack because floats weren’t intended for layout. They were intended for, you know, having text flow around images.

And it’s only relatively recently in the history of the web that we finally are able to throw away our hacks and use proper layout tools.

Because now we’ve got flexbox and we’ve got grid and these are made for layout. It took quite a while for us to get there.
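
Just as a minimal sketch, here’s the kind of two-column layout that once took tables or floats. The class name and track sizes are arbitrary examples:

/* A two-column layout with grid: no tables, no floats, no clearing hacks. */
.layout {
  display: grid;
  grid-template-columns: 1fr 3fr;
  gap: 1em;
}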

But you know, in the early days of the web, it wasn’t even clear if CSS should attempt to do layout or whether there should be a third format specifically for layout. Because maybe there needed to be that separation of concerns between structure (you’ve got HTML), styling (you’ve got CSS), and some third technology for layout which could be considered its own thing.

I mean, if you think about it today, we kind of sequester layout into media queries. So you could imagine that being a separate technology.

But anyway, it became clear over time that CSS should be the home for layout as well as other kinds of styling.

And that’s what we’ve got now. We’ve got flexbox. We’ve got grid. We got proper layout on the web so we were able to stop using our hacks and use the real native tools.

Typography

It’s a similar story with typography.

If you wanted to use a font that wasn’t one of the system fonts that most people would have installed, well, you went into Photoshop and you made an image of text using the font you wanted, and now the user would have to download that image file and the text wasn’t selectable, it was a fixed width, it came with all sorts of problems. And we came up with very clever solutions to do what’s called image replacement in CSS, but they were all hacks.

And now we don’t need the hacks because we’ve got the @font-face rule. So we can literally specify the typeface we want to use. We can stop using the hacks.
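
A minimal sketch; the font name and file path here are made up for illustration:

/* Loading a web font natively instead of using image replacement. */
@font-face {
  font-family: "Example Web Font";
  src: url("/fonts/example.woff2") format("woff2");
  font-display: swap;
}
body {
  font-family: "Example Web Font", sans-serif;
}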

Graphic design

We used a lot of hacks for graphic design as well. Things that we weren’t quite able to do natively in CSS.

I’m gonna air my laundry here and show you a website I made back in 2005. This was the website for my first book called DOM Scripting and it’s very much of its time.

This is the very 2005-feeling design. You see the way we’ve got that element there with rounded corners? And you see the way that there’s a gradient in the background of the page? Well, back in 2005, we didn’t have rounded corners in CSS and we didn’t have gradients.

So those rounded corners? Those are images that have been absolutely positioned into that element.

And that gradient is actually an image. It’s a one pixel wide, but very, very tall image that is tiled across the entire background.

So these were hacks and they worked, but obviously I wouldn’t need to do that today. Today, I’ve got border-radius to do my rounded corners and I’ve got linear-gradient to do my gradients.

Ironically, right as we got the power to do rounded corners and gradients in CSS natively, we all collectively decided that, nah, actually what we want is flat design …which we could have been doing all along!

Anyway, let me show you another example. This is a website that dates back even further. This is my own personal website, adactio.com.

This design hasn’t really changed in over 20 years, but I have updated the technology.

Let’s take the image at the top. You can see it’s been given some treatment: a corner has been sliced out of one edge and a gradient has been applied so it fades out.

Now it used to be that I would have to do that in Photoshop. I’d take the original image, I’d slice off the corner, I’d add a gradient layer on top of it.

Well now I don’t need Photoshop because I can use clip-path to take off the corner. I can use a linear gradient using generated content—using the ::after pseudo element—to get the exact same result.
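
In outline, the technique looks something like this. It’s a simplified sketch rather than the exact styles on my site, and the class name and coordinates are arbitrary:

/* Slice a corner off the image, then fade it out with a gradient overlay. */
.banner {
  position: relative;
}
.banner img {
  display: block;
  clip-path: polygon(0 0, 100% 0, 100% 75%, 90% 100%, 0 100%);
}
.banner::after {
  content: "";
  position: absolute;
  inset: 0;
  background-image: linear-gradient(to right, transparent 60%, white);
}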

So now I’ve got something that looks pretty much the same, except where I had to do it in Photoshop before, now I’m able to do it natively in CSS.

And that might not sound like much of a win because the end result looks the same, right?

Except now when I’m applying a prefers-color-scheme style sheet and I give a dark mode to my site, I don’t have to make a separate image. Because this is being done natively in CSS. The gradient is in CSS. The clip path is in CSS. It’s all native to CSS.

Material honesty

This comes down to this idea of material honesty. About using the right material for the job, rather than using a material that’s pretending to be something else, whether that’s, you know, an image of text pretending to be a font, or an image of a gradient pretending to be a real gradient, or any of those graphic design tricks.

We can now be materially honest in CSS because we’ve got grid and flexbox and border-radius and @font-face. It’s more honest.

And also it’s easier, right? It’s less work to avoid the hacks.

Like, one of my favorite examples of something we got recently in CSS to avoid the hacks (it’s a small thing, but it makes a big difference in my opinion) is just styling things like checkboxes and radio buttons.

We’ve always been able to do it, but it involved a hack where you’d hide the real checkbox off screen. And then you’d use a background image to show the different states of the checkbox. And it was fine and it worked and we could make it accessible.

But now we can just use accent-color and it’s easier.
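
One declaration and you’re done (the colour here is just an example):

/* Styling checkboxes and radio buttons without the off-screen hack. */
input[type="checkbox"],
input[type="radio"] {
  accent-color: rebeccapurple;
}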

So there’s been this movement from hacks to native CSS. And in a way, the hacks show the direction of travel. The hacks show us what the future could be.

Tools

The other place CSS has borrowed from, or learned from, has been our tools. Like the tools we use to generate our CSS.

Sass is the classic example, right? Sass is this enormously popular pre-processor for CSS and people were using Sass to do things you couldn’t do natively in CSS.

And I feel like one of the genius bits of design in Sass was that it worked a lot like the way HTML worked in the early days compared to SGML.

Remember, I was talking about how Tim Berners-Lee took SGML and literally turned it into HTML? Like, you were able to take an SGML document and change the file extension, and it would be a valid HTML document. And so that really helped with the adoption of HTML.

Well, that’s the same with Sass. You could take your existing style sheet document and just change the file extension to .scss and it was already valid Sass, right? You didn’t have to learn a new syntax.

This isn’t supposition on my part—that this was a reason for the huge success of Sass—because actually Sass had two options, two different syntaxes, and you could choose.

There was this .sass syntax and the .scss syntax.

And with the .sass syntax, it used significant white space. It was more condensed, right? It used indentation.

But with .scss it used the syntax you were already familiar with from CSS.

So you could say that the .sass syntax was more efficient. You could say it’s a better format. But humans are allergic to change, and the .scss syntax was familiar enough that people went, “oh yeah, I get this.”

You could take your existing CSS and just start using the features of Sass you wanted to.

And people overwhelmingly chose the .scss syntax over the .sass syntax. I got to meet Hampton Catlin, who invented Sass, and he confirmed the numbers for me. He said, yeah, it was a no-brainer. Basically the .sass syntax lacks familiarity. It’s like looking at a clock running counter-clockwise.

But anyway, people started using Sass to do things like nesting, calculations, mixins, variables; all these things that we now do in CSS.

Of course, the reason we can do these things in CSS is because Sass proved that there was a desire for these things.

So now you really don’t need Sass for a lot of stuff, but the reason you don’t need Sass is because of Sass. Sass paved the way. Sass showed that there was a demand for this stuff. And now it’s native in the CSS. We don’t need that tool anymore.

And I feel like that’s a test of a really good tool. A really successful tool is when it becomes redundant.

In the JavaScript world, jQuery is a classic example of this, right?

With jQuery you were able to do things using a CSS syntax, whereas otherwise you had to use this long-winded DOM syntax of document.getElementsByTagName or getElementById.

Whereas in jQuery the idea was, “Hey, if you already know how to select something in CSS, just use that syntax again!”

It’s using what people are familiar with. Humans are allergic to change.

But these days we don’t need jQuery because in the DOM, we’ve got querySelectorAll, we’ve got querySelector. We can use CSS selectors to do our DOM scripting.

Why don’t we need jQuery anymore? Because of jQuery.

jQuery showed that this was a really clever idea. It was something people wanted. And so now it’s been standardized. We don’t need jQuery.

So I really feel like the goal of any good library should be to make itself obsolete. It’s so successful, it’s no longer needed.

And you could kind of see this in the history of the web with Sass, with jQuery, even with something like Flash.

You know, Flash showed the way. It showed that, “Hey, we want a way to do animation. We need some way to do video on the web.” And people were using Flash because there was no other way to do those things.

Now we don’t need Flash or jQuery or Sass because we get them natively.

So all of these are almost like research and development for the web.

They’re kind of like hacks, but I think a better way of thinking about them is they’re more like polyfills—these are things we can use until we get a standardized way of doing it.

I think it’s fascinating to look at our tools and see what can they tell us about what’s coming into standards.

Methodologies

A whole set of tools are these methodologies that people came up with, like OOCSS from Nicole Sullivan. And there’s BEM. And SMACSS was another one. There’s a whole bunch.

But I’m fascinated by these because these aren’t an example of some technology we needed to lobby browser makers to implement. Because these are really agreements.

These are agreements. These are saying, “let’s all agree to structure our CSS in a certain way.” Nothing needed to change in browsers, right?
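
Take BEM, for example. The whole methodology boils down to a naming agreement: block, element, modifier. A browser just sees plain old class selectors (these class names are arbitrary examples):

/* BEM naming: .block, .block__element, .block--modifier. */
.menu {
  display: flex;
}
.menu__item {
  padding: 0.5em 1em;
}
.menu__item--active {
  font-weight: bold;
}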

And all of these are testament to the power of the cascade. Because what they do is they almost deliberately limit the cascade, which is seen as almost being too powerful.

So they’re not really tools, they’re methodologies. Or another way of putting it is they are agreements.

Again, the power of saying “let’s all agree to do something.”

Scale

And the problem that most of them are trying to solve is doing CSS at scale, doing CSS when you’ve got a large team. Which is interesting to think about: why wasn’t CSS designed to scale like this?

Miriam Suzanne said:

Large companies find HTML and CSS frustrating “at scale” because the web is fundamentally an anticapitalist mashup art experiment designed to give consumers all the power.

Okay, it’s funny, but it’s kind of funny ’cause it’s true. If you look at those design principles that Bert Bos came up with, it was very much about empowering the end user, that CSS needed to be accessible. It needed to be something you could learn quickly.

So, you know, thinking about CSS as something that needs to be able to scale to multiple teams of people? That wasn’t really on the cards for CSS back then. It wasn’t a priority.

And maybe that’s why we came up with these methodologies like BEM and OOCSS and SMACSS to try and manage this stuff.

But even these methodologies, now the ideas behind them are finding their way into the standards. Now we’re getting cascade layers and scope in CSS.
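
With cascade layers, for example, you declare up front which layers take precedence, regardless of the specificity of the selectors inside them. A minimal sketch, with arbitrary layer and class names:

/* Later layers win: anything in components beats anything in reset. */
@layer reset, components;
@layer reset {
  * {
    margin: 0;
  }
}
@layer components {
  .button {
    padding: 0.5em 1em;
  }
}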

To me, this feels like a return of this idea of influence that Håkon Wium Lie was talking about all those years ago.

Now it’s not so much about the influence between an author and a user; it’s about the influence between multiple authors working on a giant code base.

So it’s a very exciting time for CSS to see these new tools arrive that can solve these scale problems.

But I do ask myself, what’s still missing in CSS? And this is a great question to ask of JavaScript as well:

What’s still missing?

And you can answer the question by looking at what we are still using tools for. What are we still having to polyfill because we don’t yet have it natively in the browser?

And to answer this question, I’m gonna just quickly finish with three components that kind of demonstrate where CSS is still missing some features.

button

Let’s start with a button component.

All right. If you wanna implement a button component, you’ve kind of got two options. You can use a button element and then you style it with CSS to look however you want. Or you can, you know, make up your own button component using a div or a span, add the CSS and JavaScript and ARIA.

Really there’s absolutely no reason to do that. In this case, it’s a no-brainer. You use a button element and you style it with CSS. I cannot think of any reason why you would not do that. There used to be reasons: ages ago in Internet Explorer it used to be hard to style buttons. Those days are long gone.

So in this case, really simple answer to the question. Material honesty. Use a button element. Use CSS to style it.
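
Something like this is all it takes; every value here is an arbitrary example:

/* Styling a native button element: no divs, no JavaScript, no extra ARIA. */
button {
  font: inherit;
  color: white;
  background-color: royalblue;
  border: none;
  border-radius: 0.5em;
  padding: 0.5em 1.5em;
  cursor: pointer;
}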

dropdown

All right. What about a dropdown component? There are a number of options; you click in and you get a select dropdown.

Well, again, you can use a select element styled with CSS, or you can just make your own dropdown component with a div and JavaScript and a bunch of ARIA to try and make it accessible.

Now here, I would still say, use a select element and style it with CSS, but you are gonna hit a limit. Like the open state of the dropdown is kind of out of our control. There’s not much you can do in CSS. So if you really care about styling the open state of a dropdown, I guess I can understand why you would reach for making your own with a div and JavaScript and ARIA.

I mean, I personally wouldn’t do it. But I guess I can see it, because CSS isn’t there yet. We don’t yet have the power to style a dropdown.

date picker

What about a date picker component? You click into it, the user chooses a date.

Again, there appears to be an obvious solution here, which is use input type="date". Boom, you’re done. Style it how you want, right?

Well, good luck with that. If you’ve ever tried to style input type="date", there’s actually very little you can do. And so if you care about the styling of the date picker, you probably are gonna have to make up your own with a bunch of divs and JavaScript and try to make it accessible with ARIA.

Yeah, it’s a real shame.

So I think these three components kind of show the battleground, if you will, where CSS is still falling short a bit, where we still have to use hacks. It’s kind of this battle between the under-engineered solution—just use the native HTML element—and the over-engineered solution, right? We’re gonna have to create all the functionality and all the accessibility by ourselves.

And I used to get mad at people choosing the hacks, choosing the over-engineered solution. But I realized that it’s kind of like, you know, trying to reduce teen pregnancy by telling people to just stop having sex. Abstinence isn’t realistic. People are going to do it anyway. And the question is, well, how do we make it better and safer in the long run?

So I think that’s the real battleground: how do we style elements like select and date pickers natively in CSS? And that’s why I think the work being done by the Open UI group is really, really important.

It’s at open-ui.org.

And they say:

The purpose of open UI to the web platform is to allow web developers to style and extend built in web controls, such as select dropdowns, check boxes, radio buttons, and date and color pickers.

I think it’s really important work. And I think that’s where we should be putting our effort.

The future

Because, you know, the difference between doing something natively in a web browser and doing something with a framework or with JavaScript is context.

Web browsers are the shared context between users and authors.

Whereas, if you want to use a framework or a library, you have to ship that context to the end user. And that puts a burden on them. It’s not good for performance. It’s not good for the user experience.

So web browsers are where past agreements live on today and they live on into the future.

When something lands in a browser, it stays in a browser. So by using what’s in web browsers, you are benefiting from decades of work by multitudes of people. And it’s better for users.

So treat frameworks and libraries as polyfills. Use them as a temporary measure when there’s something that’s still not possible in browsers (this is very true of JavaScript frameworks).

They can point the way to a shared context in the future, but they themselves are not the future. So don’t get too attached. Treat them as cattle, not pets.

Use frameworks and libraries as scaffolding to help you build. But they are not a foundation.

Web standards in the browser are your foundation to build upon.

You know, having an awareness of the history of technologies, from sundials to web browsers, can help you understand the way things are today. And in some ways the lessons of path dependence and inertia are sort of grim, right? Because of some arbitrary decision in the past, we are now stuck with the consequences, in our clock faces and in CSS. And it’s very, very hard to change that.

But there’s another way to look at this.

Nothing was inevitable. Which means nothing is inevitable (you know, except for entropy and the heat death of the universe).

So if someone tells you, “Hey, that’s just the way things are; accept it”, don’t believe it.

Understand your position in the timeline.

Yes, the present moment is the result of decisions made in the past, many of them arbitrary, but that also means the future will be highly influenced by the decisions you make today even if those decisions seem small and inconsequential.

The choices you make now could turn out to have long-lasting repercussions into the future.

So make your decisions wisely. You are literally creating the future.

And I’m looking forward to seeing the results.

Thank you.

Monday, May 30th, 2022

Re-evaluating technology

There’s a lot of emphasis put on decision-making: making sure you’re making the right decision; evaluating all the right factors before making a decision. But we rarely talk about revisiting decisions.

I think perhaps there’s a human tendency to treat past decisions as fixed. That’s certainly true when it comes to evaluating technology.

I’ve been guilty of this. I remember once chatting with Mark about something written in PHP—probably something I had written—and I made some remark to the effect of “I know PHP isn’t a great language…” Mark rightly called me on that. The language wasn’t great in the past but it has come on in leaps and bounds. My perception of the language, however, had not updated accordingly.

I try to keep that lesson in mind whenever I’m thinking about languages, tools and frameworks that I’ve investigated in the past but haven’t revisited in a while.

Andy talks about this as the tech tool carousel:

The carousel is like one of those on a game show that shows the prizes that can be won. The tool will sit on there until I think it’s gone through enough maturing to actually be a viable tool for me, the team I’m working with and the clients I’m working for.

Crucially a carousel is circular: tools and technologies come back around for re-evaluation. It’s all too easy to treat technologies as being on a one-way conveyor belt—once they’ve passed in front of your eyes and you’ve weighed them up, that’s it; you never return to re-evaluate your decision.

This doesn’t need to be a never-ending process. At some point it becomes clear that some technologies really aren’t worth returning to:

It’s a really useful strategy because some tools stay on the carousel and then I take them off because they did in fact, turn out to be useless after all.

See, for example, anything related to cryptobollocks. It’s been well over a decade and blockchains remain a solution in search of problems. As Molly White put it, it’s not still the early days:

How long can it possibly be “early days”? How long do we need to wait before someone comes up with an actual application of blockchain technologies that isn’t a transparent attempt to retroactively justify a technology that is inefficient in every sense of the word? How much pollution must we justify pumping into our atmosphere while we wait to get out of the “early days” of proof-of-work blockchains?

Back to the web (the actual un-numbered World Wide Web)…

Nolan Lawson wrote an insightful article recently about how he senses that the balance has shifted away from single page apps. I’ve been sensing the same shift in the zeitgeist. That said, both Nolan and I keep an eye on how browsers are evolving and getting better all the time. If you weren’t aware of changes over the past few years, it would be easy to still think that single page apps offer some unique advantages that in fact no longer hold true. As Nolan wrote in a follow-up post:

My main point was: if the only reason you’re using an SPA is because “it makes navigations faster,” then maybe it’s time to re-evaluate that.

For another example, see this recent XKCD cartoon:

“You look around one day and realize the things you assumed were immutable constants of the universe have changed. The foundations of our reality are shifting beneath our feet. We live in a house built on sand.”

The day I discovered that Apple Maps is kind of good now

Perhaps the best example of a technology that warrants regular re-evaluation is the World Wide Web itself. Over the course of its existence it has been seemingly bettered by other more proprietary technologies.

Flash was better than the web. It had vector graphics, smooth animations, and streaming video when the web had nothing like it. But over time, the web caught up. Flash was the hare. The World Wide Web was the tortoise.

In more recent memory, the role of the hare has been played by native apps.

I remember talking to someone on the Twitter design team who was designing and building for multiple platforms. They were frustrated by the web. It just didn’t feel as fully-featured as iOS or Android. Their frustration was entirely justified …at the time. I wonder if they’ve revisited their judgement since then though.

In recent years in particular it feels like the web has come on in leaps and bounds: service workers, native JavaScript APIs, and an astonishing boost in what you can do with CSS. Most important of all, the interoperability between browsers is getting better and better. Universal support for new web standards arrives at a faster rate than ever before.

But developers remain suspicious, still preferring to trust third-party libraries over native browser features. They made a decision about those libraries in the past. They evaluated the state of browser support in the past. I wish they would re-evaluate those decisions.

Alas, inertia is a very powerful force. Sticking with a past decision—even if it’s no longer the best choice—is easier than putting in the effort to re-evaluate everything again.

What’s the phrase? “Strong opinions, weakly held.” We’re very good at the first part and pretty bad at the second.

Just the other day I was chatting with one of my colleagues about an online service that’s available on the web and also as a native app. He was showing me the native app on his phone and said it’s not a great app.

“Why don’t you add the website to your phone?” I asked.

“You know,” he said. “The website’s going to be slow.”

He hadn’t tested this. But years of dealing with crappy websites on his phone in the past had trained him to think of the web as being inherently worse than native apps (even though there was nothing this particular service was doing that required any native functionality).

It has become a truism now. Native apps are better than the web.

And you know what? Once upon a time, that would’ve been true. But it hasn’t been true for quite some time …at least from a technical perspective.

But even if the technologies in browsers have reached parity with native apps, that won’t matter unless we can convince people to revisit their previously-formed beliefs.

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

Wednesday, March 23rd, 2022

What is the Web? - Jim Nielsen’s Blog

“Be linkable and accessible to any client” is a provocative test for whether something is “of the web”.

Thursday, January 6th, 2022

Today, the distant future

It’s a bit of a cliché to talk about living in the future. It’s also a bit pointless. After all, any moment after the big bang is a future when viewed from any point in time before it.

Still, it’s kind of fun when a sci-fi date rolls around. Like in 2015 when we reached the time depicted in Back To The Future 2, or in 2019 when we reached the time of Blade Runner.

In 2022 we are living in the future of web standards. Again, technically, we’re always living in the future of any past discussion of web standards, but this year is significant …in a very insignificant way.

It all goes back to 2008 and an interview with Hixie, editor of the HTML5 spec at the WHATWG at the time. In it, he mentioned the date 2022 as the milestone for having two completely interoperable implementations.

The far more important—and ambitious—date was 2012, when HTML5 was supposed to become a Candidate Recommendation, which is standards-speak for done’n’dusted.

But the mere mention of the year 2022 back in the year 2008 was too much for some people. Jeff Croft, for example, completely lost his shit (Jeff had a habit of posting angry rants and then denying that he was angry or ranty, but merely having a bit of fun).

The whole thing was a big misunderstanding and soon irrelevant: talk of 2022 was dropped from HTML5 discussions. But for a while there, it was fascinating to see web designers and developers contemplate a year that seemed ludicrously futuristic. Jeff wrote:

God knows where I’ll be in 13 years. Quite frankly, I’ll be pretty fucking disappointed in myself (and our entire industry) if I’m writing HTML in 13 years.

That always struck me as odd. If I thought like that, I’d wonder what the point would be in making anything on the web to begin with (bear in mind that both my own personal website and The Session are now entering their third decade of life).

I had a different reaction to Jeff, as I wrote in 2010:

Many web developers were disgusted that such a seemingly far-off date was even being mentioned. My reaction was the opposite. I began to pay attention to HTML5.

But Jeff was far from alone. Scott Gilbertson wrote an angry article on Webmonkey:

If you’re thinking that planning how the web will look and work 13 years from now is a little bit ridiculous, you’re not alone.

Even if your 2022 ronc-o-matic web-enabled toaster (It slices! It dices! It browses! It arouses!) does ship with Firefox v22.3, will HTML still be the dominant language of web? Given that no one can really answer that question, does it make sense to propose a standard so far in the future?

(I’m re-reading that article in the current version of Firefox: 95.0.2.)

Brian Veloso wrote on his site:

Two-thousand-twenty-two. That’s 14 years from now. Can any of us think that far? Wouldn’t our robot overlords, whether you welcome them or not, have taken over by then? Will the internet even matter then?

From the comments on Jeff’s post, there’s Corey Dutson:

2022: God knows what the Internet will look like at that point. Will we even have websites?

Dan Rubin, who has indeed successfully moved from web work to photography, wrote:

I certainly don’t intend to be doing “web work” by that time. I’m very curious to see where the web actually is in 14 years, though I can’t imagine that HTML5 will even get that far; it’ll all be obsolete before 2022.

Joshua Works made a prediction that’s worryingly close to reality:

I’ll be surprised if website-as-HTML is still the preferred method for moving around the tons of data we create, especially in the manner that could have been predicted in 2003 or even today. Hell, iPods will be over 20 years old by then and if everything’s not run as an iPhone App, then something went wrong.

Someone with the moniker Grand Caveman wrote:

In 2022 I’ll be 34, and hopefully the internet will be obsolete by then.

Perhaps the most level-headed observation came from Jonny Axelsson:

The world in 2022 will be pretty much like the world in 2009.

The world in 2009 is pretty much like 1996 which was pretty much like the world in 1983 which was pretty much like the world in 1970. Some changes are fairly sudden, others are slow, some are dramatic, others subtle, but as a whole “pretty much the same” covers it.

The Web in 2022 will not be dramatically different from the Web in 2009. It will be less hot and it will be less cool. The Web is a project, and as it succeeds it will fade out of our attention and into the background. We don’t care about things when they work.

Now that’s a sensible perspective!

So who else is looking forward to seeing what the World Wide Web is like in 2036?

I must remember to write a blog post then and link back to this one. I have no intention of trying to predict the future, but I’m willing to bet that hyperlinks will still be around in 14 years.

Speaking of long bets…

Monday, November 15th, 2021

Who is web3 for? • Robin Rendle

Thoughts from Robin, prompted by the Web History podcast I’m narrating and the other Robin’s notes on web3 that I linked to:

Who is the web for? Everyone, everywhere, and not only the few with a financial stake in it. It’s still this enormously beautiful thing that has so much potential.

But web3? That’s just not it, man.

Exactly! The blinkered web3 viewpoint is a classic example of this fallacious logic (also, as Robin points out, exemplified by AMP):

  1. Something must be done!
  2. This (terrible idea) is something.
  3. Something has been done.

Wednesday, November 3rd, 2021

1990: Programming the World Wide Web – Web Development History

Ah, this brings back memories of hacking on the WorldWideWeb project at CERN!

(Not the original one. I’m not that old. I mean the recreation.)

Saturday, July 24th, 2021

Reflections as the Internet Archive turns 25

Brewster Kahle:

The World Wide Web at its best is a mechanism for people to share what they know, almost always for free, and to find one’s community no matter where you are in the world.

Tuesday, July 20th, 2021

Hope

My last long-distance trip before we were all grounded by The Situation was to San Francisco at the end of 2019. I attended Indie Web Camp while I was there, which gave me the opportunity to add a little something to my website: an “on this day” page.

I’m glad I did. While it’s probably of little interest to anyone else, I enjoy scrolling back to see how the same date unfolded over the years.
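(If you fancy building something similar, the heart of it is just a filter on month and day. Here’s a minimal sketch in TypeScript, with a made-up Post shape; my actual site doesn’t work exactly like this.)

interface Post {
  title: string;
  date: Date;
  url: string;
}

// Return every post published on today's month and day, newest first.
function onThisDay(posts: Post[], today: Date = new Date()): Post[] {
  return posts
    .filter((post) =>
      post.date.getMonth() === today.getMonth() &&
      post.date.getDate() === today.getDate()
    )
    .sort((a, b) => b.date.getTime() - a.date.getTime());
}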

’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.

But when I scroll down through my “on this day” page, it also feels like descending deeper into the dark waters of linkrot. For each year back in time, the probability of a link still working decreases until there’s nothing but decay.

Sadly this is nothing new. I’ve been lamenting the state of digital preservation for years now. More recently Jonathan Zittrain penned an article in The Atlantic on the topic:

Too much has been lost already. The glue that holds humanity’s knowledge together is coming undone.

In one sense, linkrot is the price we pay for the web’s particular system of hypertext. We don’t have two-way linking, which would require a centralised repository of links, and that would be prohibitively complex to maintain. So when you want to link to something on the web, you just do it. An a element with an href attribute. That’s it. You don’t need to check with the owner of the resource you’re linking to. You don’t need to check with anyone. You have complete freedom to link to any URL you want to.

But it’s that same simple system that makes the act of linking a gamble. If the URL you’ve linked to goes away, you’ll have no way of knowing.
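Well, no way of knowing automatically, that is. You can at least go looking. Here’s a rough sketch of the kind of script that could flag dead links (assuming Node 18 or later for the built-in fetch; the URLs are placeholders):

// Check a list of bookmarked URLs and report the ones that no longer respond.
const bookmarks: string[] = [
  "https://example.com/some-old-post",
  "https://example.org/another-bookmark",
];

async function isAlive(url: string): Promise<boolean> {
  try {
    // HEAD keeps the request cheap; some servers reject it,
    // so this may flag links that would still answer a GET.
    const response = await fetch(url, { method: "HEAD" });
    return response.ok;
  } catch {
    return false; // DNS failure, timeout, connection refused...
  }
}

for (const url of bookmarks) {
  console.log(`${(await isAlive(url)) ? "ok  " : "DEAD"} ${url}`);
}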

As I scroll down my “on this day” page, I come across more and more dead links that have been snapped off from the fabric of the web.

If I stop and think about it, it can get quite dispiriting. Why bother making hyperlinks at all? It’s only a matter of time until those links break.

And yet I still keep linking. I still keep pointing to things and saying “Check this out!” even though I know that over a long enough timescale, there’s little chance that the link will hold.

In a sense, every hyperlink on the World Wide Web is a little act of hope. Even though I know that when I link to something, it probably won’t last, I still harbour that hope.

If hyperlinks are built on hope, and the web is made of hyperlinks, then in a way, the World Wide Web is quite literally made out of hope.

I like that.

Friday, February 19th, 2021

Yax.com · Blog · Out of the Matrix: Early Days of the Web (1991)

Thirty years later, it is easy to overlook the web’s origins as a tool for sharing knowledge. Key to Tim Berners-Lee’s vision were open standards that reflected his belief in the Rule of Least Power, a principle that choosing the simplest and least powerful language for a given purpose allows you to do more with the data stored in that language (thus, HTML is easier for humans or machines to interpret and analyze than PostScript). Along with open standards and the Rule of Least Power, Tim Berners-Lee wanted to make it easy for anyone to publish information in the form of web pages. His first web browser, named Nexus, was both a browser and editor.

Thursday, August 20th, 2020

Web on the beach

It was very hot here in England last week. By late afternoon, the stuffiness indoors was too much to take.

If you can’t stand the heat, get out of the kitchen. That’s exactly what Jessica and I did. The time had come for us to avail of someone else’s kitchen. For the first time in many months, we ventured out for an evening meal. We could take advantage of the government discount scheme with the very unfortunate slogan, “eat out to help out.” (I can’t believe that no one in that meeting said something.)

Just to be clear, we wanted to dine outdoors. The numbers are looking good in Brighton right now, but we’re both still very cautious about venturing into indoor spaces, given everything we know now about COVID-19 transmission.

Fortunately for us, there’s a new spot on the seafront called Shelter Hall Raw. It’s a collective of multiple local food outlets and it has ample outdoor seating.

We found a nice table for two outside. Then we didn’t flag down a waiter.

Instead, we followed the instructions on the table. I say instructions, but it was a bit simpler than that. It was a URL: shelterhall.co.uk (there was also a QR code next to the URL that I could’ve just pointed my camera at, but I’ve developed such a case of QR code blindness that I blanked that out initially).

Just to be clear, under the current circumstances, this is the only way to place an order at this establishment. The only (brief) interaction you’ll have with another person is when someone brings your order.

It worked a treat.

We had frosty beverages chosen from the excellent selection of local beers. We also had fried chicken sandwiches from Lost Boys chicken, purveyors of the best wings in town.

The whole experience was a testament to what the web can do. You browse the website. You make your choice on the website. You pay on the website (you can create an account but you don’t have to).

Thinking about it, I can see why they chose the web over a native app. Online ordering is the only way to place your order at this place. Telling people “You have to go to this website” …that seems reasonable. But telling people “You have to download this app” …that’s too much friction.

It hasn’t been a great week for the web. Layoffs at Mozilla. Google taking aim at URLs. It felt good to experience an instance of the web really shining.

And it felt really good to have that cold beer.

Checked in at Shelter Hall Raw. Having a beer on the beach — with Jessica

Saturday, April 11th, 2020

Jeremy Keith ‘We’ve ruined the Web. Here’s how we fix it.’ - This is HCD

Did you hear the one about two Irishmen on a podcast?

I really enjoyed this back-and-forth discussion with Gerry on performance, waste, and more. We agreed on much, but we also clashed sometimes.

Wednesday, March 11th, 2020

Networked information services: The world-wide web [PDF]

A 1992 paper by Tim Berners-Lee, Robert Cailliau, and Jean-François Groff.

The W3 project is not a research project, but a practical plan to implement a global information system.

A curl in every port

A few years back, Zach Bloom wrote The History of the URL: Path, Fragment, Query, and Auth. He recently expanded on it and republished it on the Cloudflare blog as The History of the URL. It’s well worth the time to read the whole thing. It’s packed full of fascinating tidbits.

In the section on ports, Zach says:

The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.

Ooh, I can give you an exact date! It was January 24th, 1992. I know this because of the hack week in CERN last year to recreate the first ever web browser.

Kimberly was spelunking through the original source code when she came across this line in the HTUtils.h file:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

We showed this to Jean-François Groff, who worked on the original web technologies like libwww, the forerunner to libcurl. He remembers that day. It felt like they had “made it”, receiving the official blessing of Jon Postel (in the same RFC, incidentally, that gave port 70 to Gopher).

Then he told us something interesting about the next line of code:

#define OLD_TCP_PORT 2784 /* Try the old one if no answer on 80 */

Port 2784? That seems like an odd choice. Most of us would choose something easy to remember.

Well, it turns out that 2784 is easy to remember if you’re Tim Berners-Lee.

Those were the last four digits of his parents’ phone number.

The History of the URL

This is a wonderful deep dive into all the parts of a URL:

scheme:[//[user:password@]host[:port]][/]path[?query][#fragment]
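If you want to poke at those parts yourself, the WHATWG URL API (in browsers and in Node) will split them out for you:

// Each part of the syntax above maps to a property on a URL object.
const url = new URL(
  "https://user:password@example.com:8080/path/page?query=1#fragment"
);

console.log(url.protocol); // "https:"
console.log(url.username); // "user"
console.log(url.password); // "password"
console.log(url.hostname); // "example.com"
console.log(url.port);     // "8080"
console.log(url.pathname); // "/path/page"
console.log(url.search);   // "?query=1"
console.log(url.hash);     // "#fragment"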

There’s a lot of great DNS stuff about the host part:

Root DNS servers operate in safes, inside locked cages. A clock sits on the safe to ensure the camera feed hasn’t been looped. Particularly given how slow DNSSEC implementation has been, an attack on one of those servers could allow an attacker to redirect all of the Internet traffic for a portion of Internet users. This, of course, makes for the most fantastic heist movie to have never been made.

Tuesday, December 31st, 2019

Running Code Over Time – Eric’s Archived Thoughts

We should think of our code, even our designs, as running for decades, and alter our work to match.

Thursday, November 7th, 2019

Information mesh

Timelines of people, interfaces, technologies and more:

30 years of facts about the World Wide Web.

Friday, October 18th, 2019

Web talk

At the start of this month I was in Amsterdam for a series of back-to-back events: Indie Web Camp Amsterdam, View Source, and Fronteers. That last one was where Remy and I debuted a talk we’d been working on.

The Fronteers folk have been quick off the mark so the video is already available. I’ve also published the text of the talk here:

How We Built The World Wide Web In Five Days

This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about half way through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.

At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.

But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.

I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.

There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.

We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:

It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.

In the meantime, you can read the text, watch the video, or look at the slides (but the slides really don’t make much sense in isolation).

Wednesday, October 16th, 2019

How We Built The World Wide Web In Five Days

This talk about recreating the first ever web browser was a joint presentation with Remy Sharp, delivered at the Fronteers conference in Amsterdam in October 2019.

Jeremy

Our story begins with the Big Bang.

13.8 billion years ago

This sets a chain of events in motion that gives us elementary particles, then more complex particles like atoms, which form stars and planets, including our own, on which life evolves, which brings us to the recent past when this whole process results in the universe generating a way of looking at itself: physicists.

A physicist is the atom’s way of knowing about atoms.

—George Wald

By the end of World War Two, physicists in Europe were in short supply. If they hadn’t already fled during Hitler’s rise to power, they were now being actively wooed away to the United States.

64 years ago

To counteract this brain drain, a coalition of countries forms the European Organization for Nuclear Research, or to use its French acronym, CERN.

They get some land in a suburb of Geneva on the border between Switzerland and France, where they set about smashing particles together and recreating the conditions that existed at the birth of the universe.

The Synchrocyclotron.

Every year, CERN is host to thousands of scientists who come to run their experiments.

Remy

Fast forward to February 2019, when a group of nine of us were invited to CERN as an elite group of hackers to recreate a different experiment.

The group.

We are there to recreate a piece of software first published 30 years ago. Given this goal, we need to answer some important questions first:

  • How does this software look and feel?
  • How does it work?
  • How do you interact with it?
  • How does it behave?

The software is so old that it doesn’t run on any modern machines, so we have a NeXT machine specially shipped from the nearby museum. This is no ordinary machine. It was one of only two NeXT machines that existed at CERN in the late 80s.

Now we have the machine to run this special software.

By some fluke the good people of the web have captured several different versions of this software and published them on GitHub.

So we select the oldest version we can find. We download it from GitHub to our computers. Now we have to transfer it to the NeXT machine.

Except there’s no USB drive. It didn’t exist. CD ROM? Floppy drive? The NeXT computer had a “floptical drive”—bespoke to NeXT computers—all very well, but in 2019 we don’t have those drives.

To transfer the software from our machines to the NeXT machine, we need to use the network.

Jeremy

62 years ago

In 1957, J.C.R. Licklider was the first person to publicly demonstrate the idea of time sharing: linking one computer to another.

56 years ago

Six years later, he expanded on the idea in a memo that described an Intergalactic Computer Network.

By this time, he was working at ARPA: the Department of Defense’s Advanced Research Projects Agency. They were very interested in the idea of linking computers together, for very practical reasons.

America’s military communications had a top-down command-and-control structure. That was a single point of failure. One pre-emptive strike and it’s game over.

The solution was to create a decentralised network of computers that used Paul Baran’s brilliant idea of packet switching to move information around the network without any central authority.

This idea led to the creation of the ARPANET. Initially it connected a few universities. The ARPANET grew until it wasn’t just computers at each endpoint; it was entire networks. It was turning into a network of networks …an internetwork, or internet, for short. In order for these networks to play nicely with one another, they needed to agree on using the same set of protocols for packet switching.

Bob Kahn and Vint Cerf crafted the simplest possible set of low-level protocols: the Transmission Control Protocol and the Internet Protocol. TCP/IP.

TCP/IP is deliberately dumb. It doesn’t care about the contents of the packets of data being passed around the internet. People were then free to create more task-specific protocols to sit on top of TCP/IP.

There are protocols specifically for email, for example. Gopher is another example of a bespoke protocol. And there’s the File Transfer Protocol, or FTP.

Remy

Back in our war room in 2019, we finally work out that we can use FTP to get the software across. FTP is an arcane protocol, but it’s one that both eras can agree on.

We do have to manually install FTP servers onto our machines, though; FTP doesn’t ship with new machines because it’s generally considered insecure.

Now we finally have the software installed on the NeXT computer and we’re able to run the application.

We double-click the shaded, partly hand-drawn icon with a lightning bolt on it, and we wait…

Once the software’s finally running, we’re able to see that it looks a bit like an ancient word processor. We can read, edit and open documents. There are some basic styles and lots of heavy margins. There’s a super weird menu navigation in place.

But there’s something different about this software. Something that makes this more than just a word processor.

These documents, they have links…

Jeremy

Ted Nelson is fond of coining neologisms. You can thank him for words like “intertwingled” and “teledildonics”.

56 years ago

He also coined the word “hypertext” in 1963. It is defined by what it is not.

Hypertext is text which is not constrained to be linear.

Ever played a “choose your own adventure” book? That’s hypertext. You can jump from one point in the book to a different point that has its own unique identifier.

The idea of hypertext predates the word. In 1945, Vannevar Bush published a visionary article in The Atlantic Monthly called As We May Think.

He imagines a mechanical device built into a desk that can summon reams of information stored on microfilm, allowing the user to create “associative trails” as they make connections between different concepts. He calls it the Memex.

Memex

Also in 1945, a young American named Douglas Engelbart has been drafted into the navy and is shipping out to the Pacific to fight against Japan. Literally as the ship is leaving the harbour, word comes through that the war is over. He still gets shipped out to the Philippines, but now he’s spending his time lounging in a hut reading magazines. That’s how he comes to read Vannevar Bush’s Memex article, which lodges in his brain.

51 years ago

Douglas Engelbart decides to dedicate his life to building the computer equivalent of the Memex.

On December 9th, 1968, he unveils his oNLine System—NLS—in a public demonstration. Not only does he have a working implementation of hypertext, he also shows collaborative real-time editing, windows, graphics, and oh yeah—for this demo, he invents the mouse.

It truly is The Mother of All Demos.

Douglas Engelbart has a posse.

39 years ago

There were a number of other attempts at creating hypertext systems. In 1980, a young computer scientist named Tim Berners-Lee found himself working at CERN, where scientists were having a heck of a time just keeping track of information.

He created a system somewhat like Apple’s HyperCard, but with clickable links. He named it ENQUIRE, after a Victorian book of manners called Enquire Within Upon Everything.

ENQUIRE didn’t work out, but Tim Berners-Lee didn’t give up on the problem of managing information at CERN. He thinks about all the work done before: Vannevar Bush’s Memex; Ted Nelson’s Xanadu project; Douglas Engelbart’s oNLine System.

A lot of hypertext ideas really are similar to a choose-your-own-adventure: jumping around from point to point within a book. But what if, instead of imagining a hypertext book, we could have a hypertext library? Then you could jump from one point in a book to a different point in a different book in a completely different part of the library.

The Library Of Babel

In other words, what if you took the world of hypertext and the world of networks, and you smashed them together?

30 years ago

On the 12th of March, 1989, Tim Berners-Lee circulates the first draft of a document titled Information Management: A Proposal.

The diagrams are incomprehensible. But his supervisor at CERN, Mike Sendall, sees the potential. He reads the proposal and scrawls these words across the top: “vague, but exciting.”

Tim Berners-Lee gets the go-ahead to spend some time on this project. And he gets the budget for a nice shiny NeXT machine. With the support of his colleague Robert Cailliau, Berners-Lee sets about making his theoretical project a reality. They kick around a few ideas for the name.

They thought of calling it The Mesh. They thought of calling it The Information Mine, but Tim rejected that, knowing that whatever they called it, the words would be abbreviated to letters, and The Information Mine would’ve seemed quite egotistical.

So, even though it’s only going to exist on one single computer to begin with, and even though the letters of the abbreviation take longer to say than the words being abbreviated, they call it …the World Wide Web.

As Robert Cailliau told us, they were thinking “Well, we can always change it later.”

Tim Berners-Lee brainstorms a new protocol for hypertext called the HyperText Transfer Protocol—HTTP.

He thinks about a format for hypertext called the Hypertext Markup Language—HTML.

He comes up with an addressing scheme that uses Universal Document Identifiers—UDIs, later renamed to URIs, and later renamed again to URLs.

But he needs to put it all together into running code. And so Tim Berners-Lee sets about writing a piece of software…

Remy

Thirty years ago, Tim Berners-Lee’s document is still just a proposal. It’s just theory. So he needs to build a prototype to actually demonstrate how the World Wide Web would work.

The NeXT computer is the perfect ground for rapid software development because the NeXT operating system ships with a program called NSBuilder.

NeXT

NSBuilder is software to build software. In fact, the “NS” (meaning NeXTSTEP) can be found in existing software today: you’ll find references to NSText in Safari and Mac developer documentation.

Using NSBuilder, Tim Berners-Lee was able to create a working prototype of this software in just six weeks.

He called it: WorldWideWeb.

We finally have the software working the way it ran 30 years ago.

But our project is to replicate this browser so that you can try it out, and see how web pages look through the lens of 1990.

So we enter some of our blog URLs. https://remysharp.com, https://adactio.com

But HTTPS doesn’t work. There was no HTTPS. There was no HTTP/2. HTTP/1.0 hadn’t even been invented.

So I make a proxy. Effectively a monster-in-the-middle attack on all web requests, stripping the SSL layer and then returning the HTML over the HTTP/0.9 protocol.
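Something like this, in spirit. (A minimal sketch, not my actual code; it assumes Node 18 or later, and the target site and port are placeholders.)

import { createServer } from "node:net";

// Accept an HTTP/0.9 request, fetch the real page over HTTPS, and send
// back the bare HTML. HTTP/0.9 has no headers and no status line: the
// response is just the document, and then the connection closes.
const TARGET = "https://adactio.com"; // placeholder target site

createServer((socket) => {
  socket.once("data", async (data) => {
    // An HTTP/0.9 request is a single line: "GET /path"
    const path = data.toString().trim().split(" ")[1] ?? "/";
    try {
      const response = await fetch(new URL(path, TARGET));
      socket.end(await response.text());
    } catch {
      socket.end("<p>Could not fetch the document.</p>");
    }
  });
}).listen(8080);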

And finally, we see…

We see junk.

We can see the text content of the website, but there are a lot of HTML junk tags being spat out onto the screen, particularly at the start of the document.

Jeremy

<h1> <h2> <h3> <h4> <h5> <h6> 
<ol> <ul> <li> <p>

These tags are probably very familiar to you. You recognise this language, right?

That’s right. It’s SGML.

SGML is the successor to GML, which supposedly stands for Generalised Markup Language. But that may well be a backronym. The format was created by Goldfarb, Mosher, and Lorie: G, M, L.

SGML is supposed to be short for Standard Generalised Markup Language.

A flavour of SGML was already being used at CERN when Tim Berners-Lee was working on his World Wide Web project. Rather than create a whole new format from scratch, he repurposed what people were already familiar with. This was his HyperText Markup Language, HTML.

One thing he did add was a tag called A for anchor.

Its href attribute is short for “hypertext reference”. Plop a URL in there and you’ve got a link.

The hypertext community thought this was a terrible way to make links.

They believed that two-way linking was vital. With two-way linking, the linked resource connects back to where the link originates. So if the linked resource moved, the link would stay intact.

That’s not the case with the World Wide Web. If the linked resource moves, the link is broken.

Perhaps you’ve experienced broken links?

When Tim Berners-Lee wrote the code for his WorldWideWeb browser, there was a grand total of 26 tags in HTML. I know that we’d refer to them as elements today, but that term wasn’t being used back then.

Now there are well over 100 elements in HTML. The reason why the language has been able to expand so much is down to the way web browsers today treat unknown elements: ignore any opening and closing tags you don’t recognise and only render the text in between them.
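You can see that rule in action in any modern browser’s console. A quick sketch using DOMParser:

// Tags the parser doesn't recognise become generic elements in the
// tree, and the text between them still renders.
const doc = new DOMParser().parseFromString(
  "<p>Hello <made-up-tag>brave</made-up-tag> new <blink>world</blink></p>",
  "text/html"
);

console.log(doc.body.textContent); // "Hello brave new world"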

Remy

The parsing algorithm was brittle (when compared to modern parsers). There’s no DOM tree being built up. Indeed, the DOM didn’t exist.

Remember that the WorldWideWeb was a browser that effectively smooshed together a word processor and network requests; the styling method was based (mostly) around adding margins as the tags were parsed.

Kimberly Blessing was digging through the original 7344 lines of code for the WorldWideWeb source. She found the code that could explain why we were seeing junk.

<link rel="..."

In this case, when the parser encountered <link rel="…" it would see the <.

<

“Yes, a tag; let’s slurp it up”.

<li

Then it reads li and the parser is thinking, “This looks like a list item, good stuff.”

<lin

Then it encounters the n (of link) and, forgiving the parsing algorithm because it was the first of its kind, it aborts the style it was about to apply and promptly spits out the rest of the content on screen, having already swallowed up the first four characters: <lin.

k rel="stylesheet" href="...">
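Here’s a toy version of that behaviour. (My own sketch, nothing like the real 1990 code, and it only handles bare tags without attributes.)

const KNOWN_TAGS = ["p", "li", "ol", "ul", "h1", "h2", "h3"];

function toyParse(input: string): string {
  let out = "";
  let i = 0;
  while (i < input.length) {
    if (input[i] !== "<") {
      out += input[i++];
      continue;
    }
    i++; // swallow "<"
    let name = "";
    while (i < input.length && input[i] !== ">") {
      name += input[i++]; // slurp the character up...
      if (!KNOWN_TAGS.some((tag) => tag.startsWith(name))) break; // ...too late
    }
    if (KNOWN_TAGS.includes(name) && input[i] === ">") {
      i++; // a recognised tag: swallow ">" (this is where styling would happen)
    }
    // Otherwise the slurped characters are simply gone; parsing carries
    // on from here, spitting the rest out as plain text.
  }
  return out;
}

console.log(toyParse('<link rel="stylesheet" href="...">'));
// → 'k rel="stylesheet" href="...">'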

With that, we decided to make the executive design decision that we would strip out any elements that were unknown to the original WorldWideWeb browser — link, script, video and img — since, of course, there was no image support in the world’s first browser.

This is the first little cheat we applied, so that the page would be more pleasing to you, the visitor of our emulator. Otherwise you’d be presented with a lot of scary looking junk.
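The cheat itself can be as blunt as a few regular expressions run in the proxy before the HTML goes over the wire. (A sketch of the idea, not the actual implementation; regexes are a famously fragile way to process HTML, but for a cheat like this they’ll do.)

// Elements the 1990 browser never knew about, stripped before serving.
const UNKNOWN_TO_1990 = ["link", "script", "video", "img"];

function stripForWorldWideWeb(html: string): string {
  let cleaned = html;
  for (const tag of UNKNOWN_TO_1990) {
    // Remove paired tags along with their contents (script, video)...
    cleaned = cleaned.replace(
      new RegExp(`<${tag}\\b[^>]*>[\\s\\S]*?</${tag}>`, "gi"),
      ""
    );
    // ...then any remaining unpaired tags (link, img).
    cleaned = cleaned.replace(new RegExp(`<${tag}\\b[^>]*>`, "gi"), "");
  }
  return cleaned;
}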

So now we have all the reference material we need to be able to replicate this browser:

  • The machine running the original operating system, which gives us colours, fonts, menus and so on.
  • The browser itself, how windows behave, what’s in the menus, what makes the experience unique to that period of time.
  • And finally how it looks when we visit URLs.

So off we go.

🤯

Jeremy

While Remy set about recreating the functionality of the WorldWideWeb browser, Angela was recreating the user interface using CSS.

Inputs. Buttons. Icons. Menus. All with the exact borders, highlights and shadows used in the UI of the NeXT operating system, including having the scrollbar on the left side of windows.

Meanwhile the rest of us were putting together an explanatory website to give some backstory to what we were doing. I spent most of my time working on a timeline showing thirty years before and thirty years after the original proposal for the web.

Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

The WorldWideWeb browser inherited fonts from the NeXTSTEP operating system. It mostly used Helvetica and a font called Ohlfs (created by Keith Ohlfs). Helvetica is ubiquitous but Ohlfs was never seen outside of a NeXT machine.

Our teammates Mark and Brian were obsessed with accurately recreating the typography. We couldn’t use modern fonts, which are vector based. We needed pixeliness.

So Mark and Brian took a screenshot of the NeXT machine’s alphabet. With remote help from font designer David Jonathan Ross, they traced each square pixel in a vector program and then exported that as a web font. Now we’ve got a web font that deliberately isn’t anti-aliased. It’s a vector format that recreates the look of a bitmap.

Put the pixelly font together with the CSS interface elements and you’ve got something that really looks like the old WorldWideWeb programme.

Remy

This is the final product of our work at CERN that week. A fully working WorldWideWeb emulator giving a reasonably close experience of what it was like to surf the web as if it were 30 years ago.

This is entirely in the browser and was written using:

  • React,
  • React Draggable for the windows and menus,
  • React Hotkeys for keyboard combo shortcuts (we replicated the original OS as much as we could),
  • idb-keyval for some local storage,
  • Parcel for bundling.

These tools weren’t chosen particularly because they were the best tools for the job, but rather because they were the tools I knew well enough to speed up my development process.

We worked hard to replicate the look and feel as much as we could. We even replicated typos found throughout the WorldWideWeb app:

An excercise in global information availability

Why don’t we see how it looks…

Jeremy

There’s a kind of irony in this, in that it relies heavily on JavaScript. In fact, there’s nothing there other than JavaScript. But of course the WorldWideWeb browser couldn’t deal with JavaScript—JavaScript hadn’t been invented yet. So the one URL that definitely wouldn’t work in this emulator is …the emulator itself.

Remy

(Which Jeremy was blaming me for.)

This is what you see when you visit the WorldWideWeb browser for the first time. We can see we are welcomed by the universe of hypertext. We’ve got these menus over here that you can drag off and open panels (I always thought this was an ordering bug but the operating system actually works like this).

We’ll go ahead and open the Fronteers website. I go to “Document” and then I go to “Open from full document reference” (because the word URL didn’t exist). I’m going to pop the Fronteers URL in here. And there it is. We’ve got the Fronteers website. Looks pretty good. (One of my favourite UI bits is this scrollbar on the left hand side instead of the right.)

We can follow the links. Actually one of my favourite features that was in this original browser that we replicated was this “Navigate” menu. I’ve just opened the first link in the document, but I can click on “Next”, and “Next” a bunch of times and it will cycle through each one of the links on the page that I launched from and let me read all the pages that the Fronteers site links to (which I really like). I can go backwards and forwards, and so on.

One thing you might have already noticed is that there are no URLs here. In fact, viewing the source was considered a kind of diagnostic option, and it was very, very tucked away. The reason for this is that URLs—and the source HTML or SGML—were considered ugly and potentially a bad user experience.

But there’s one thing about navigating here that’s different. To open this link, I had to double-click.

Jeremy

The WorldWideWeb browser was more of a prototype than anything else. It demonstrated the potential of the World Wide Web project, but it only worked on NeXT machines.

To show how the World Wide Web could work on any computer, the second ever web browser was the Line Mode Browser, coded by Nicola Pellow. It had a very basic text interface—no clicking on links—but it could be installed anywhere.

Lots of other geeks and nerds were working on their own web browsers, but it was Marc Andreessen’s Mosaic browser that really blew the doors open for the web. It had a nice usable interface, and it (unilaterally) introduced the innovation of images on the web.

Andreessen went on to found Netscape. The World Wide Web took off at an unprecedented rate. Microsoft brought out their Internet Explorer browser and started trying to catch up with Netscape. We had the browser wars. Later we got even more browsers, like Safari and Chrome, while Netscape morphed into Firefox and Internet Explorer morphed into Edge. And the rest is history.

But all of these browsers were missing something that was in the original WorldWideWeb browser.

Remy

The reason I have to double-click on these links is that, when I do a single click, it actually places the cursor. The cursor is blinking there on “Fronteers.” And the reason I can place the cursor is because I can edit the document.

I see Fronteers here is missing a heading. We want to welcome you all:

Welkom

We want to make that a heading. Let’s style that. It’s a heading.

So the browser was meant to edit documents. Let’s put a bit of text here:

Great talks from Remy and Jeremy

(forget about everyone else). Now if I want to create a link, I’ll go ahead and navigate to Jeremy’s site, https://adactio.com. I’m going to do “Link”, then “Mark all”, which is a way of copying the URL to that window. Then I go back to the Fronteers website, select “Jeremy”, and then do “Link to marked.” I can double-click on Jeremy’s name and it will open up his website.

I can save this document as well. I’m going to call it fronteers.html.

Let’s do a hard reboot—a browser refresh. I come back to my machine a couple of days later, “Ah, the Fronteers page!”. I’m going to open that again, and it linked to that really handsome guy in the sprite shirt. And yes, the links still work.

In fact, this documentation that you see when the WorldWideWeb browser launches was written, styled, and linked using the WorldWideWeb browser. The WorldWideWeb browser was for a web that you could read and write.

But this didn’t survive. It was a hurdle that was too tricky to propose or implement across the different types of servers that existed and for the upcoming browsers that were on the horizon.

And so it wasn’t standardised and doesn’t exist today.

But this is an important lesson from the time: reducing complexity increases the chances of mass adoption.

In the end, simplicity wins.

Jeremy

I think that’s a pattern we see over and over again, not just in the history of the web, but before the web. Simplicity wins.

Ted Nelson famously thinks, to this day, that the World Wide Web is weak sauce. It didn’t try to solve complex problems right out of the gate, like handling micro-payments.

As we saw, the hypertext community thought that one-way linking was ridiculous. But simplicity does win out.

Unfortunately that’s why browsers ended up just being browsers. We got some of the functionality back with wikis, content management systems, and social media to a certain extent. But I think it’s still a bit of a shame that when I want to browse a web page, I’m using one piece of software—the browser—but when I want to make a web page, I’m using another piece of software (or multiple pieces of software) to get something on to the web.

I feel like we lost something.

Remy

We head home after a week of hacking.

We were all invited back in March earlier this year for the Web@30 event that was taking place to celebrate the web but also Sir Tim Berners-Lee.

A NeXT machine from 1989 running the WorldWideWeb browser and my laptop in 2019 running https://worldwideweb.cern.ch

A few of us, Jeremy, Martin, and myself, went back to CERN for the first leg of the event. There was even a video showing off our work as part of the main conference. Jeremy and I even chased Tim Berners-Lee back to London at the Science Museum like obsessive web fanboys. It was a lot of fun!

The night before I got a message from Jean-François Groff, pictured here on the right. JF Groff joined Tim Berners-Lee 30 years ago and created libwww (a precursor to libcurl).

The message read:

Sitting with Tim right now. He loves your browser!

Crushed it.

It’s amazing that we were able to pull this off in a week just with text editors and information that’s freely available. It’s mind boggling how much we can do today and how far it can reach. And it all started on that NeXTSTEP machine 30 years ago.

What I really loved about this project was working with this brilliantly old technology, digging around at the birth of browsers and the web.

I wouldn’t be stood here today, if it weren’t for the web.

I wouldn’t even know Jeremy, if it weren’t for the web.

I wouldn’t have a career, if it weren’t for the web.

I loved seeing how such old technology, the original WorldWideWeb browser, was still able to render my blog, because I put content first and delivered markup from the server. The page rendered because HTML really is backward compatible.

HTML and HTTP are just text. Nothing terribly fancy. Dare I say, beautifully simple, and as we said before, simplicity wins the day.
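You can still demonstrate that with nothing more than a TCP socket: connect to a web server and type a request at it. (A sketch using Node’s net module; example.com stands in for any server still answering plain HTTP on port 80.)

import { connect } from "node:net";

// HTTP really is just text: open a TCP connection and write a request.
const socket = connect(80, "example.com", () => {
  socket.write("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
});

// The response comes back as text too: a status line, headers, then HTML.
socket.on("data", (chunk) => process.stdout.write(chunk.toString()));
socket.on("end", () => socket.destroy());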

This same simplicity is what allows us all to have the chance for an equal voice. The web allows us to freely publish our thoughts and experiences. We have to fight to protect that kind of web.

And we’ve got to work at keeping it simple.

Jeremy

When we returned to CERN for the 30th anniversary celebrations, one of the other people there was the journalist Zeynep Tüfekçi.

When @Zeynep met NeXT.

Lou, Zeynep, Tim, Robert, and Jean-François. #Web30

She was on a panel along with Tim Berners-Lee, Robert Cailliau, Jean-François Groff, and Lou Montulli. At the end of the panel discussion, she was asked:

What would you tell the next generation about how to use this wonderful tool?

She replied:

If you have something wonderful, if you do not defend it, you will lose it.

If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.

Defend the simplicity and resilience that’s so central to the web.

I don’t know about you, but I often feel that just trying to make a web page has become far too complicated. But this is complexity that we have chosen with our tools, processes, and assumptions. We’ve buried the magic. The magic of linking web pages together. The magic of a working global hypertext system, where nobody needs to ask for permission to publish.

Tim Berners-Lee prototyped the first web browser, but the subsequent world wide web wasn’t created by any one person. It was created by everyone. That. Is. Magical.

I don’t want the web to become a place where only an elite priesthood get to experience the magic of creation. I’m going to fight to defend the openness of the world wide web. This is for everyone. Not just for everyone to use; it’s for everyone to create.

Tuesday, October 15th, 2019

Jeremy Keith & Remy Sharp - How We Built the World Wide Web in Five Days on Vimeo

Here’s the talk that Remy and I gave at Fronteers in Amsterdam, all about our hack week at CERN. We’re both really pleased with how this turned out and we’d love to give it again!