Wednesday, December 11th, 2019

The Technical Side of Design Systems by Brad Frost

Day two of An Event Apart San Francisco is finishing with a talk from Brad on design systems (so hot right now!):

You can have a killer style guide website, a great-looking Sketch library, and robust documentation, but if your design system isn’t actually powering real software products, all that effort is for naught. At the heart of a successful design system is a collection of sturdy, robust front-end components that powers other applications’ user interfaces. In this talk, Brad will cover all that’s involved in establishing a technical architecture for your design system. He’ll discuss front-end workshop environments, CSS architecture, implementing design tokens, popular libraries like React and Vue.js, deploying design systems, managing updates, and more. You’ll come away knowing how to establish a rock-solid technical foundation for your design system.

I will attempt to liveblog the Frostmeister…

“Design system” is an unfortunate name …like “athlete’s foot.” You say it to someone and they think they know what you mean, but nothing could be further from the truth.

As Mina said:

A design system is a set of rules enforced by culture, process and tooling that govern how your organization creates products.

A design system is the story of how an organisation gets things done.

When Brad talks to companies, he asks “Have you got a design system?” They invariably say they do …and then point to a Sketch library. When the focus goes on the design side of the process, the production side can suffer. There’s a gap between the comp and the live site. The heart and soul of a design system is a code library of reusable UI components.

Brad’s going to talk through the life cycle of a project.

Sell

He begins with selling in a design system. That can start with an interface inventory. This surfaces visual differences. But even if you have, say, buttons that look the same, the underlying code might not be consistent. Each one of those buttons represents time and effort. A design system gives you a number of technical benefits:

  • Reduce technical debt—less frontend spaghetti code.
  • Faster production—less time coding common UI components and more time building real features.
  • Higher-quality production—bake in and enforce best practices.
  • Reduce QA efforts—centralise some QA tasks.
  • Potentially adopt new technologies faster—a design system can help make additional frameworks more manageable.
  • Useful reference—an essential resource hub for development best practices.
  • Future-friendly foundation—modify, extend, and improve over time.

Once you’ve explained the benefits, it’s time to kick off.

Kick off

Brad asks “What’s yer tech stack?” There are often a lot of tech stacks. And you know what? Users don’t care. What they see is one brand. That’s the promise of a design system: a unified interface.

How do you make a design system deal with all the different tech stacks? You don’t (at least, not yet). Start with a high priority project. Use that as a pilot project for the design system. Dan talks about these projects as being like television pilots that could blossom into a full season.

Plan

Where to build the design system? The tech stack under the surface is often an order of magnitude greater than the UI code—think of node modules, for example. That’s why Brad advocates locking off that area and focusing on what he calls a frontend workshop environment. Think of the components as interactive comps. There are many tools for this frontend workshop environment: Pattern Lab, Storybook, Fractal, Basalt.

How are you going to code this? Brad gets frontend teams in a room together and they fight. Have you noticed that developers have opinions about things? Brad asks questions. What are your design principles? Do you use a CSS methodology? What tools do you use? Spaces or tabs? Then Brad gets them to create one component using the answers to those questions.

Guidelines are great but you need to enforce them. There are lots of tools to automate coding style.
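To make that concrete, here is a minimal, hypothetical example of turning those answers into something a machine can enforce: an .eslintrc.js file (the file name, rules, and values are illustrative, not anything Brad showed).

module.exports = {
  extends: 'eslint:recommended',
  rules: {
    indent: ['error', 2],        // the "spaces or tabs?" answer, enforced by the linter
    quotes: ['error', 'single'], // the agreed quote style
    semi: ['error', 'always']    // one less thing to argue about in code review
  }
};

Run as part of continuous integration, a config like this turns the guidelines into a gate rather than a suggestion.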

Then there’s CSS architecture. Apparently we write our styles in React now. Do you really want to tie your CSS to one environment like that?

You know what’s really nice? A good ol’ sturdy cacheable CSS file. It can come in like a fairy applying all the right styles regardless of tech stack.

Design and build

Brad likes to break things down using his atomic design vocabulary. He echoes what Mina said earlier:

Embrace the snowflakes.

The idea of a design system is not to build 100% of your UI entirely from components in the code library. The majority, sure. But it’s unrealistic to expect everything to come from the design system.

When Brad puts pages together, he pulls in components from the code library but he also pulls in one-off snowflake components where needed.

The design system informs our product design. Our product design informs the design system.

—Jina

Brad has seen graveyards of design systems. But if you make a virtuous circle between the live code and the design system, the design system has a much better chance of not just surviving, but thriving.

So you go through those pilot projects, each one feeding more and more into the design system. Lather, rinse, repeat. The first one will be time consuming, but each subsequent project gets quicker and quicker as you start to get the return on investment. Velocity increases over time.

It’s like tools for a home improvement project. The first thing you do is look at your current toolkit. If you don’t have the tool you need, you invest in buying that new tool. Now that tool is part of your toolkit. Next time you need that tool, you don’t have to go out and buy one. Your toolkit grows over time.

The design system code must be intuitive for developers using it. This gets into the whole world of API design. It’s really important to get this right—naming things consistently and having predictable behaviour.

Mina talked about loose vs. strict design systems. Open vs. locked down. Make your components composable so they can adapt to future requirements.

You can bake best practices into your design system. You can make accessibility a requirement in the code.
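As a rough sketch of what that baking-in can look like (my illustration, not Brad's code; the component name and API are invented), a design-system button could simply refuse to render without an accessible label:

function createButton({ label, icon = null, onClick }) {
  if (!label) {
    // Accessibility as a requirement: even icon-only buttons need a label.
    throw new Error('Button requires a label for assistive technology.');
  }
  const button = document.createElement('button');
  button.type = 'button';
  if (onClick) {
    button.addEventListener('click', onClick);
  }
  if (icon) {
    button.appendChild(icon);
    button.setAttribute('aria-label', label); // expose the label when only an icon is visible
  } else {
    button.textContent = label;
  }
  return button;
}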

Launch

What does it mean to “launch” a design system?

A design system isn’t a project with an end, it’s the origin story of a living and evolving product that’ll serve other products.

—Nathan Curtis

There’s a spectrum of integration—how integrated the design system is with the final output. The levels go from:

  1. Least integrated: static.
  2. Front-end reference code.
  3. Most integrated: consumable components.

Chris Coyier in The Great Divide talked about how wide the spectrum of front-end development is. Brad, for example, is very much at the front of the front end. Consumable UI components can create a bridge between the back of the front end and the front of the front end.

Consumable UI components need to be bundled, packaged, and published.

Maintain

Now we’ve entered a new mental space. We’ve gone from “Let’s build a website” to “Let’s maintain a product which other products use as a dependency.” You need to start thinking about things like semantic versioning. A version number is a promise.

A 1.0.0 designation comes with commitment. Freewheeling days of unstable early foundations are behind you.

—Nathan Curtis

What do you do when a new tech stack comes along? How does your design system serve the new hotness? It gets worse: you get products that aren’t even web based—iOS, Android, etc.

That’s where design tokens come in. You can define your design language in a platform-agnostic way.
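As a sketch of the idea (the token names and values here are invented for illustration), a single platform-agnostic source of truth might look like this; tooling such as Style Dictionary can then transform it into CSS custom properties, Sass variables, Android resources, iOS values, and so on:

module.exports = {
  color: {
    brand: { primary: '#0b7261', accent: '#d33a2c' },
    text: { default: '#1a1a1a', muted: '#6b6b6b' }
  },
  font: {
    family: { body: 'Georgia, serif', ui: 'system-ui, sans-serif' },
    size: { base: '1rem', large: '1.25rem' }
  },
  space: { small: '0.5rem', medium: '1rem', large: '2rem' }
};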

Summary

This is hard.

  • Your design system must live in the technologies your products use.
  • Look at your product roadmaps for design system pilot project opportunities.
  • Establish code conventions and use tooling and process to enforce them.
  • Build your design system and pilot project UI screens in a frontend workshop environment.
  • Bake best practices into reusable components & make them as rigid or flexible as you need them to be.
  • Use semantic versioning to manage ongoing design system product work.
  • Use design tokens to feed common design properties into different platforms.

You won’t do it all at once. That’s okay. Baby steps.

Tuesday, December 10th, 2019

The Mythology of Design Systems by Mina Markham

It’s day two of An Event Apart San Francisco. The brilliant Mina Markham is here to talk to us about design systems (so hot right now!). I’m going to attempt to liveblog it:

Design systems have dominated web design conversations for a few years. Just as there’s no one way to make a website, there is no one way to make a design system. Unfortunately this has led to a lot of misconceptions around the creation and impact of this increasingly important tool.

Drawing on her experiences building design systems at two highly visible and vastly different organizations, Mina will debunk some common myths surrounding design systems.

Mina is a designer who codes. Or an engineer who designs. She makes websites. She works at Slack, but she doesn’t work on the product; she works on slack.com and the Slack blog. Mina also makes design systems. She loves design systems!

There are some myths she’s heard about design systems that she wants to dispel. She will introduce us to some mythological creatures along the way.

Myth 1: Designers “own” the design system

Mina was once talking to a product designer about design systems and was getting excited. The product designer said, nonplussed, “Aren’t you an engineer? Why do you care?” Mina explained that she loved design systems. The product designer said “Y’know, design systems should really be run by designers” and walked away.

Mina wondered if she had caused offense. Was she stepping on someone’s toes? The encounter left her feeling sad.

Thinking about it later, she realised that the conversation about design systems is dominated by product designers. There was a recent Twitter thread where some engineers were talking about this: they felt sidelined.

The reality is that design systems should be multi-disciplinary. That means engineers, but it also means designers beyond product designers: brand designers, content designers, and so on.

What you need is a hybrid, or unicorn: someone with complementary skills. As Jina has said, design systems themselves are hybrids. Design systems give hybrids (people) a home. Hybrids help bring unity to an organization.

Myth 2: design systems kill creativity

Mina hears this one a lot. It’s intertwined with some other myths: that design systems don’t work for editorial content, and that design systems are just a collection of components.

Components are like mermaids. Everyone knows what one is supposed to look like, and they can take many shapes.

But if you focus purely on components, then yes, you’re going to get frustrated by a feeling of lacking creativity. Mina quotes @brijanp saying “Great job scrapbookers”.

Design systems encompass more than components:

  • High level principles.
  • Brand guidelines.
  • Coding standards.
  • Accessibility compliance.
  • Governance.

A design system is a set of rules enforced by culture, process and tooling that govern how your organization creates products.

—Mina

Rules and creativity are not mutually exclusive. Rules can be broken.

For a long time, Mina battled against one-off components. But then she realised that if they kept coming up, there must be a reason for them. There is a time and place for diverging from the system.

It’s like Alice Lee says about illustrations at Slack:

There’s a time and place for both—illustrations as stock components, and illustrations as intentional complex extensions of your specific brand.

Yesenia says:

Your design system is your pantry, not your cookbook.

If you keep combining your ingredients in the same way, then yes, you’ll keep getting the same cake. But if you combine them in different ways, there’s a lot of room for creativity. Find the key moments of brand expression.

There are strict and loose systems.

Strict design systems are what we usually think of. AirBnB’s design system is a good example. It’s detailed and tightly controlled.

A loose design system will leave more space for experimentation. TED’s design system consists of brand colours and wireframes. Everything else is left to you:

Consistency is good only insofar as it doesn’t prevent you from trying new things or breaking out of your box when the context justifies it.

Yesenia again:

A good design system helps you improvise.

Thinking about strict vs. loose reminds Mina of product vs. marketing. A design system for a product might need to be pixel perfect, whereas editorial design might need more breathing room.

Mina has learned to stop fighting the one-off snowflake components in a system. You want to enable the snowflakes without abandoning the system entirely.

A loose system is key for maintaining consistency while allowing for exploration and creativity.

Myth 3: a design system is a side project

Brad guffaws at this one.

Okay, maybe no one has said this out loud, but you definitely see a company’s priorities focused on customer-facing features. A design system is seen as something for internal use only. “We’ll get to this later” is a common refrain.

“Later” is a mythical creature—a phoenix that will supposedly rise from the ashes of completed projects. Mina has never seen a phoenix. You never see “later” on a roadmap.

Don’t treat your design system as a second-class system. If you do, it will not mature. It won’t get enough time and resources. Design systems require real investment.

Mina has heard from people trying to start design systems getting the advice, “Just do it!” It seems like good advice, but it could be dangerous. It sets you up for failure (and burnout). “Just doing it” without support is setting people up for a bad experience.

The alternative is to put it on the roadmap. But…

Myth 4: a design system should be on the product roadmap

At a previous company, Mina once put a design system on the product roadmap because she saw it wasn’t getting the attention it needed. The answer came back: nah. Mina was annoyed. She had tried to “just do it”, and now, when she tried to do it through the right channels, she was told she couldn’t.

But Mina realised that it’s not that simple. There are important metrics she might not have been aware of.

A roadmap is a multi-faceted thing, like Cerberus, the three-headed dog of the underworld.

Okay, so you can’t put the design system on the roadmap, but you can tie it to something with a high priority. You could refactor your way to a design system. Or you could allocate room in your timeline to slip in design systems work (pad your estimates a little). This is like a compromise between “Just do it!” and “Put it on the roadmap.”

A system’s value is realized when products ship features that use a system’s parts.

—Nathan Curtis

The other problem with putting a design system on the roadmap is that it implies there’s an end date. But a design system is never finished (unless you abandon it).

Myth 5: our system should do what XYZ’s system did

It’s great that there are so many public design systems out there to look to and get inspired by. We can learn from them. “Let’s do that!”

But those inspiring public systems can be like a succubus. They’re powerful and seductive and might seem fun at first but ultimately leave you feeling intimidated and exhausted.

Your design system should be built for your company’s specific needs, not Google’s or GitHub’s or anyone else’s.

Slack has multiple systems. There’s one for the product called Slack Kit. It’s got great documentation. But if you go on Slack’s marketing website, it doesn’t look like the product. It doesn’t use the same typography or even colour scheme. So it can’t use the existing design system. Mina created the Spacesuit design system specifically for the marketing site. The two systems are quite different but they have some common goals:

  • Establish common language.
  • Reduce technical debt.
  • Allow for modularity.

But there are many different needs between the Slack client and the marketing site. Also the marketing site doesn’t have the same resources as the Slack client.

Be inspired by other design systems, but don’t expect the same results.

Myth 6: everything is awesome!

When you think about design systems, everything is nice and neat and orderly. So you make one. Then you look at someone else’s design system. Your expectations don’t match the reality. Looking at these fully-fledged design systems is like comparing Instagram to real life.

The perfect design system is an angel. It’s a benevolent creature acting as an intermediary between worlds. Perhaps you think you’ve seen one once, but you can’t be sure.

The truth is that design system work is like laying down the railway tracks while the train is moving.

For a developer, it is a rare gift to be able to implement a project with a clean slate and no obligations to refactor an existing codebase.

Mina got to do a complete redesign in 2017, accompanied by a design system. The design system would power the redesign. Everything was looking good. Then slowly as the rest of the team started building more components for the website, unconnected things seemed to be breaking. This is what design systems are supposed to solve. But people were creating multiple components that did the same thing. Work was happening on a deadline.

Even on the Hillary For America design system (Pantsuit), which seemed lovely and awesome on the outside, there were multiple components that did the same thing. The CSS got out of hand with some very convoluted selectors trying to make things flexible.

Mina wants to share those stories because it sometimes seems that we only share the success stories.

Share work in progress. Learn out in the open. Be more vulnerable, authentic, and real.

Sunday, December 8th, 2019

On this day

I’m in San Francisco to speak at An Event Apart, which kicks off tomorrow. But I arrived a few days early so that I could attend Indie Web Camp SF.

Yesterday was the discussion day. Most of the attendees were seasoned indie web campers, so quite a few of the discussions went deep on some of the building blocks. It was a good opportunity to step back and reappraise technology decisions.

Today is the day for making, tinkering, fiddling, and hacking. I had a few different ideas of what to do, mostly around showing additional context on my blog posts. I could, for instance, show related posts—other blog posts (or links) that have similar tags attached to them.

But I decided that a nice straightforward addition would be to show a kind of “on this day” context. After all, I’ve been writing blog posts here for eighteen years now; chances are that if I write a blog post on any given day, there will be something in the archives from that same day in previous years.

So that’s what I’ve done. I’ll be demoing it shortly here at Indie Web Camp, but you can see it in action now. If you look at the page for this blog post, you should see a section at the end with the heading “Previously on this day”. There you’ll see links to other posts I’ve written on December 8th in years gone by.
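Here’s an illustrative sketch of the logic in JavaScript (not the actual server-side code on my site): take every post and keep the ones published on today’s month and day in an earlier year.

function onThisDay(posts, today = new Date()) {
  return posts.filter((post) => {
    const published = new Date(post.published);
    return published.getMonth() === today.getMonth()
      && published.getDate() === today.getDate()
      && published.getFullYear() < today.getFullYear();
  });
}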

It’s quite a mixed bag. There’s a post about when I used to have a webcam from sixteen years ago. There’s a report from the Flash On The Beach conference from thirteen years ago (I wrote that post while I was in Berlin). And five years ago, I was writing about markup patterns for web components.

I don’t know if anyone other than me will find this feature interesting (but as it’s my website, I don’t really care). Personally, I find it fascinating to see how my writing has changed, both in terms of subject matter and tone.

Needless to say, the further back in time you go, the more chance there is that the links in my blog posts will no longer work. That’s a real shame. But then it’s a pleasant surprise when I find something that I linked to that is still online after all this time. And I can take comfort from the fact that if anyone has ever linked to anything I’ve written on my website, then those links still work.

Wednesday, November 27th, 2019

Oh, Vienna!

Earlier this year I was in Düsseldorf for a triple bill of events:

  1. Indie Web Camp
  2. Beyond Tellerrand
  3. Accessibility Club

At Accessibility Club, I had the pleasure of seeing a great presentation from Manuel Matuzovic. Afterwards, a gaggle of us geeks went out for currywurst and beer. I got chatting with Manuel, who mentioned that he’s based in Vienna, where he organises a web meetup. I told him I’d love to come and speak at it sometime. He seemed very keen on the idea!

A few weeks later, I dropped him a line so he knew I was serious with my offer:

Hi Manuel,

Just wanted to drop a quick line to say how nice it was to hang out in Düsseldorf—albeit briefly.

I’d definitely be up for coming over to Vienna sometime for a meet up. Hope we can make that work sometime!

Cheers,

Jeremy

Manuel responded:

thank you for reaching out to me. Your timing couldn’t be better. :)

I was so excited that you showed interest in visiting Vienna that I thought about organising something that’s a little bit bigger than a meetup but smaller than a conference. 

I’m meeting today with my friend Max Böck to tell him about the idea and to ask him if he would want to help me organise a event.

Well, they did it. I just got back from the inaugural Web Clerks Community Conf in Vienna. It was a day full of excellent talks given to a very warm and appreciative audience.

The whole thing was livestreamed so you can catch up on the talks. I highly recommend watching Max’s talk on the indie web.

I had a really nice time hanging out with friends like Charlie, Rachel, Heydon, and my travelling companion, Remy. But it was equally great to meet new people, like the students who were volunteering and attending. I love having the chance to meet the next generation of people working on the web.

Accessibility on The Session revisited

Earlier this year, I wrote about an accessibility issue I was having on The Session. Specifically, it was an issue with Ajax and pagination. But I managed to sort it out, and the lesson was very clear:

As is so often the case, the issue was with me trying to be too clever with ARIA, and the solution was to ease up on adding so many ARIA attributes.

Well, fast forward to the past few weeks, when I was contacted by one of the screen-reader users on The Session. There was, once again, a problem with the Ajax pagination, specifically with VoiceOver on iOS. The first page of results was read out just fine, but subsequent pages were never announced; worse, their content was completely unavailable. The first page of results would’ve been included in the initial HTML, but the subsequent pages of results are injected with JavaScript (if JavaScript is available—otherwise it’s regular full-page refreshes all the way).

This pagination pattern shows up all over the site: lists of what’s new, search results, and more. I turned on VoiceOver and I was able to reproduce the problem straight away.

I started pulling apart my JavaScript looking for the problem. Was it something to do with how I was handling focus? I just couldn’t figure it out. And other parts of the site that used Ajax didn’t seem to be having the same problem. I was mystified.

Finally, I tracked down the problem, and it wasn’t in the JavaScript at all.

Wherever the pagination pattern appears, there are “previous” and “next” links, marked up with the appropriate rel="prev" and rel="next" attributes. Well, apparently past me thought it would be clever to add some ARIA attributes in there too. My thinking must’ve been something like this:

  • Those links control the area of the page with the search results.
  • That area of the page has an ID of “results”.
  • I should add aria-controls="results" to those links.

That was the problem …which is kind of weird, because VoiceOver isn’t supposed to have any support for aria-controls. Anyway, once I removed that attribute from the links, everything worked just fine.
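The fix itself amounted to deleting that one attribute from the markup. Purely for illustration (on The Session the change would live in the server-side templates, not a script), the equivalent in JavaScript would be something like:

document.querySelectorAll('a[rel="prev"], a[rel="next"]').forEach((link) => {
  link.removeAttribute('aria-controls'); // the attribute that was tripping up VoiceOver
});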

Just as the solution last time was to remove the aria-atomic attribute on the updated area, the solution this time was to remove the aria-controls attribute on the links that trigger the update. Maybe this time I’ll learn my lesson: don’t mess with ARIA attributes you don’t understand.

Thursday, November 21st, 2019

Rams

I’ve made a few trips to Germany recently. I was in Berlin last week for the always-excellent Beyond Tellerrand. Marc did a terrific job of curating an entertaining and thought-provoking line-up of speakers. He also made sure that those speakers—myself included—were very well taken care of.

I was also in Frankfurt last month. It was for an event, but for once, it wasn’t an event that involved me in any way. Jessica was there for the Frankfurt Book Fair. I was tagging along for the ride.

While Jessica was out at the sprawling exhibition hall on the edge of town, I was exploring downtown Frankfurt. One lunch time, I found myself wandering around the town’s charming indoor market hall.

While I was perusing the sausages on display, I noticed an older gentleman also inspecting the meat wares. He looked familiar. That’s when the part of my brain responsible for facial recognition said “That’s Dieter Rams.” A more rational part of my brain said “It can’t be!”, but it seemed that my pattern matching was indeed correct.

As he began to walk away, the more impulsive part of my brain shouted “Say something!”, and before my calmer nature could intervene, I was opening my mouth to speak.

I think I would’ve been tongue-tied enough introducing myself to someone of Dieter Rams’s legendary stature, but it was compounded by having to do it in a second language.

“Entschuldigen Sie!”, I said (“Excuse me”). “Sind Sie Dieter Rams?” (“Are you Dieter Rams?”)

“Ja, bin ich”, he said (“Yes, I am”).

At this point, my brain realised that it had nothing further planned and it left me to my own devices. I stumbled through a sentence saying something about what a pleasure it was to see him. I might have even said something stupid along the lines of “I’m a web designer!”

Anyway, he smiled politely as I made an idiot of myself, and then I said goodbye, reiterating that it was a real treat for me to meet him.

After I walked outside, I began questioning reality. Did that really just happen? It felt utterly surreal.

Of course afterwards I thought of all the things I could’ve said. L’esprit de l’escalier. Or as the Germans put it, Treppenwitz.

I could’ve told him that I collect design principles, of which his are probably the most well-known.

I could’ve told him about the time that Clearleft went on a field trip to the Design Museum in London to see an exhibition of his work, and how annoyed I was by the signs saying “Do Not Touch” …in front of household objects that were literally designed to be touched!

I could’ve told him how much I enjoyed the documentary that Gary Hustwit made about him.

But I didn’t say any of those things. I just spouted some inanity, like the starstruck fanboy I am.

There’ll be a lunchtime showing of the Rams documentary at An Event Apart in San Francisco, where I’ll be speaking in a few weeks. Now I wonder if rewatching it is just going to make me cringe as I’m reminded of my encounter in Frankfurt.

But I’m still glad I said something.

Tuesday, November 19th, 2019

Mental models

I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

Tuesday, November 12th, 2019

Third party

The web turned 30 this year. When I was back at CERN to mark this anniversary, there was a lot of introspection and questioning the direction that the web has taken. Everyone I know that uses the web is in agreement that tracking and surveillance are out of control. It seems only right to question whether the web has lost its way.

But here’s the thing: the technologies that enable tracking and surveillance didn’t exist in the early years of the web—JavaScript and cookies.

Without cookies, the web was stateless. This was by design. Now, I totally understand why cookies—or something like cookies—were needed. Without some way of keeping track of state, there’s no good way for a website to “remember” what’s in your shopping cart, or whether you’ve authenticated yourself.

But why would cookies ever need to work across domains? Authentication, shopping carts and all that good stuff can happen on the same domain. Third-party cookies, on the other hand, seem custom made for tracking and frankly, not much else.

Browsers allow you to disable third-party cookies, though it’s not yet the default. If enough people do it—and complain about the sites that stop working when third-party cookies are disabled—then maybe it can become the default.

Firefox is taking steps in this direction, automatically disabling some third-party cookies—the ones set by known trackers. Safari is also taking steps to prevent cross-site tracking. It’s not too late to change the tide of third-party cookies.

Then there’s third-party JavaScript.

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default. But I guess the precedent had been set with images and style sheets: they could be embedded regardless of whether their domain names matched yours. Still, this is executable code we’re talking about here: that’s quite a footgun that the web has given site owners. And boy, oh boy, has it been used by the worst people to do the most damage.

Again, as with cookies, if we were to imagine what the web would be like if JavaScript was restricted by a same-domain policy, there are certainly things that would be trickier to do.

  • Embedding video, audio, and maps would get a lot finickier.
  • Analytics would need to be self-hosted. I don’t think that would bother any site owners. An analytics platform like Google Analytics that tracks people across domains is doing it for its own benefit rather than that of site owners.
  • Advertising wouldn’t be creepy and annoying. Instead of what’s so euphemistically called “personalisation”, advertisers would have to rely on serving relevant ads based on the content of the site rather than an invasive psychological profile of the user. (I honestly think that advertisers would benefit from this kind of targeting.)

It’s harder to imagine putting the genie back in the bottle when it comes to third-party JavaScript than it is with third-party cookies. All the same, I wish that browsers made it easier to experiment with it. Just as I can choose to accept all cookies, reject all cookies, or only accept same-origin cookies, I wish I could accept all JavaScript, reject all JavaScript, or only accept same-origin JavaScript.

As it is, browsers are making it harder and harder to exercise any control over JavaScript at all. So we reach for third-party tools. We don’t call them JavaScript managers though. We call them ad blockers. But honestly, most of the ad-blocker users I know—myself included—are not bothered by the advertising; we’re bothered by the tracking. We should really call them surveillance blockers.

If third-party JavaScript weren’t the norm, not only would it make the web more secure, it would make it way more performant. Read the chapter on third parties in this year’s newly-released Web Almanac. The figures are staggering.

93% of pages include at least one third-party resource, 76% of pages issue a request to an analytics domain, the median page requests content from at least 9 unique third-party domains that represent 35% of their total network activity, and the most active 10% of pages issue a whopping 175 third-party requests or more.

I don’t think all the web’s performance ills are due to third-party scripts; developers are doing a bang-up job of making their sites big and bloated with their own self-hosted frameworks and code. But as long as third-party JavaScript is allowed onto a site, there’s a limit to how much good developers can do to improve the performance of their sites.

I go to performance-related conferences and you know who I’ve never seen at those events? The people who write the JavaScript for third-party tracking scripts. Those developers are wielding an outsized influence on the health of the web.

I’m very happy to see the work being done by Mozilla and Apple to normalise the idea of rejecting third-party cookies. I’d love to see the rejection of third-party JavaScript normalised in the same way. I know that it would make my life as a developer harder. But that’s of lesser importance. It would be better for the web.

CSS for all

There have been some great new CSS properties and values shipping in Firefox recently.

Miriam Suzanne explains the difference between the newer revert value and the older inherit, initial and unset values in a video on the Mozilla Developer channel:

display: revert;

In another video, Jen describes some new properties for styling underlines (on links, for example):

text-decoration-thickness:  0.1em;
text-decoration-color: red;
text-underline-offset: 0.2em;
text-decoration-skip-ink: auto;

Great stuff!

As far as I can tell, all of these properties are available to you regardless of whether you are serving your website over HTTP or over HTTPS. That may seem like an odd observation to make, but I invite you to cast your mind back to January 2018. That’s when the Mozilla Security Blog posted about moving to secure contexts everywhere:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

(emphasis mine)

Buzz Lightyear says to Woody: Secure contexts …secure contexts everywhere!

Despite that “effective immediately” clause, I haven’t observed any of the new CSS properties added in the past two years to be restricted to HTTPS. I’m glad about that. I wrote about this announcement at the time:

I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement.

There’s no official word from the Mozilla Security Blog about any change to their two-year old “effective immediately” policy, and the original blog post hasn’t been updated. Maybe we can all just pretend it never happened.

Monday, November 11th, 2019

FF Conf 2019

Friday was FF Conf day here in Brighton. This was the eleventh(!) time that Remy and Julie have put on the event. It was, as ever, excellent.

It’s a conference that ticks all the boxes for me. For starters, it’s a single-track event. The more I attend conferences, the more convinced I am that multi-track events are a terrible waste of time for attendees (and a financially bad model for organisers). I know that sounds like a sweeping broad generalisation, but ask me about it next time we meet and I’ll go into more detail. For now, I just want to talk about this mercifully single-track conference.

FF Conf has built up a rock-solid reputation over the years. I think that’s down to how Remy curates it. He thinks about what he wants to know and learn more about, and then thinks about who to invite to speak on those topics. So every year is like a snapshot of Remy’s brain. By happy coincidence, a snapshot of Remy’s brain right now looks a lot like my own.

You could tell that Remy had grouped the talks together in themes. There was a performance-themed chunk right after lunch. There was a people-themed chunk in the morning. There was a creative-coding chunk at the end of the day. Nice work, DJ.

I think it was quite telling what wasn’t on the line-up. There were no talks about specific libraries or frameworks. For me, that was a blessed relief. The only technology-specific talk was Alice’s excellent talk on Git—a tool that’s useful no matter what you’re coding.

One of the reasons why I enjoyed the framework-free nature of the day is that most talks—and conferences—that revolve around libraries and frameworks are invariably focused on the developer experience. Think about it: next time you’re watching a talk about a framework or library, ask yourself how it impacts user experience.

At FF Conf, the focus was firmly on people. In the case of Laura’s barnstorming presentation, those people are end users (I’m constantly impressed by how calm and measured Laura remains even when talking about blood-boilingly bad behaviour from the tech industry). In the case of Amina’s talk, the people are junior developers. And for Sharon’s presentation, the people are everyone.

One of the most useful talks of the day was from Anna who took us on a guided tour of dev tools to identify performance improvements. I found it inspiring in a very literal sense—if I had my laptop with me, I think I would’ve opened it up there and then and started tinkering with my websites.

Harry also talked about performance, but at Remy’s request, it was more business focused. Specifically, it was focused on Harry’s consultancy business. I think this would’ve been the perfect talk for more of an “industry” event, whereas FF Conf is very much a community event: Harry’s semi-serious jibes about keeping his performance secrets under wraps didn’t quite match the generous tone of the rest of the line-up.

The final two talks from Charlotte and Suz were a perfect double whammy.

When I saw Charlotte speak at Material in Iceland last year, I wrote this aside in my blog post summary:

(Oh, and Remy, when you start to put together the line-up for next year’s FF Conf, be sure to check out Charlotte Dann—her talk at Material was the perfect mix of code and creativity.)

I don’t think I can take credit for Charlotte being on the line-up, but I will take credit for saying she’d be the perfect fit.

And then Suz Hinton closed out the conference with this rallying cry that resonated perfectly with Laura’s talk:

Less mass-produced surveillance bullshit and more Harry Potter magic (please)!

I think that rallying cry could apply equally well to conferences, and I think FF Conf is a good example of that ethos in action.

Cat encounters

The latest episode of Ariel’s excellent Offworld video series (and podcast) is all about Close Encounters Of The Third Kind.

I have such fondness for this film. It’s one of those films that I love to watch on a Sunday afternoon (though that’s true of so many Spielberg films—Jaws, Raiders Of The Lost Ark, E.T.). I remember seeing it in the cinema—this would’ve been the special edition re-release—and feeling the seat under me quake with the rumbling of the musical exchange during the film’s climax.

Ariel invited Rose Eveleth and Laura Welcher on to discuss the film. They spent a lot of time discussing the depiction of first contact communication—Arrival being the other landmark film on this topic.

This is a timely discussion. There’s a new book by Daniel Oberhaus published by MIT Press called Extraterrestrial Languages:

If we send a message into space, will extraterrestrial beings receive it? Will they understand?

You can read an article by the author on The Guardian, where he mentions some of the wilder ideas about transmitting signals to aliens:

Minsky, widely regarded as the father of AI, suggested it would be best to send a cat as our extraterrestrial delegate.

Don’t worry. Marvin Minsky wasn’t talking about sending a real live cat. Rather, we transmit instructions for building a computer and then we can transmit information as software. Software about, say, cats.

It’s not that far removed from what happened with the Voyager golden record, although that relied on analogue technology—the phonograph—and sent the message pre-compiled on hardware; a much slower transmission rate than radio.

But it’s interesting to me that Minsky specifically mentioned cats. There’s another long-term communication puzzle that has a cat connection.

The Yucca Mountain nuclear waste repository is supposed to store nuclear waste for 10,000 years. How do we warn our descendants to stay away? We can’t use language. We probably can’t even use symbols; they’re too culturally specific. A think tank called the Human Interference Task Force was convened to agree on the message to be conveyed:

This place is a message… and part of a system of messages… pay attention to it! Sending this message was important to us. We considered ourselves to be a powerful culture.

This place is not a place of honor…no highly esteemed deed is commemorated here… nothing valued is here.

What is here is dangerous and repulsive to us. This message is a warning about danger.

A series of thorn-like threatening earthworks was deemed the most feasible solution. But there was another proposal that took a two-pronged approach with genetics and folklore:

  1. Breed cats that change colour in the presence of radioactive material.
  2. Teach children nursery rhymes about staying away from cats that change colour.

This is the raycat solution.

Thursday, November 7th, 2019

Near miss

When I was travelling across the Atlantic ocean on the Queen Mary 2 back in August, I had the pleasure of attending a series of on-board lectures by Charles Barclay from the Royal Astronomical Society.

One of those presentations was on the threat of asteroid impacts—always a fun topic! Charles mentioned Spaceguard, the group that tracks near-Earth objects.

Spaceguard is a pretty cool-sounding name for any organisation. The name comes from a work of (science) fiction. In Arthur C. Clarke’s 1973 book Rendezvous with Rama, Spaceguard is the name of a fictional organisation formed after a devastating asteroid impact on northern Italy—an event which is coincidentally depicted as happening on September 11th. That’s not a spoiler, by the way. The impact happens on the first page of the book.

At 0946 GMT on the morning of September 11 in the exceptionally beautiful summer of the year 2077, most of the inhabitants of Europe saw a dazzling fireball appear in the eastern sky.  Within seconds it was brighter than the Sun, and as it moved across the heavens—at first in utter silence—it left behind it a churning column of dust and smoke.

Somewhere above Austria it began to disintegrate, producing a series of concussions so violent that more than a million people had their hearing permanently damaged.  They were the lucky ones.

Moving at fifty kilometers a second, a thousand tons of rock and metal impacted on the plains of northern Italy, destroying in a few flaming moments the labor of centuries.

Later in the same lecture, Charles talked about the Torino scale, which is used to classify the likelihood and severity of impacts. Number 10 on the Torino scale means an impact is certain and that it will be an extinction level event.

Torino—Turin—is in northern Italy. “Wait a minute!”, I thought to myself. “Is this something that’s also named for that opening chapter of Rendezvous with Rama?”

I spoke to Charles about it afterwards, hoping that he might know. But he said, “Oh, I just assumed that a group of scientists got together in Turin when they came up with the scale.”

Being at sea, there was no way to easily verify or disprove the origin story of the Torino scale. Looking something up on the internet would have been prohibitively slow and expensive. So I had to wait until we docked in New York.

On our first morning in the city, Jessica and I popped into a bookstore. I picked up a copy of Rendezvous with Rama and re-read the details of that opening impact on northern Italy. Padua, Venice and Verona are named, but there’s no mention of Turin.

Sure enough, when I checked Wikipedia, the history and naming of the Torino scale was exactly what Charles Barclay surmised:

A revised version of the “Hazard Index” was presented at a June 1999 international conference on NEOs held in Torino (Turin), Italy. The conference participants voted to adopt the revised version, where the bestowed name “Torino Scale” recognizes the spirit of international cooperation displayed at that conference toward research efforts to understand the hazards posed by NEOs.

Thursday, October 31st, 2019

Indy maps

Remember when I wrote about adding travel maps to my site at the recent Indie Web Camp Brighton? I must confess that the last line I wrote was an attempt to catch a fish from the river of the lazy web:

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

In the spirit of Cunningham’s Law, I was hoping that somebody was going to respond with “It’s totally possible to use Stamen’s watercolour tiles for static maps, dumbass—look!” (to which my response would have been “thank you very much!”).

Alas, no such response was forthcoming. The hoped-for schooling never forthcame.

Still, I couldn’t quite let go of the idea of using those lovely watercolour maps somewhere on my site. But I had decided that dynamic maps would have been overkill for my archive pages:

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles.

Then I had a thought. What if I keep the static maps on my archive pages, but make them clickable? Then, on the other end of that link, I can have the dynamic version. In other words, what if I had a separate URL just for the dynamic maps?

This seemed like a good plan to me, so while I was travelling by Eurostar—the only way to travel—back from the lovely city of Antwerp where I had been speaking at Full Stack Europe, I started hacking away on making the dynamic maps even more dynamic. After all, now that they were going to have their own pages, I could go all out with any fancy features I wanted.

I kept coming back to my original goal:

I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I found a plug-in for Leaflet.js that animates polylines—thanks, Iván! With a bit of wrangling, I was able to get it to animate between the lat/lon points of whichever archive section the map was in. Rather than have it play out automatically, I also added a control so that you can start and stop the animation. While I was at it, I decided to make that “play/pause” button do something else too. Ahem.

If you’d like to see the maps in action, click the “play” button on any of these maps:

You get the idea. It’s all very silly really. It’s right up there with the time I made my sparklines playable. But that’s kind of the point. It’s my website so I can do whatever I want with it, no matter how silly.

First of all, the research department for adactio.com (that’s me) came up with the idea. Then that had to be sold in to upper management (that’s me too). A team was spun up to handle design and development (consisting of me and me). Finally, the finished result went live thanks to the tireless efforts of the adactio.com ops group (that would be me). Any feedback should be directed at the marketing department (no idea who that is).

Tuesday, October 29th, 2019

Periodic background sync

Yesterday I wrote about how much I’d like to see silent push for the web:

I’d really like silent push for the web—the ability to update a cache with fresh content as soon as it’s published; that would be nifty! At the same time, I understand the concerns. It feels more powerful than other permission-based APIs like notifications.

Today, John Holt Ripley responded on Twitter:

hi there, just read your blog post about Silent Push for the web, and wondering if Periodic Background Sync would cover a few of those use cases?

Periodic background sync looks very interesting indeed!

It’s not the same as silent push. As the name suggests, this is about your service worker waking up periodically and potentially fetching (and caching) fresh content from the network. So the service worker is polling rather than receiving a push. But I’ll take it! It’s definitely close enough for the kind of use-cases I’ve been thinking about.
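At the time of writing this is a proposal rather than a settled standard, so the details may well change, but the shape of the API looks roughly like this (the tag name, interval, and URL are my own invented examples):

// In the page: ask to be woken up at most once a day (subject to permission).
navigator.serviceWorker.ready.then(async (registration) => {
  if ('periodicSync' in registration) {
    await registration.periodicSync.register('refresh-content', {
      minInterval: 24 * 60 * 60 * 1000
    });
  }
});

// In the service worker: wake up, fetch fresh content, put it in a cache.
self.addEventListener('periodicsync', (event) => {
  if (event.tag === 'refresh-content') {
    event.waitUntil(
      caches.open('content').then((cache) => cache.add('/latest'))
    );
  }
});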

Interestingly, periodic background sync also ties into the other part of what I was writing about: permissions. I mentioned that adding a site to the home screen could be interpreted as a signal to potentially allow more permissions (or at least allow prompts for more permissions).

Well, Chromium has a document outlining metrics for attempting to gauge site engagement. There’s some good thinking in there.

Monday, October 28th, 2019

Silent push for the web

After Indie Web Camp in Berlin last year, I wrote about Seb’s nifty demo of push without notifications:

While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Phil Nash left a comment on the Medium copy of my post explaining that Seb’s demo of using the Push API without showing a notification wouldn’t work for long:

The browsers allow a certain number of mistakes(?) before they start to show a generic notification to say that your site sent a push notification without showing a notification. I believe that after ~10 or so notifications, and that’s different between browsers, they run out of patience.
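The nub of the pattern, as I understand it, is a push event handler that updates a cache instead of calling showNotification. Something along these lines (a sketch, not Seb’s actual code, and the URL is invented):

self.addEventListener('push', (event) => {
  event.waitUntil(
    caches.open('pages').then((cache) => cache.add('/latest-article'))
  );
  // No call to self.registration.showNotification() here, and that omission
  // is exactly what browsers will only tolerate a handful of times.
});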

He also provided me with the name to describe what I’m after:

You’re looking for “silent push” as are many others.

Silent push is something that is possible in native apps. It isn’t (yet?) available on the web, presumably because of security concerns.

It’s an API that would be ripe for abuse. I mean, just look at the mess we’ve made with APIs like notifications and geolocation. Sure, they require explicit user opt-in, but these opt-ins are seen so often that users are sick of seeing them. Silent push would be one more permission-based API to add to the stack of annoyances.

Still, I’d really like silent push for the web—the ability to update a cache with fresh content as soon as it’s published; that would be nifty! At the same time, I understand the concerns. It feels more powerful than other permission-based APIs like notifications.

Maybe there could be another layer of permissions. What if adding a site to your home screen was the first step? If a site is running on HTTPS, has a service worker, has a web app manifest, and has been added to the homescreen, maybe then and only then should it be allowed to prompt for permission to do silent push.

In other words, what if certain very powerful APIs were only available to progressive web apps that have successfully been added to the home screen?

Frankly, I’d be happy if the same permissions model applied to web notifications too, but I guess that ship has sailed.

Anyway, all this is pure conjecture on my part. As far as I know, silent push isn’t on the roadmap for any of the browser vendors right now. That’s fair enough. Although it does annoy me that native apps have this capability that web sites don’t.

It used to be that there was a long list of features that only native apps could do, but that list has grown shorter and shorter. The web’s hare is catching up to native’s tortoise.

Monday, October 21st, 2019

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
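
The gist of it looks something like this (a simplified sketch rather than my actual script; the CDN links, tile URL, and map sizing here are all assumptions):

window.addEventListener('load', function () {
  // Find the h-geo microformats (latitude and longitude pairs) on the page.
  var geos = document.querySelectorAll('.h-geo');
  if (geos.length < 3) return;
  var points = Array.prototype.map.call(geos, function (geo) {
    return [
      parseFloat(geo.querySelector('.p-latitude').textContent),
      parseFloat(geo.querySelector('.p-longitude').textContent)
    ];
  });
  // Lazy-load the Leaflet stylesheet and script.
  var stylesheet = document.createElement('link');
  stylesheet.rel = 'stylesheet';
  stylesheet.href = 'https://unpkg.com/leaflet/dist/leaflet.css';
  document.head.appendChild(stylesheet);
  var script = document.createElement('script');
  script.src = 'https://unpkg.com/leaflet/dist/leaflet.js';
  script.onload = function () {
    // Draw a map with a polyline joining the points in order.
    var container = document.createElement('div');
    container.style.height = '20em';
    document.body.appendChild(container);
    var map = L.map(container);
    L.tileLayer('https://stamen-tiles.a.ssl.fastly.net/watercolor/{z}/{x}/{y}.jpg').addTo(map);
    L.polyline(points, { color: 'steelblue' }).addTo(map);
    map.fitBounds(points);
  };
  document.head.appendChild(script);
});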

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.
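
For the curious, the encoding boils down to something like this JavaScript sketch of the algorithm (not the PHP I actually used); note that each coordinate gets encoded as the difference from the previous point:

// Encode one value (lat or lon) as a delta from the previous point.
function encodeValue(current, previous) {
  // Steps 1-2: take the signed value and multiply it by 1e5.
  var num = Math.round(current * 1e5) - Math.round(previous * 1e5);
  // Steps 3-5: left-shift one bit; invert if the original value was negative.
  num = num < 0 ? ~(num << 1) : (num << 1);
  var output = '';
  // Steps 6-11: break into 5-bit chunks in reverse order, OR with 0x20
  // while more chunks follow, add 63, and convert to ASCII.
  while (num >= 0x20) {
    output += String.fromCharCode((0x20 | (num & 0x1f)) + 63);
    num >>= 5;
  }
  return output + String.fromCharCode(num + 63);
}

function encodePolyline(points) {
  var result = '';
  var previousLat = 0;
  var previousLon = 0;
  points.forEach(function (point) {
    result += encodeValue(point[0], previousLat) + encodeValue(point[1], previousLon);
    previousLat = point[0];
    previousLon = point[1];
  });
  return result;
}

// encodePolyline([[38.5, -120.2], [40.7, -120.95]]) produces a string you
// can drop straight into a static map URL.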

I’ve put these maps on any of the archive pages that also have calendar heat maps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

Friday, October 18th, 2019

Web talk

At the start of this month I was in Amsterdam for a series of back-to-back events: Indie Web Camp Amsterdam, View Source, and Fronteers. That last one was where Remy and I debuted a talk we’d been working on.

The Fronteers folk have been quick off the mark so the video is already available. I’ve also published the text of the talk here:

How We Built The World Wide Web In Five Days

This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about half way through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.

At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.

But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.

I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.

There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.

We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:

It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.

In the meantime, you can read the text, watch the video, or look at the slides (but the slides really don’t make much sense in isolation).

Wednesday, October 16th, 2019

How We Built The World Wide Web In Five Days

This talk about recreating the first ever web browser was a joint presentation with Remy Sharp, delivered at the Fronteers conference in Amsterdam in October 2019.

Jeremy

Our story begins with the Big Bang.

13.8 billion years ago

This sets a chain of events in motion that gives us elementary particles, then more complex particles like atoms, which form stars and planets, including our own, on which life evolves, which brings us to the recent past when this whole process results in the universe generating a way of looking at itself: physicists.

A physicist is the atom’s way of knowing about atoms.

—George Wald

By the end of World War Two, physicists in Europe were in short supply. If they hadn’t already fled during Hitler’s rise to power, they were now being actively wooed away to the United States.

64 years ago

To counteract this brain drain, a coalition of countries forms the European Organization for Nuclear Research, or to use its French acronym, CERN.

They get some land in a suburb of Geneva on the border between Switzerland and France, where they set about smashing particles together and recreating the conditions that existed at the birth of the universe.

The Synchrocyclotron.

Every year, CERN is host to thousands of scientists who come to run their experiments.

Remy

Fast forward to February 2019: a group of nine of us were invited to CERN as an elite group of hackers to recreate a different experiment.

The group.

We are there to recreate a piece of software first published 30 years ago. Given this goal, we need to answer some important questions first:

  • How does this software look and feel?
  • How does it work?
  • How do you interact with it?
  • How does it behave?

The software is so old that it doesn’t run on any modern machines, so we have a NeXT machine specially shipped from the nearby museum. This is no ordinary machine. It was one of only two NeXT machines that existed at CERN in the late 80s.

Now we have the machine to run this special software.

By some fluke the good people of the web have captured several different versions of this software and published them on GitHub.

So we select the oldest version we can find. We download it from GitHub to our computers. Now we have to transfer it to the NeXT machine.

Except there’s no USB drive. It didn’t exist. CD ROM? Floppy drive? The NeXT computer had a “floptical drive”—bespoke to NeXT computers—all very well, but in 2019 we don’t have those drives.

To transfer the software from our machines to the NeXT machine, we needed to use the network.

Jeremy

62 years ago

In 1957, J.C.R. Licklider was the first person to publicly demonstrate the idea of time sharing: linking one computer to another.

56 years ago

Six years later, he expanded on the idea in a memo that described an Intergalactic Computer Network.

By this time, he was working at ARPA: the Department of Defense’s Advanced Research Projects Agency. They were very interested in the idea of linking computers together, for very practical reasons.

America’s military communications had a top-down command-and-control structure. That was a single point of failure. One pre-emptive strike and it’s game over.

The solution was to create a decentralised network of computers that used Paul Baran’s brilliant idea of packet switching to move information around the network without any central authority.

This idea led to the creation of the ARPANET. Initially it connected a few universities. The ARPANET grew until it wasn’t just computers at each endpoint; it was entire networks. It was turning into a network of networks …an internetwork, or internet, for short. In order for these networks to play nicely with one another, they needed to agree on using the same set of protocols for packet switching.

Bob Kahn and Vint Cerf crafted the simplest possible set of low-level protocols: the Transmission Control Protocol and the Internet Protocol. TCP/IP.

TCP/IP is deliberately dumb. It doesn’t care about the contents of the packets of data being passed around the internet. People were then free to create more task-specific protocols to sit on top of TCP/IP.

There are protocols specifically for email, for example. Gopher is another example of a bespoke protocol. And there’s the File Transfer Protocol, or FTP.

Remy

Back in our war room in 2019, we finally work out that we can use FTP to get the software across. FTP is an arcane protocol, but we can agree that it will work across the two eras.

We do have to manually install FTP servers onto our machines, though; FTP doesn’t ship with new machines because it’s generally considered insecure.

Now we finally have the software installed on the NeXT computer and we’re able to run the application.

We double-click the shaded-looking, partly hand-drawn icon with a lightning bolt on it, and we wait…

Once the software’s finally running, we’re able to see that it looks a bit like an ancient word processor. We can read, edit and open documents. There are some basic styles and lots of heavy margins. There’s a super weird menu navigation in place.

But there’s something different about this software. Something that makes this more than just a word processor.

These documents, they have links…

Jeremy

Ted Nelson is fond of coining neologisms. You can thank him for words like “intertwingled” and “teledildonics”.

56 years ago

He also coined the word “hypertext” in 1963. It is defined by what it is not.

Hypertext is text which is not constrained to be linear.

Ever played a “choose your own adventure” book? That’s hypertext. You can jump from one point in the book to a different point that has its own unique identifier.

The idea of hypertext predates the word. In 1945, Vannevar Bush published a visionary article in The Atlantic Monthly called As We May Think.

He imagines a mechanical device built into a desk that can summon reams of information stored on microfilm, allowing the user to create “associative trails” as they make connections between different concepts. He calls it the Memex.

Memex

Also in 1945, a young American named Douglas Engelbart has been drafted into the navy and is shipping out to the Pacific to fight against Japan. Literally as the ship is leaving the harbour, word comes through that the war is over. He still gets shipped out to the Philippines, but now he’s spending his time lounging in a hut reading magazines. That’s how he comes to read Vannevar Bush’s Memex article, which lodges in his brain.

51 years ago

Douglas Engelbart decides to dedicate his life to building the computer equivalent of the Memex.

On December 9th, 1968, he unveils his oNLine System—NLS—in a public demonstration. Not only does he have a working implementation of hypertext, he also shows collaborative real-time editing, windows, graphics, and oh yeah—for this demo, he invents the mouse.

It truly is The Mother of All Demos.

Douglas Engelbart has a posse.

39 years ago

There were a number of other attempts at creating hypertext systems. In 1980, a young computer scientist named Tim Berners-Lee found himself working at CERN, where scientists were having a heck of a time just keeping track of information.

He created a system somewhat like Apple’s HyperCard, but with clickable links. He named it ENQUIRE, after a Victorian book of manners called Enquire Within Upon Everything.

ENQUIRE didn’t work out, but Tim Berners-Lee didn’t give up on the problem of managing information at CERN. He thinks about all the work done before: Vannevar Bush’s Memex; Ted Nelson’s Xanadu project; Douglas Engelbart’s oNLine System.

A lot of hypertext ideas really are similar to a choose-your-own-adventure: jumping around from point to point within a book. But what if, instead of imagining a hypertext book, we could have a hypertext library? Then you could jump from one point in a book to a different point in a different book in a completely different part of the library.

The Library Of Babel

In other words, what if you took the world of hypertext and the world of networks, and you smashed them together?

30 years ago

On the 12th of March, 1989, Tim Berners-Lee circulates the first draft of a document titled Information Management: A Proposal.

The diagrams are incomprehensible. But his supervisor at CERN, Mike Sendall, sees the potential. He reads the proposal and scrawls these words across the top: “vague, but exciting.”

Tim Berners-Lee gets the go-ahead to spend some time on this project. And he gets the budget for a nice shiny NeXT machine. With the support of his colleague Robert Cailliau, Berners-Lee sets about making his theoretical project a reality. They kick around a few ideas for the name.

They thought of calling it The Mesh. They thought of calling it The Information Mine, but Tim rejected that, knowing that whatever they called it, the words would be abbreviated to letters, and The Information Mine would’ve been shortened to TIM, which would’ve seemed quite egotistical.

So, even though it’s only going to exist on one single computer to begin with, and even though the letters of the abbreviation take longer to say than the words being abbreviated, they call it …the World Wide Web.

As Robert Cailliau told us, they were thinking “Well, we can always change it later.”

Tim Berners-Lee brainstorms a new protocol for hypertext called the HyperText Transfer Protocol—HTTP.

He thinks about a format for hypertext called the Hypertext Markup Language—HTML.

He comes up with an addressing scheme that uses Unique Document Identifiers—UDIs, later renamed to URIs, and later renamed again to URLs.

But he needs to put it all together into running code. And so Tim Berners-Lee sets about writing a piece of software…

Remy

At that stage, 30 years ago, Tim Berners-Lee’s document is just a proposal. It’s just theory. So he needs to build a prototype to actually demonstrate how the World Wide Web would work.

The NeXT computer is the perfect ground for rapid software development because the NeXT operating system ships with a program called NSBuilder.

NeXT

NSBuilder is software to build software. In fact, the “NS” (meaning NeXTSTEP) can be found in existing software today: you’ll find references to NSText in Safari and in Mac developer documentation.

Tim Berners-Lee, using NSBuilder, was able to create a working prototype of this software in just six weeks.

He called it: WorldWideWeb.

We finally have the software working the way it ran 30 years ago.

But our project is to replicate this browser so that you can try it out, and see how web pages look through the lens of 1990.

So we enter some of our blog URLs: https://remysharp.com, https://adactio.com

But HTTPS doesn’t work. Back then there was no HTTPS. There was no HTTP/2. HTTP/1.0 hadn’t even been invented.

So I make a proxy: effectively a monster-in-the-middle attack on all web requests, stripping the SSL layer and then returning the HTML over the HTTP 0.9 protocol.
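
A rough sketch of the idea in Node.js (not the actual proxy; the port number and the path-to-URL convention here are made up). An HTTP 0.9 request is just a single GET line with no headers, and the response is just the HTML, with no status line or headers at all:

var net = require('net');
var https = require('https');

net.createServer(function (socket) {
  socket.once('data', function (chunk) {
    // An HTTP 0.9 request is just "GET /path": no version, no headers.
    var path = chunk.toString().split(/\s+/)[1] || '/';
    // Made-up convention: the path holds the real site's host and path,
    // e.g. "GET /adactio.com/journal".
    var url = 'https://' + path.replace(/^\//, '');
    https.get(url, function (response) {
      // An HTTP 0.9 response is just the body: no status line, no headers.
      response.pipe(socket);
    }).on('error', function () {
      socket.end();
    });
  });
}).listen(8080);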

And finally, we see…

We see junk.

We can see the text content of the website, but there are a lot of HTML junk tags being spat out onto the screen, particularly at the start of the document.

Jeremy

<h1> <h2> <h3> <h4> <h5> <h6> 
<ol> <ul> <li> <p>

These tags are probably very familiar to you. You recognise this language, right?

That’s right. It’s SGML.

SGML is the successor to GML, which supposedly stands for Generalised Markup Language. But that may well be a backronym. The format was created by Goldfarb, Mosher, and Lorie: G, M, L.

SGML is supposed to be short for Standard Generalised Markup Language.

A flavour of SGML was already being used at CERN when Tim Berners-Lee was working on his World Wide Web project. Rather than create a whole new format from scratch, he repurposed what people were already familiar with. This was his HyperText Markup Language, HTML.

One thing he did add was a tag called A for anchor.

Its href attribute is short for “hypertext reference”. Plop a URL in there and you’ve got a link.

The hypertext community thought this was a terrible way to make links.

They believed that two-way linking was vital. With two-way linking, the linked resource connects back to where the link originates. So if the linked resource moved, the link would stay intact.

That’s not the case with the World Wide Web. If the linked resource moves, the link is broken.

Perhaps you’ve experienced broken links?

When Tim Berners-Lee wrote the code for his WorldWideWeb browser, there was a grand total of 26 tags in HTML. I know that we’d refer to them as elements today, but that term wasn’t being used back then.

Now there are well over 100 elements in HTML. The reason why the language has been able to expand so much is down to the way web browsers today treat unknown elements: ignore any opening and closing tags you don’t recognise and only render the text in between them.
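
You can see that resilience for yourself (a quick sketch; the tag name is deliberately made up):

// The parser doesn't recognise this tag, so it creates a generic
// HTMLUnknownElement and simply renders the text inside it.
var wrapper = document.createElement('div');
wrapper.innerHTML = '<madeuptag>This text still shows up on the page.</madeuptag>';
console.log(wrapper.firstChild instanceof HTMLUnknownElement); // true
document.body.appendChild(wrapper);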

Remy

The parsing algorithm was brittle (when compared to modern parsers). There’s no DOM tree being built up. Indeed, the DOM didn’t exist.

Remember that WorldWideWeb was a browser that effectively smooshed together a word processor and network requests; the styling method was based (mostly) around adding margins as the tags were parsed.

Kimberly Blessing was digging through the original 7344 lines of code for the WorldWideWeb source. She found the code that could explain why we were seeing junk.

<link rel="..."

In this case, when the parser encountered <link rel="…" it would see the <.

<

“Yes, a tag; let’s slurp it up”.

<li

Then it reads li and the parser is thinking, “This looks like a list item, good stuff.”

<lin

Then it encounters the n (of link) and, to be fair to the parsing algorithm (it was the first of its kind), it aborts the style it was about to apply and promptly spits out the rest of the content on screen, having already swallowed up the first four characters: <lin.

k rel="stylesheet" href="...">

With that, we made the executive design decision to strip out any elements that were unknown to the original WorldWideWeb browser — link, script, video and img — because, of course, there was no image support in the world’s first browser.

This is the first little cheat we applied, so that the page would be more pleasing to you, the visitor of our emulator. Otherwise you’d be presented with a lot of scary looking junk.

So now we have all the reference material we need to be able to replicate this browser:

  • The machine running the original operating system, which gives us colours, fonts, menus and so on.
  • The browser itself, how windows behave, what’s in the menus, what makes the experience unique to that period of time.
  • And finally how it looks when we visit URLs.

So off we go.

🤯

Jeremy

While Remy set about recreating the functionality of the WorldWideWeb browser, Angela was recreating the user interface using CSS.

Inputs. Buttons. Icons. Menus. All with the exact borders, highlights and shadows used in the UI of the NeXT operating system, including having the scrollbar on the left side of windows.

Meanwhile the rest of us were putting together an explanatory website to give some backstory to what we were doing. I spent most of my time working on a timeline showing thirty years before and thirty years after the original proposal for the web.

Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

The WorldWideWeb browser inherited fonts from the NeXTSTEP operating system. It mostly used Helvetica and a font called Ohlfs (created by Keith Ohlfs). Helvetica is ubiquitous but Ohlfs was never seen outside of a NeXT machine.

Our teammates Mark and Brian were obsessed with accurately recreating the typography. We couldn’t use modern fonts, which are vector-based. We needed pixeliness.

So Mark and Brian took a screenshot of the NeXT machine’s alphabet. With help from afar from font designer David Jonathan Ross, they traced each square pixel in a vector program and then exported that as a web font. Now we’ve got a web font that deliberately isn’t anti-aliased. It’s a vector format that recreates the look of a bitmap.

Put the pixelly font together with the CSS interface elements and you’ve got something that really looks like the old WorldWideWeb programme.

Remy

This is the final product of our work at CERN that week: a fully working WorldWideWeb emulator giving a reasonably close experience of what it was like to surf the web as if it were 30 years ago.

This is entirely in the browser and was written using:

  • React,
  • React Draggable for the windows and menus,
  • React Hotkeys for keyboard combo shortcuts (we replicated the original OS as much as we could),
  • idb-keyval for some local storage,
  • Parcel for bundling.

These tools weren’t chosen because they were necessarily the best tools for the job, but rather because they were the tools I knew well enough to help speed up my development process.

We worked hard to replicate the look and feel as much as we could. We even replicated typos found throughout the WorldWideWeb app:

An excercise in global information availability

Why don’t we see how it looks…

Jeremy

There’s a kind of irony in this, in that the emulator relies heavily on JavaScript. In fact, there’s nothing there other than JavaScript. But of course the WorldWideWeb browser couldn’t deal with JavaScript—JavaScript hadn’t been invented yet. So the one URL that definitely wouldn’t work in this emulator is …the emulator itself.

Remy

(Which Jeremy was blaming me for.)

This is what you see when you visit the WorldWideWeb browser for the first time. We can see we are welcomed by the universe of hypertext. We’ve got these menus over here that you can drag off and open panels (I always thought this was an ordering bug but the operating system actually works like this).

We’ll go ahead and open the Fronteers website. I go to “Document” and then I go to “Open from full document reference” (because the word URL didn’t exist). I’m going to pop the Fronteers URL in here. And there it is. We’ve got the Fronteers website. Looks pretty good. (One of my favourite UI bits is this scrollbar on the left hand side instead of the right.)

We can follow the links. Actually one of my favourite features that was in this original browser that we replicated was this “Navigate” menu. I’ve just opened the first link in the document, but I can click on “Next”, and “Next” a bunch of times and it will cycle through each one of the links on the page that I launched from and let me read all the pages that the Fronteers site links to (which I really like). I can go backwards and forwards, and so on.

One thing you might have already noticed is that there are no URLs here. And in fact, view source was considered a kind of diagnostic option, and it was very, very tucked away. The reason for this is that URLs—and the source HTML or SGML—were considered ugly and potentially a bad user experience.

But there’s one thing about navigating here that’s different. To open this link, I had to double-click.

Jeremy

The WorldWideWeb browser was more of a prototype than anything else. It demonstrated the potential of the World Wide Web project, but it only worked on NeXT machines.

To show how the World Wide Web could work on any computer, the second ever web browser was the Line Mode Browser, coded by Nicola Pellow. It had a very basic text interface—no clicking on links—but it could be installed anywhere.

Lots of other geeks and nerds were working on their own web browsers, but it was Marc Andreessen’s Mosaic browser that really blew the doors open for the web. It had a nice usable interface, and it (unilaterally) introduced the innovation of images on the web.

Andreessen went on to found Netscape. The World Wide Web took off at an unprecedented rate. Microsoft brought out their Internet Explorer browser and started trying to catch up with Netscape. We had the browser wars. Later we got even more browsers, like Safari and Chrome, while Netscape morphed into Firefox and Internet Explorer morphed into Edge. And the rest is history.

But all of these browsers were missing something that was in the original WorldWideWeb browser.

Remy

The reason I have to double-click on these links is that, when I do a single click, it actually places the cursor. The cursor is blinking there on “Fronteers.” And the reason I can place the cursor is because I can edit the document.

I see Fronteers here is missing a heading. We want to welcome you all:

Welkom

We want to make that a heading. Let’s style that. It’s a heading.

So the browser was meant to edit documents. Let’s put a bit of text here:

Great talks from Remy and Jeremy

(forget about everyone else). Now if I want to create a link, I’ll go ahead and navigate to Jeremy’s site, https://adactio.com. I’m going to do “Link”, then “Mark all”, which is a way of copying the URL to that window. Then I go back to the Fronteers website, select “Jeremy”, and then do “Link to marked.” I can double-click on Jeremy’s name and it will open up his website.

I can save this document as well. I’m going to call it fronteers.html.

Let’s do a hard reboot—a browser refresh. I come back to my machine a couple of days later, “Ah, the Fronteers page!”. I’m going to open that again, and it linked to that really handsome guy in the sprite shirt. And yes, the links still work.

In fact, this documentation that you see when the WorldWideWeb browser launches was written, styled, and linked using the WorldWideWeb browser. The WorldWideWeb browser was for a web that you could read and write.

But this didn’t survive. It was a hurdle that was too tricky to propose or implement across the different types of servers that existed, and for the upcoming browsers that were on the horizon.

And so it wasn’t standardised and doesn’t exist today.

But this is an important lesson from the time: reducing complexity increases the chances of mass adoption.

In the end, simplicity wins.

Jeremy

I think that’s a pattern we see over and over again, not just in the history of the web, but before the web. Simplicity wins.

Ted Nelson famously to this day thinks that the World Wide Web is weak sauce. It didn’t try to solve complex problems right out of the gate, like handling micro-payments.

As we saw, the hypertext community thought that one-way linking was ridiculous. But simplicity does win out.

Unfortunately that’s why browsers ended up just being browsers. We got some of the functionality back with wikis, content management systems, and social media to a certain extent. But I think it’s still a bit of a shame that when I want to browse a web page, I’m using one piece of software—the browser—but when I want to make a web page, I’m using another piece of software (or multiple pieces of software) to get something on to the web.

I feel like we lost something.

Remy

We head home after a week of hacking.

We were all invited back in March of this year for the Web@30 event, which took place to celebrate the web and also Sir Tim Berners-Lee.

A NeXT machine from 1989 running the WorldWideWeb browser and my laptop in 2019 running https://worldwideweb.cern.ch

A few of us, Jeremy, Martin, and myself, went back to CERN for the first leg of the event. There was even a video showing off our work as part of the main conference. Jeremy and I even chased Tim Berners-Lee back to London, to the Science Museum, like obsessive web fanboys. It was a lot of fun!

The night before I got a message from Jean-François Groff, pictured here on the right. JF Groff joined Tim Berners-Lee 30 years ago and created libwww (a precursor to libcurl).

The message read:

Sitting with Tim right now. He loves your browser!

Crushed it.

It’s amazing that we were able to pull this off in a week just with text editors and information that’s freely available. It’s mind boggling how much we can do today and how far it can reach. And it all started on that NeXTSTEP machine 30 years ago.

What I really loved about this project was working with this brilliantly old technology, digging around at the birth of browsers and the web.

I wouldn’t be stood here today, if it weren’t for the web.

I wouldn’t even know Jeremy, if it weren’t for the web.

I wouldn’t have a career, if it weren’t for the web.

I loved seeing how such old technology, the original WorldWideWeb browser, was still able to render my blog. Because I put content first and delivered markup from the server, the page rendered. HTML really is backward compatible.

HTML and HTTP are just text. Nothing terribly fancy. Dare I say, beautifully simple, and as we said before, simplicity wins the day.

This same simplicity is what allows us all to have the chance for an equal voice. The web allows us to freely publish our thoughts and experiences. We have to fight to protect that kind of web.

And we’ve got to work at keeping it simple.

Jeremy

When we returned to CERN for the 30th anniversary celebrations, one of the other people there was the journalist Zeynep Tüfekçi.

When @Zeynep met NeXT.

Lou, Zeynep, Tim, Robert, and Jean-François. #Web30

She was on a panel along with Tim Berners-Lee, Robert Cailliau, Jean-François Groff, and Lou Montulli. At the end of the panel discussion, she was asked:

What would you tell the next generation about how to use this wonderful tool?

She replied:

If you have something wonderful, if you do not defend it, you will lose it.

If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.

Defend the simplicity and resilience that’s so central to the web.

I don’t know about you, but I often feel that just trying to make a web page has become far too complicated. But this is complexity that we have chosen with our tools, processes, and assumptions. We’ve buried the magic. The magic of linking web pages together. The magic of a working global hypertext system, where nobody needs to ask for permission to publish.

Tim Berners-Lee prototyped the first web browser, but the subsequent world wide web wasn’t created by any one person. It was created by everyone. That. Is. Magical.

I don’t want the web to become a place where only an elite priesthood get to experience the magic of creation. I’m going to fight to defend the openness of the world wide web. This is for everyone. Not just for everyone to use; it’s for everyone to create.

The Web Share API in Safari on iOS

I implemented the Web Share API over on The Session back when it was first available in Chrome on Android. It’s a nifty and quite straightforward API that allows websites to make use of the “sharing drawer” that mobile operating systems provide from within a web browser.

I already had sharing buttons that popped open links to Twitter, Facebook, and email. You can see these sharing buttons on individual pages for tunes, recordings, sessions, and so on.

I was already intercepting clicks on those buttons. I didn’t have to add too much to also check for support for the Web Share API and trigger that instead:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      text: document.querySelector('meta[name="description"]').getAttribute('content'),
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

That worked a treat. As you can see, there are three fields you can pass to the share() method: title, text, and url. You don’t have to provide all three.

Earlier this year, Safari on iOS shipped support for the Web Share API. I didn’t need to do anything. ‘Cause that’s how standards work. You can make use of APIs before every browser supports them, and then your website gets better and better as more and more browsers add support.

But I recently discovered something interesting about the iOS implementation.

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, Airdrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.

That’s a shame. Personally, I think the url field should take precedence. But I don’t think this is a bug, per se. There’s nothing in the spec to say how operating systems should handle the data sent via the Web Share API. Still, I think it’s a bit counterintuitive. If I’m looking at a web page, and I opt to share it, then surely the URL is the most important piece of data?

I’m not even sure where to direct this feedback. I guess it’s under the purview of the Safari team, but it also touches on OS-level interactions. Either way, I hope that somebody at Apple will consider changing the current behaviour for copying Web Share data to the clipboard.

In the meantime, I’ve decided to update my code to remove the text parameter:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

If the behaviour of Safari on iOS changes, I’ll reinstate the missing field.

By the way, if you’re making progressive web apps that have display: standalone in the web app manifest, please consider using the Web Share API. When you remove the browser chrome, you’re removing the ability for users to easily share URLs. The Web Share API gives you a way to reinstate that functionality.

Monday, October 14th, 2019

Something for the weekend

Your weekends are valuable. Spend them wisely. I have some suggestions on how you might spend next weekend, October 19th and 20th, depending on where you are in the world.

If you’re in the Bay Area, or anywhere near San Francisco, I highly recommend that you go to Science Hack Day—two days of science, hacking, and fun. This will be the last one in San Francisco so don’t miss your chance.

If you’re in the south of England, or anywhere near Brighton, come along to Indie Web Camp. Saturday will feature discussions on owning your data. Sunday will be a day of doing. I’ve written about previous Indie Web Camps before, and I really can’t recommend it highly enough!

Do me a favour and register for a spot—it’s free—so I’ve got some idea of numbers. Looking forward to seeing you there!