Tags: dev

Thursday, January 16th, 2020

Thinking about the past, present, and future of web development – Baldur Bjarnason

The divide between what you read in developer social media and what you see on web dev websites, blogs, and actual practice has never in my recollection been this wide. I’ve never before seen web dev social media and forum discourse so dominated by the US west coast enterprise tech company bubble, and I’ve been doing this for a couple of decades now.

Baldur is really putting his finger on the perception gap in web dev.

Web dev driven by npm packages, frameworks, and bundling is to the field of web design what Java and C# in the 2010s were to web servers. If you work in enterprise software it’s all you can see. Web developers working on CMS themes (or on Rails-based projects) using jQuery and plain old JS—maybe with a couple of libraries imported directly via a script tag—are the unseen dark matter of the web dev community.

Monday, January 13th, 2020

Smaller HTML Payloads with Service Workers — Philip Walton

This is a great progressive enhancement for performance that uses a service worker to combine reusable bits of a page with fresh content. The numbers are very convincing!

Alas, the code is using the Workbox library, but figuring out the vanilla code to write shouldn’t be too tricky seeing as Philip talks through his logic step by step.
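
As a rough illustration, a vanilla version of the idea might look something like this. To be clear, this is a sketch rather than Philip’s code: the shell file names and the ?partial query string are made-up conventions, and the shell fragments are assumed to have been cached when the service worker was installed.

```javascript
// A sketch only: the shell files and the "?partial" query string are
// made-up conventions, and the shell fragments are assumed to have
// been cached at install time.
self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;

  const url = new URL(event.request.url);

  event.respondWith((async () => {
    const cache = await caches.open('shell-v1');
    const parts = [
      cache.match('/shell-start.html'),   // cached top of every page
      fetch(url.pathname + '?partial'),   // fresh content from the network
      cache.match('/shell-end.html')      // cached bottom of every page
    ];

    // Stitch the three responses into one streamed HTML response, so
    // the cached header can render before the network has replied.
    const stream = new ReadableStream({
      async start(controller) {
        for (const partPromise of parts) {
          const part = await partPromise;
          const reader = part.body.getReader();
          let chunk;
          while (!(chunk = await reader.read()).done) {
            controller.enqueue(chunk.value);
          }
        }
        controller.close();
      }
    });

    return new Response(stream, {
      headers: { 'Content-Type': 'text/html' }
    });
  })());
});
```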

Thursday, January 9th, 2020

Adding Response Metadata to Cache API Explainer by Aaron Gustafson and Jungkee Song

This is a great proposal that would make the Cache API even more powerful by adding metadata to cached items, like when it was cached, how big it is, and how many times it’s been retrieved.

Tuesday, January 7th, 2020

[css-grid-2] Masonry layout · Issue #4650 · w3c/csswg-drafts

This is an interesting-looking proposal for CSS grid to be ever so slightly extended to enable Masonry-style auto placement—something that’s tantalisingly close right now, but still requires some JavaScript to do calculations.

Monday, January 6th, 2020

Browser defaults

I’ve been thinking about some of the default behaviours that are built into web browsers.

First off, there’s the decision that a browser makes if you enter a web address without a protocol. Let’s say you type in example.com without specifying whether you’re looking for http://example.com or https://example.com.

Browsers default to HTTP rather than HTTPS. Given that HTTP is older than HTTPS, that makes sense. But given that there’s been such a push for TLS on the web, and the huge increase in sites served over HTTPS, I wonder if it’s time to reconsider that default?

Most websites that are served over HTTPS have an automatic redirect from HTTP to HTTPS (enforced with HSTS). There’s an ever so slight performance hit from that, at least for the very first visit. If, when no protocol is specified, browsers were to attempt to reach the HTTPS port first, we’d get a little bit of a speed improvement.
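
To make that concrete, here’s a minimal sketch of that redirect dance in Node; real sites would normally configure this in the web server or CDN rather than in application code, and the certificate paths are placeholders.

```javascript
// A minimal sketch in Node; real sites usually do this in
// nginx/Apache/CDN config rather than application code.
const http = require('http');
const https = require('https');
const fs = require('fs');

// Port 80: redirect every HTTP request to its HTTPS equivalent.
http.createServer((request, response) => {
  response.writeHead(301, {
    Location: 'https://' + request.headers.host + request.url
  });
  response.end();
}).listen(80);

// Port 443: serve the site, and send the HSTS header so the browser
// skips the HTTP hop entirely on subsequent visits. (Browsers ignore
// HSTS on plain HTTP responses, so it has to be sent here, over TLS.
// The certificate paths are placeholders.)
https.createServer({
  key: fs.readFileSync('/path/to/key.pem'),
  cert: fs.readFileSync('/path/to/cert.pem')
}, (request, response) => {
  response.writeHead(200, {
    'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
    'Content-Type': 'text/html'
  });
  response.end('<h1>Hello over HTTPS</h1>');
}).listen(443);
```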

But would that break any existing behaviour? I don’t know. I guess there would be a bit of a performance hit in the other direction. That is, the browser would try HTTPS first, and when that doesn’t exist, go for HTTP. Sites served only over HTTP would suffer that little bit of lag.

Whatever the default behaviour, some sites are going to pay that performance penalty. Right now it’s being paid by sites that are served over HTTPS.

Here’s another browser default that Rob mentioned recently: the viewport meta tag:

I thought I might be able to get away with omitting meta name="viewport". Apparently not! Maybe someday.

This all goes back to the default behaviour of Mobile Safari when the iPhone was first released. Most sites wouldn’t display correctly if one pixel were treated as one pixel. That’s because most sites were built with the assumption that they would be viewed on monitors rather than phones. Only weirdos like me were building sites without that assumption.

So the default behaviour in Mobile Safari is to assume a page width of 1024 pixels, and then shrink that down to fit on the screen …unless the developer overrides that behaviour with a viewport meta tag. That default behaviour was adopted by other mobile browsers. I think it’s a universal default.
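
For reference, this is the markup in question: one line of HTML that opts a site out of that desktop-width assumption.

```html
<!-- Without this, mobile browsers assume a desktop-sized page width
     and scale the whole thing down to fit the screen. -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```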

But the web has changed since the iPhone was released in 2007. Responsive design has swept the web. What would happen if mobile browsers were to assume width=device-width?

The viewport meta element always felt like a (proprietary) band-aid rather than a long-term solution—for one thing, it’s the kind of presentational information that belongs in CSS rather than HTML. It would be nice if we could bid it farewell.

Thursday, January 2nd, 2020

Making a ‘post-it game’ PWA with mobile accelerometer API’s | Trys Mudford

I made an offhand remark at the Clearleft Christmas party and Trys ran with it…

Monday, December 30th, 2019

How creating a Progressive Web App has made our website better for people and planet

Creating a PWA has saved a lot of kilobytes after the initial load by storing files on the device to reuse on subsequent requests – this in turn lowers the load time and carbon footprint on subsequent page views, making the website better for both people and planet. We’ve also enabled offline access, which significantly improves user experience for people in areas with patchy connections, such as mobile users on their commute.

Sunday, December 29th, 2019

Move Fast & Don’t Break Things | Filament Group, Inc.

This is the transcript of a brilliant presentation by Scott—read the whole thing! It starts with a much-needed history lesson that gets to where we are now with the dismal state of performance on the web, and then gives a whole truckload of handy tips and tricks for improving performance when it comes to styles, scripts, images, fonts, and just about everything on the front end.

Essential!

Sunday, December 22nd, 2019

This Page is Designed to Last: A Manifesto for Preserving Content on the Web

Geocities, LiveJournal, what.cd, now Yahoo Groups. One day, Medium, Twitter, and even hosting services like GitHub Pages will be plundered then discarded when they can no longer grow or cannot find a working business model.

Considering the needs of someone who wants to make and maintain a website, without the ridiculous complexity of “modern” web tooling:

How do we make web content that can last and be maintained for at least 10 years? As someone studying human-computer interaction, I naturally think of the stakeholders we aren’t supporting. Right now putting up web content is optimized for either the professional web developer (who use the latest frameworks and workflows) or the non-tech savvy user (who use a platform).

Friday, December 20th, 2019

The Layers Of The Web

The opening presentation from the Beyond Tellerrand conference held in Berlin in November 2019.

Guten Morgen. All right. I’m just going to get started because I’ve got a lot to talk about and I’m very, very excited to be here.

I’m excited to talk about the web. I’ve been thinking a lot about the web. You know, I think a lot about the web all the time, but this year, in particular, thinking about where the web came from; asking myself where the web came from, which is kind of a dumb question because it’s pretty obvious where the web came from.

It came from this guy. This is Tim Berners-Lee and he is the creator of the World Wide Web. It was 30 years ago, March 1989, that he wrote a proposal while he was at CERN, a very dull-looking proposal called “Information Management: A Proposal” that had incomprehensible diagrams trying to explain what he had in mind. But a supervisor, Mike Sendall, saw the potential and scrawled across the top, “Vague but exciting.”

Tim Berners-Lee starts working on this idea he has for a global hypertext system, and he starts creating the world’s first web browser and the world’s first web server on this NeXT machine, which is in the Science Museum in London; a lovely machine, the NeXT box.

I was in the neighbourhood so I just had to come by and say hello…

I have a great affection for it because, earlier this year, I was very honored to be invited to CERN, along with this bunch of hackers, to take part in a project related to the 30th anniversary of that proposal. I will show you a video that explains the project.

So, we came to CERN this week in order to create some sort of modern-day interpretation of the very first web browser.

—Kimberly Blessing

Well, the project is to restore the first browser which was developed by the inventor of the Web, and the idea is to create an experience for the people who could not use the web in its early days to have an idea how it felt to use the web at that time.

—Martin Akolo Chiteri

I think the biggest difficulty was to make the browser work in the NeXT machine that we had.

—Angela Ricci

We really needed to work with an original NeXT box in order to really understand what that experience was like in order to be able to write some code and replicate that experience.

—Kimberly Blessing

My role is code, so generating the code to create the interactive aspect of the World Wide Web browser, the recreated browser. It’s very much writing JavaScript to kind of create all the NeXT operating system UI, making requests to servers to go and get the HTML and massage the HTML back into a format that looks good in the World Wide Web browser; and making sure we end up with a URL that goes into production that someone can visit and see their own webpages. The tangible software is what I’m responsible for, so I have to make sure it all gets done. Otherwise, we have no browser to look at, basically.

—Remy Sharp

We got together a few years back to do a similar sort of hack project here at CERN which was creating the world’s second-ever web browser, which was the Line Mode browser. We had a lot of fun with it and it’s a great bunch of people from all over the world. It’s been really great to get back together and it’s always amazing to be here at CERN, to be at not just the birthplace of the Web, but the most important place on the planet for science.

Yeah, it’s been a lot of fun. I kind of don’t want it to be over because we are in our element, hacking away, having fun, and just soaking up the atmosphere, and we are getting to chat with people who were there 30 years ago, Jean-Francois Groff and Robert Cailliau, these people who were involved in the creation of the World Wide Web. To me, that’s amazing to be surrounded by so much World Wide Web history.

The plan is that this will go online and anyone will be able to access it because it’s on the web, and that’s the beautiful thing about the web is that anyone can visit a website, and so everyone will have the opportunity to try using the world’s first web browser and see what modern webpages would look like if they were passed through this first web browser.

—Jeremy Keith

Well, spoiler alert. The project was a success and you can, indeed, look at your websites in a recreation of the first-ever web browser. This is the URL. It’s worldwideweb.cern.ch.

Success, that was good. But as you could probably tell from that video, Remy was the one basically making this all happen. He was the one writing the JavaScript to recreate this in a modern browser. This is the first-ever web page viewed in the first-ever web browser.

As you gathered, again, I was really fascinated by the history of the Web, like, where did it come from, and the people who were there at the time and getting to pick their brains. I spent most of my time working on the accompanying website to go with this project. I was creating this timeline.

Timeline

Because this was to mark the 30th anniversary of this proposal, I thought, well, we could easily look at what has happened in the last 30 years: websites, web servers, formats, standards - all that stuff. But I thought it would be fascinating to look at the previous 30 years as well and try and figure out the things that were happening that influenced Tim Berners-Lee in terms of hypertext, networks, computing, and all this stuff.

But I’d kind of given myself this arbitrary cut-off point of 30 years to make this nice symmetry of it being the 30th anniversary of the World Wide Web. I could go further back. I could start asking, well, what happened before 30 years ago? What were the biggest influences on Tim Berners-Lee and the World Wide Web?

Now, if you were to ask Tim Berners-Lee himself who his biggest influences were, he would give you a straight-up answer. He’ll say his biggest influences were Conway Berners-Lee and Mary Lee Woods, his father and mother, which is fair enough. Normally, when you ask people who their influences are, they say, “Oh, my parents. They gave me a loving environment. They kindled my curiosity,” and all that stuff. I’m sure that’s true but, in this case, it was also a big influence in a practical sense in that both Mary and Conway worked on the Ferranti Mark 1. That’s where they met. They were programmers. Tim Berners-Lee’s parents were programmers on the Ferranti Mark 1, a very early computer. This is in the 1950s in Britain.

Okay, this feels like a good origin story for the web, right? They were working on this early computer.

But it’s an early computer; it’s not the first computer. Maybe I need to go back further. How far back do I go to find the first computer?

Time machine.

Is this the first computer, the Antikythera mechanism? You can see this in a museum in Athens. This was recovered from a shipwreck. It was recovered at the start of the 20th Century, but it dates back thousands of years, a mechanism for predicting the position of stars and planets. It does calculations. It is a calculating device. Not a programmable computer as such, though.

Worshipping the reliquary of Babbage’s brain

If you’re thinking about the origins of the idea of a programmable computer, I think we could start to look at this gentleman, Charles Babbage. This is half of Charles Babbage’s brain, which is in the Science Museum in London along with that original NeXT box that the World Wide Web was created on. The other half is in the Computer History Museum in California.

Charles Babbage lived in the 19th Century, and kind of got a lot of seed funding from the U.K. government to build a device, the Difference Engine, which would do calculations. Later on, he scrapped that and started working on the Analytical Engine which would be even better — a 2.0 version. It never got finished, by the way, but it was a really amazing idea because you could see the architecture of like a central processing unit, but it was still fundamentally a calculator, a calculating machine.

The breakthrough in terms of programming maybe came from Charles Babbage’s collaborator. This is Ada Lovelace. She was translating a paper by an Italian mathematician about the Analytical Engine and its calculations. She realized that—hang on—if we’re doing operations on numbers, what if those numbers could stand for other concepts, non-numerical concepts like words or thoughts? Then we could do operations on things other than numbers, which is exactly what we do today in modern computing.

If you use a word processor, you’re not processing words; you’re operating on ones and zeros. If you use a graphics program, you’re not actually moving pixels around; you’re operating on ones and zeros. This idea that ones and zeros could stand in for anything kind of started with Ada Lovelace.

But, as I said, the Difference Engine and the Analytical Engine, they never got finished, and this was kind of a dead-end. It turns out, they weren’t an influence.

Later on, for example, this genius who was definitely responsible for the first working computers, Alan Turing, he wasn’t aware of the work of Babbage and Lovelace, which is a shame. He was kind of working in isolation.

He came up with the idea of the universal machine, the Turing Machine. Give it an infinitely long tape and enough state, enough time, you could calculate literally anything, which is pretty much what computers are.

He was working at Bletchley Park breaking the code for the Enigma machines, and that leads to the creation of what I think would be the first programmable computer. This is Colossus at Bletchley Park. This was created by a colleague of Turing, Tommy Flowers.

It is programmable. It’s using valves, but it’s absolutely programmable. It was top secret, so even for years after the war, this was not known about. In the history books, even to this day, you’ll often see ENIAC listed as the first programmable computer, but I think that honor goes to Tommy Flowers and Colossus.

By the way, Alan Turing, after the war, after 1945, he did go on to work and keep on working in the field of computing. In fact, he worked as a consultant at Ferranti. He was working on the Ferranti Mark 1, the same computer where Tim Berners-Lee’s parents met when they were programmers.

As I say, that was after the war ended in 1945. Now, we can’t say that the work at Bletchley Park was responsible for winning the war, but we could probably say that it’s certainly responsible for shortening the war. If it weren’t for the work done by the codebreakers at Bletchley Park, the war might not have finished in 1945.

1945 is the year that this gentleman wrote a piece that was certainly influential on Tim Berners-Lee. This is Vannevar Bush, a scientist, a thinker. In 1945, under the heading “A Scientist Looks at Tomorrow,” he published a piece in the Atlantic Monthly called “As We May Think.”

In this piece, he describes an imaginary device. It’s a mechanical device inside a desk, and the operator is allowed to work on reams and reams of microfilm and to connect ideas together, make these associative trails. This is kind of like hypertext before the word hypertext has been coined. Vannevar Bush calls this device the Memex. That’s published in 1945.

Memex

Also, in 1945, this young man has been drafted into the U.S. Navy and he’s shipping out to the Pacific. His name is Douglas Engelbart. Literally as the ship is leaving the harbor to head to the Pacific, word comes through that the war is over.

Now, he still gets shipped out to the Pacific. He’s in the Philippines. But now, instead of fighting against the Japanese, he’s lounging around in a hut on stilts reading magazines and that’s where he reads “As We May Think,” by Vannevar Bush.

Fast-forward years later; he’s trying to decide what to do with his life other than settle down, get married, have a job, you know, that kind of thing. He thinks, “No, no, I want to make the world a better place.” He realizes that computers could be the way to do this if they could implement something very much like the Memex. Instead of a mechanical device, what if computers could create the Memex, this hypertext system? He devotes his life to this and effectively invents the field of Human Computer Interaction.

On December 9th, 1968, he demonstrates what he’s been working on. This is in San Francisco, and he demonstrates bitmap screens. He demonstrates real-time collaboration on documents, working hypertext …and also he invents the mouse for the demo.

We have a pointing device called a mouse, a standard keyboard, and a special key set we have here. And we are going to go for a picture down on our laboratory in Menlo Park and pipe it up. It’ll show you, from another point of view, more about how that mouse works.

Come in, Menlo Park. Okay, there’s Don Anders’ hand in Menlo Park. In a second, we’ll see the screen that he’s working and the way the tracking spot moves in conjunction with movements of that mouse. I don’t know why we call it a mouse sometimes. I apologize; it started that way and we never did change it.

—Douglas Engelbart

This was ground-breaking. The mother of all demos, it came to be known as. This was a big influence on Tim Berners-Lee.

At this point, we’ve entered the time cone of those 30 years before the proposal that Tim Berners-Lee made, which is good because this is the moment where I like to branch off from this timeline and sort of turn it around.

The question I’m sure nobody is asking—because you saw there was a video link-up there; Douglas Engelbart is in San Francisco, and he has a video link-up with Menlo Park to demonstrate real-time collaboration with computers—the question nobody is asking is, who is operating the video camera in Menlo Park?

Well, I’ll tell you the answer to that question that nobody is asking. The man operating the video camera in Menlo Park is this man. His name is Stuart Brand. Now, Stuart Brand has spent most of the ‘60s doing what you would do in the ‘60s; he was dropping acid. This was all kosher. This was before it was illegal.

He was on the Merry Pranksters bus with Ken Kesey and, on one particular acid trip, he literally saw the Earth curving away and realized that, yeah, we’re all on one planet, man! And he started a campaign with badges called, “Why haven’t we seen a photograph of the whole Earth yet?” I like the “yet” part in there, like it’s a conspiracy that we haven’t seen a photograph of the whole Earth.

He was kind of onto something here, realizing that seeing our planet as a whole planet from space could be a consciousness-changing thing much like LSD is a consciousness-changing thing. Sure enough, people did talk about the effect it had when we got photographs like Earthrise from Apollo 8, and he used those pictures when he published the Whole Earth Catalog, which was a series of books.

The Whole Earth Catalog was basically like Wikipedia before the internet. It was this big manual of how to do everything. The idea was, if you were running a commune, living in a commune, you needed to know about technology, and agriculture, and weather, and all the stuff, and you could find that in the Whole Earth Catalog.

He was quite an influential guy, Stuart Brand. You probably heard the Steve Jobs commencement speech where he quotes Stuart Brand, “Stay hungry, stay foolish,” all that stuff.

Stuart Brand also did a lot of writing. After Douglas Engelbart’s demo, he started to see that this computer thing was something else. He literally said computers are the new LSD, so he starts really investigating computing and computers.

He writes this great article in “Rolling Stone” magazine in 1972 about Spacewar, one of the first games you could play on a screen. But he has a wide range of interests. He kind of kicked off the environmental movement in some ways.

At one point, he writes a book about architecture. He writes a book called “How Buildings Learn.” There’s a television series that goes with it as well. This is a classic book (the definition of a classic book being a book that everyone has heard of and nobody has read).

In this book, he starts looking at the work of a British architect called Frank Duffy. Frank Duffy has this idea about architecture he calls shearing layers. The way that Frank Duffy puts it is that a building, properly conceived, consists of several layers of longevity, so kind of different rates of change.

He diagrams this out in terms of a building, and you see that you’ve got the site that the building is on that’s moving at a geological timescale, right? That should be around for thousands of years, we would hope.

Then you’ve got the actual structure that could stand for centuries.

Then you get into the infrastructure inside. You know, the plumbing and all that, you probably want to swap out every few decades.

Basically, until you get down to the stuff inside a room, the furniture that you can move around on a daily basis. You’ve got all these timescales moving from fast to slow as you move inwards into the house.

What I find fascinating about this idea of these different layers as well is the way that each layer depends on the layer below. Like, you can’t have the structure of a building without first having a site to put it on. You can’t move furniture around inside a room until you’ve made the room using the walls and the doors, right? This idea of shearing layers is kind of fascinating, and we’re going to get back to it.

Something else that Stuart Brand went on to do; he was one of the co-founders of the Long Now Foundation. Anybody here part of the Long Now Foundation? Any members of the Long Now Foundation? Ah… It’s a great organization. It’s literally dedicated to long-term thinking. It was founded by Stuart Brand and Danny Hillis, the computer scientist, and Brian Eno, the musician and producer. Like I said, dedicated to long-term thinking. This is my membership card made out of a durable metal because it’s got to last for thousands of years.

If you go on the website of the Long Now Foundation, you’ll notice that the years are made up of five digits, so instead of 2019, it will be 02019. Well, you know, you’ve got to solve the Y10K problem. They’re dedicated to long-term thinking, to trying to think in the longer now.

Admiring the Clock Of The Long Now

One of the most famous projects is the clock of the Long Now. This is a clock that will tell time for 10,000 years. Brian Eno has done the chimes. They’re generative. It’ll never chime the same way twice. It chimes once a century. This is a scale model that’s in the Science Museum in London along with half of Charles Babbage’s brain and the original NeXT machine that Tim Berners-Lee created World Wide Web on.

This is just a scale model. The full-sized clock is going to be inside a mountain in west Texas. You’ll be able to visit it. It’ll be like a pilgrimage. Construction is underway. I hope to visit the clock one day.

It’s a really fascinating project when you think about it: how do you design something to last 10,000 years? How do you communicate over 10,000 years? It’s one of those tricky design problems, almost like the Voyager Golden Record or the Yucca Mountain waste disposal site. How do you communicate to future generations? You can’t rely on language. You can’t rely on semiotics. Anyway, Stuart Brand collected a lot of his thoughts into this book called “The Clock of the Long Now,” subtitled “Time and Responsibility: The Ideas Behind the World’s Slowest Computer.” He’s thinking about time. That’s when he comes back to shearing layers and these different layers of rates of change; different layers of time.

Stuart Brand abstracts the idea of shearing layers into something called “pace layers.” What if it’s not just architecture? What if any kind of system has these different rates of change, these layers?

He diagrams this out in terms of the human species, so think of humans. We have these different layers that we operate at.

At the lowest, slowest level, there is our nature, literally, like what makes us human in terms of our DNA. That doesn’t change for tens of thousands of years. Physiologically, there’s no difference between a caveman and an astronaut, right?

Then you’ve got culture, which accumulates over centuries, and the tribal identities we have around things like nations, language, and things like that.

Governance, models of governance, so not governments but governance, as in the way we choose to run things, whether that’s a feudal society, or a monarchy, or representative democracy, right? Those things do change, but not too fast, hopefully.

Infrastructure: you’ve got to keep up with the times, you know? This needs to move at a faster pace, again.

Commerce: much more fast-moving. Commerce needs to — you’re getting into the faster timescales there.

Then he puts fashion at the top. By fashion, he means anything that is supposed to be new and exciting, so that includes pop music, for example. The whole idea with fashion is that it’s there to try stuff out and discard it very quickly.

“What about this?” “No.” “What about that?” “Try this.” “No, try that.”

The good stuff, the stuff that kind of sticks to the wall, will maybe find its way down to the longer-lasting layers. Maybe a really good pop song from fashion ends up becoming part of culture, over time.

Here’s the way that Stuart Brand describes pace layers. He says:

Fast learns; slow remembers. Fast proposes; slow disposes. Fast is discontinuous; slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution, but slow and big controls small and fast by constraint and constancy.

He says:

Fast gets all of our attention but slow has all the power.

Pace layers is one of those ideas that, once you see it, you can’t unsee it. You know when you want to make someone’s life a misery, you just teach them about typography. Now they can’t unsee all the terrible kerning in the world. I can’t unsee pace layers. I see them wherever I look.

Does anyone remember this book, UX designers in the room, “The Elements of User Experience,” by Jesse James Garrett? It’s old now; we’re going back a bit. But, in it, he’s got this diagram of the different layers of a user experience. You’ve got the strategy at the bottom, and it finally ends up with an interface at the top.

I look at this, and I go, “Oh, right. It’s pace layers. It’s literally pace layers.” Each layer depending on the layer below, the slower layers at the bottom, the faster-moving things at the top.

With this mindset that pace layers are everywhere, I thought, “Can I map out the web in terms of pace layers, the technology stack of the web?” I’m going to give it a go.

At the lowest layer, the slowest moving, I would say there’s the internet itself, as in TCP/IP, the Transmission Control Protocol and the Internet Protocol, created by Bob Kahn and Vint Cerf in the ‘70s and pretty much unchanged since then. Deliberately dumb, deliberately simple: all it does is move packets around. Pretty much unchanged.

On top of that, you get the other protocols that use TCP/IP, like in the case of the web, the hypertext transfer protocol. Now, this has changed over time. We now have HTTP/2. But it hasn’t been rapid change. It’s been gradual. Again, that kind of feels right. It feels good that HTTP isn’t constantly changing underneath us too much.

Then what we serve up over HTTP are URLs. I wish that URLs were down here. I wish that URLs were everlasting, never changing. But, unfortunately, I must acknowledge that that’s not true. Links die. We have to really work hard to keep them alive. I think we should work hard to keep them alive.

What do you put at those URLs? At the simplest level, it’s supposed to be plain text. But this is the web, so let’s say structured text. This is going to be HTML, the hypertext markup language, which Tim Berners-Lee came up with when he created the World Wide Web. I say, “Came up with.” He basically stole it from SGML that scientists at CERN were already using and sprinkled in one or two new tags, as they were calling it back then.

There were maybe like 20-something tags in HTML when Tim Berners-Lee created the web. Now we’ve got over 100 elements, as we call them. But I feel like I’ve been able to keep up with the pace of change. I mean, the vast big kind of growth spurt with HTML was probably HTML5. That’s been a while back now. It’s definitely change that I can keep on top of.

Then we have CSS, the presentation layer. That feels like it’s been moving at a nice clip lately. I feel like we’ve been getting a lot of cool stuff in CSS, like Flexbox and Grid, and all this new stuff that browsers are shipping. Still, I feel like, yeah, yeah, this is good. It’s right that we get lots of CSS pretty rapidly. It’s not completely overwhelming.

Then there’s the JavaScript ecosystem. I specifically say the “JavaScript ecosystem” as opposed to the “JavaScript language” because the JavaScript language is being developed at a nice pace. I feel like it’s going at a good speed of standardization. But the ecosystem, the frameworks, the libraries, the build tools, all of that stuff, that feels like, “You know what? Try this. No, try that. What about this? What about that? Oh, you’re still using that framework? No, no, we stopped using that last week. Oh, you’re still using that build tool? No, no, no, that’s so … we’ve moved on.”

I find this very overwhelming. Can I get a show of hands of anybody else who feels overwhelmed by this rate of change? All right. Keep your hands up. Keep your hands up and just look around. I want you to see you are not alone. You are not alone.

But I tell you what; after mapping these layers out into the pace layer diagram, I realized, wait a minute. The JavaScript layer, the fashion layer, if you will, it’s supposed to be like that. It’s supposed to be trying stuff out. Throw this at the wall. No, throw that at the wall. How about this? How about that?

It’s true that the good stuff does stick. Like if I think back to the first uses of JavaScript—okay, I’m showing my age, but—when JavaScript first came along, we’d use it for things like image rollovers or form validation, right? These days, if I wanted to do an image rollover—you mouseover something and it changes its appearance—I wouldn’t use JavaScript. I’d use CSS because we’ve got :hover.

If I was doing a form validation, like, “Oh, has that field actually been filled in?” because it’s required and, “Does that field actually look like an email address?” because it’s supposed to be an email address, I wouldn’t even use JavaScript. I would use HTML; input type="email" required. Again, the good stuff moves down into the sort of slower layers. Fast learns; slow remembers.
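
To spell that out, both of those former JavaScript jobs now look like this (the class name is just for illustration):

```html
<!-- A rollover: CSS's :hover instead of JavaScript. -->
<style>
  .nav-link { background-color: white; }
  .nav-link:hover { background-color: lightblue; }
</style>
<a class="nav-link" href="/about">About</a>

<!-- Form validation: HTML instead of JavaScript. The browser
     refuses to submit until the field holds a valid email address. -->
<form>
  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>
  <button>Sign up</button>
</form>
```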

The other thing I realized when I diagrammed this out was that, “Huh, this kind of maps to how I approach building on the web.” I pretty much take it for granted that it’s going to be on the internet; there’s not much I can do about that. Then I start thinking about URLs: URL-first design, the information architecture of a site. I think it’s underrated. URL design, in general, is a really good place to start if you’re building a product or a service.

Then, think about your content in terms of structure. What is the most important thing on this page? That should be an H1. Is this a paragraph? Is it a list? Thinking about the structure first and then going on to think about the appearance, which is definitely the way you want to go if you’re making something responsive. Think about the structure first and then the appearance and all these different form factors.

Then finally, add in behavior with JavaScript. Whatever HTML and CSS can’t do, that’s what I will use JavaScript for to kind of enhance it from there.

This maps really nicely to how I personally approach building things on the web. But, it is a testament to the flexibility of the World Wide Web that, if you don’t want to build in this way, you don’t have to.

If you want, you could build like this. JavaScript is a really powerful language. If you wanted to do navigations and routing in JavaScript, you can. If you want to inject all your content into the page using JavaScript, you can. CSS in JS? You can. Right? I mean, this is pretty much the architecture of a single-page app. It’s on the internet and everything is in JavaScript. The internet is a delivery mechanism for a chunk of JavaScript that does everything: the markup, the CSS, the routing.

This isn’t how I approach building on the web. I kind of ask myself why this doesn’t feel quite right to me. I think it’s because of the way it turns everything into a single point of failure, which is the JavaScript, rather than spreading out those points of failure. We’re on the internet and, as long as the JavaScript runs okay, the user gets everything. It turns what you’re building into a binary proposition: either it doesn’t work at all or it works great. Those are your only two options.

Now I’ll point out that, in another medium, this would make complete sense. Like if you’re building a native app. If you build an iOS app and I’ve got an iOS device, I get 100% of what you’ve designed and built. But if you’re building an iOS app and I have an Android device, I get zero percent of what you’ve designed and built because you can’t install an iOS app on an Android device. Either it works great or it doesn’t work at all; 100% or zero percent.

The web doesn’t have to be like that. If you build in that layered way on the web, then maybe I don’t get 100% of what you’ve designed and built but I don’t get zero percent, either. I’ll get something somewhere along the way, hopefully closer to working great. It goes from not working at all, to just about working, to working fine, to working well, to working great.

You’re building up these layers of experience, the idea being that nobody gets left behind. Everybody gets something regardless of their device, their network, their browser. Everyone is not going to get the same experience, but everybody gets something. That feels very true to the original sort of founding ideas of the web and it maps so nicely to our technical stack on the web, the fact that you can start to think about things like URLs-first and think about the structure, then the presentation, and then the behavior.

I’m not the only one who likes thinking in this layered kind of way when it comes to the web. I’m going to quote my friend Ethan Marcotte. He says:

I like designing in layers. I love looking at the design of a page, a pattern, whatever, and thinking about how it will change if, say, fonts aren’t available, or JavaScript doesn’t work, or someone doesn’t see the design as you or I might and is having the page read aloud to them.

That’s a really good point that when you build in the layered way, you’re building in the resilience that something can fall back to a layer a little further down.

This brings up something I’ve mentioned here before at Beyond Tellerrand, which is that, when we’re evaluating technologies, the question we tend to ask is, how well does it work? That’s an absolutely valid question. You’re about to try a new tool, a new framework, a new standard. You ask yourself; how well does it work?

I think the more important question to ask is:

How well does it fail?

What happens if that piece of technology fails? That’s why I like this layered approach because this fails really well. JavaScript’s no longer a single point of failure. Neither is CSS, frankly. If the CSS never loaded, the user still gets something.

Now, this brings up an idea, a principle that definitely influenced Tim Berners-Lee. It was at the heart of his design principles for the World Wide Web. It’s called the Principle of Least Power that states, “Choose the least powerful language suitable for a given purpose,” which sounds really counterintuitive. Why would I choose the least powerful language to do something?

It’s kind of down to the fact that there’s a trade-off. With power, you get fragility, right? Maybe something that is really powerful isn’t as universal as something simpler. It makes sense to figure out the simplest technology you can use to achieve a task.

I’ll give an example from my friend Derek Featherstone. He says:

In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

Again, he’s talking about the resilience you get by building in a layered way and choosing the least powerful technology.

A classic example is ARIA. The first rule of ARIA is: don’t use ARIA if you don’t have to. Rather than using a div and then adding the event handlers and the ARIA roles to make it act like a button, just use a button. Use the simpler technology lower in the stack.

Now, I get pushback on this because people will tell me, “Well, that’s fine if you’re building something simple, but I’m not building something simple. I’m building something complex.” Everyone likes to think they’re building something complex, right? Everyone is convinced they’re working on really hard things, which makes sense. That’s human nature.

If you’re at a cocktail party and someone says, “What do you do?” and you describe your work and they say, “Oh, okay. That sounds really easy,” you’d be offended, right?

If you’re at a party and someone says, “What do you do?” you describe your work and they go, “Wow, God, that sounds hard,” you’d be like, “Yeah! Yeah, it is hard. What I do is hard.”

I think we gravitate to this, especially when someone markets it as, “This is a serious tool for serious, complex sites.” I’m like, “That’s me. I’m working on a serious, complex site.”

I don’t think the reality is quite like that. Reality is just messier. There’s nothing quite that simple. Very few things are really that complex.

Everything kind of exists on this continuum somewhere along the way. Even the simplest website has some form of interaction, something appy about it.

Website and web app are the other two terms people use when talking about simple and complex, as if you can divide the entirety of the whole World Wide Web into two categories: websites and web apps.

Again, that just doesn’t make sense to me. I think the truth is, things are messier and schmooshier along this continuum between websites and web apps. I don’t get why we even need the separate words. It’s all web stuff.

Though, there is this newer term, “Progressive Web App,” that I kind of like.

Who has heard of Progressive Web Apps? All right.

Who thinks they have a good handle on what a Progressive Web App is?

Okay. See, that’s a lot fewer hands, which is totally understandable because, if you start googling, “What is a Progressive Web App?” you get these Zen-like articles. “It’s a state of mind.” “It’s about rich, native-like interactions, man.”

No. No, it’s not. Worse still, you read, “Oh, a Progressive Web App is a Single Page App.” I was like, no, you’ve lost me there. No, it’s not. Or at least it can be, but any website can be a Progressive Web App. You can elevate a website to be a Progressive Web App.

I don’t mean in some sort of Zen-like fashion. I mean using technologies, three particular technologies.

  1. You make sure that website is running HTTPS,
  2. you have a web app manifest that’s a JSON file with metadata, and
  3. you have a service worker that gets installed on the user’s device.

That’s it. These three technologies turn a website into a Progressive Web App — no mystery about it.
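
In markup terms, assuming the site is already served over HTTPS, the other two ingredients amount to a couple of lines (the file names here are conventional, not required):

```html
<!-- 2. The web app manifest: a JSON file with the name, icons, and
     colours the OS uses when the site is added to the home screen. -->
<link rel="manifest" href="/manifest.json">

<!-- 3. Register the service worker, as an enhancement only. -->
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>
```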

The tricky bit is that service worker part. It’s kind of a weird thing because it’s JavaScript but it’s JavaScript that gets installed on the user’s device and acts like a proxy. It intercepts network requests and can do different things like grab things from the cache instead of going to the network.

I’m not going to go into how it works because I’ve written plenty about that in this book “Going Offline,” so if you want to know the code, you can go read the book.

I will say that when I first came across service workers, it totally did my head in because this is my mental model of the web. We’ve got the stack of technologies that we’re building on top of, each layer depending on the layer below. Then service workers come along and say, “Well, actually, you could have a website like this,” where the lowest layer, the network, the Internet goes away and the website still works. Mind is blown!

It took me a while to get my head around that. The service worker file is on the user’s device and, if they’ve got no internet connection, it can still make decisions and serve up something like a custom offline page.

Here’s a website I run called huffduffer.com. It’s for making your own podcast out of found sounds. If you’re offline or the website is down, which happens, and you visit Huffduffer, you get this offline page saying, “Sorry, you’re offline.” Not very useful, but it’s branded like the site, okay? It’s almost like the way you have a custom 404 page. Now you can have a custom offline page that matches your site. It’s a small thing, but it can be handy.
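
The code for that pattern is short. Here’s a minimal sketch, with placeholder file names: cache the offline page when the service worker installs, then fall back to it when a navigation request fails.

```javascript
// serviceworker.js: a minimal custom-offline-page sketch.
const cacheName = 'offline-v1';

// At install time, stash the offline page and its assets in a cache.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(cacheName).then((cache) =>
      cache.addAll(['/offline.html', '/styles.css', '/logo.svg'])
    )
  );
});

// For page navigations, try the network first; if that fails
// (you're offline, or the site is down), serve the cached page.
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request).catch(() => caches.match('/offline.html'))
    );
  }
});
```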

We ran this conference in Brighton for two years, Ampersand. It’s a web typography conference. That also has a very simple offline page that just says, “Sorry. You’re offline,” but then it has the bare minimum information you need about the conference like where is this conference happening; what time does it start?

You can imagine a restaurant website having this, an offline page that tells you, “Here is the address. Here are the opening hours.” I would like it if restaurant websites had that information when you’re online as well, but…

You can also have fun with this, like Trivago. Their site relies on search, so there’s not much you can do when you’re offline, so they give you a game to play, the offline maze to keep you entertained.

That’s kind of at the simplest level of what you could do, a custom offline page.

Then at the other level, I’ve written this book called “Resilient Web Design.” A lot of the ideas I’m talking about here are in this book. The book is a website. You go to the website and you read the book. That’s it. It’s free. You just go to resilientwebdesign.com. I mean free. I don’t ask for your email address and I’m not tracking any information at all.

This is how it looks when you’re online, and then this is how it looks when you’re offline. It is exactly the same. In fact, the moment you visit the website, it basically downloads the whole book.
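
That cache-first approach is only a few lines in the service worker. A minimal sketch, with placeholder URLs, assuming the content really will never change:

```javascript
// Cache everything up front, because this content never changes.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('book-v1').then((cache) =>
      cache.addAll(['/', '/introduction/', '/chapter1/', '/styles.css'])
    )
  );
});

// Go straight to the cache; only touch the network on a cache miss.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then(
      (cached) => cached || fetch(event.request)
    )
  );
});
```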

Now, that’s the extreme example. Most websites, you wouldn’t want to do that because you kind of want the HTML to be fresh. This is never going to get updated. I’m done with this so I’m totally fine with, you go straight to the cache; never even go to the network. It’s absolutely offline first. You’re probably going to want something in between those two extremes.

On my own website, adactio.com, if you’re browsing around the website and you’re reading things, that’s all fine. Then what if you lose your internet connection? You get the custom offline page that says, “Sorry, you’re offline,” but then it also shows you the things you’ve previously visited.

You can revisit any of these pages. These have all been cached, so you can cache things as people are browsing around the site. That’s a nice little pattern that a lot of websites could benefit from. It only suffers from the fact that all I can show you is stuff you’ve already seen. You have to have already visited these pages for them to show up in this list. Another pattern that I think is maybe better from a user experience point of view is when you put the control in the hands of the user.
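
Before getting to that user-controlled pattern, here’s a rough sketch of the cache-as-you-browse approach just described; the cache name, offline page, and list markup are all placeholders.

```javascript
// In the service worker: network first, keeping a copy of each page.
self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;
  event.respondWith((async () => {
    try {
      const response = await fetch(event.request);
      // Stash a copy of every page the user visits.
      const cache = await caches.open('pages');
      await cache.put(event.request, response.clone());
      return response;
    } catch (error) {
      // Offline: serve the stored copy, or fall back to the offline page.
      const cached = await caches.match(event.request);
      return cached || caches.match('/offline.html');
    }
  })());
});

// In the offline page's own script: list the previously visited pages.
caches.open('pages')
  .then((cache) => cache.keys())
  .then((requests) => {
    const list = document.querySelector('.cached-pages'); // a <ul> in the page
    for (const request of requests) {
      const item = document.createElement('li');
      const link = document.createElement('a');
      link.href = request.url;
      link.textContent = request.url;
      item.appendChild(link);
      list.appendChild(item);
    }
  });
```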

This website, archive.dconstruct.org, this is what it sounds like. It’s an archive. It’s conference talks.

We ran a conference called dConstruct for 10 years from 2005 to 2015. Breaking news; we’re bringing it back for a one-off event next year, September 2020.

Anyway, all the talks from ten years are online here as audio files. You can browse around and listen to these talks.

You’ll also see that there’s this option to save for offline, exposed in the interface. What that does is it doesn’t just save the page offline; it also saves the audio offline. Then, when you’re on an airplane or at the bottom of the ocean or whatever, you can listen to the things you explicitly asked to be saved offline. It’s effectively a podcast player in the browser.
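
A sketch of how that save-for-offline control might be wired up; the Cache API is available to regular page scripts, not just service workers, and the selector and audio URL here are made up.

```javascript
// Wire up a "save for offline" button on a talk's page.
const button = document.querySelector('.save-offline');

button.addEventListener('click', async () => {
  const cache = await caches.open('saved-talks');
  // Explicitly cache this page along with its audio file.
  await cache.addAll([
    window.location.pathname,
    '/audio/some-talk.mp3' // placeholder for the talk's audio URL
  ]);
  button.textContent = 'Saved for offline listening';
});
```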

You see there’s a lot of things you can do. There are kind of a lot of layers you can build upon once you have a service worker.

At the very least, you can do caching because that’s the stuff we do anyway, like put this file in the cache, your CSS, your JavaScript, your icons, whatever.

Then think about, well, maybe I should have a custom offline page, even if it’s just for the branding reasons of having that nice page, just like we have a custom 404 page.

Then you start thinking, well, I want the add to home screen experience to be good, so you’ve got the web app manifest.

You implement one of those patterns allowing users to save things offline, maybe.

Also, push notifications are now possible thanks to service workers. It used to be, if you wanted to make someone’s life a misery, you had to build a native app to give them push notifications all day long. Now you can make someone’s life a misery on the web too.

There’s even more advanced APIs like background sync where the website can talk to the web server even when that website isn’t open in the browser and sync up information — super powerful stuff.
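
The shape of that API, roughly; the tag name is arbitrary and sendQueuedComments is a hypothetical helper.

```javascript
// In the page: ask for a one-off sync when, say, a comment
// fails to send because the user is offline.
navigator.serviceWorker.ready.then((registration) => {
  if ('sync' in registration) {
    registration.sync.register('send-comments');
  }
});

// In the service worker: the browser fires this event when
// connectivity returns, even if the page has long been closed.
self.addEventListener('sync', (event) => {
  if (event.tag === 'send-comments') {
    event.waitUntil(sendQueuedComments()); // hypothetical helper
  }
});
```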

Now, the support for something like service workers and the Cache API is almost universal at this point. The support for stuff like background sync and notifications is spottier, not universal, and that’s okay, as long as you’re adding these things in layers. Then it’s fine if something doesn’t have universal support, right? It’s making something work great but, if someone doesn’t get that, it still works well. It’s all about building in that layered way.

Now you’re probably thinking, “Ah-ha! I’ve hoisted him by his own petard because service workers use JavaScript. That means they rely on JavaScript. You’ve made JavaScript a single point of failure!” Exactly what I was complaining about with Single Page Apps, right?

There’s a difference. With a Single Page App, you’re relying on JavaScript. The user gets absolutely nothing if JavaScript doesn’t work.

In the case of service workers, you literally cannot make a website that relies on a service worker. You have to make a website that works first without a service worker and then add the service worker on top, because, think about it; the first time anybody visits the website, even if their browser supports service workers, the service worker is not installed. So you have to build in layers.

I think this is why it appeals to me so much. The design of service workers is a layered design. You have to have something that works first, and then you elevate it. You improve the user experience using these technologies but you don’t rely on it. It’s not a single point of failure. It’s an enhancement.

That means you can take any website. Somebody’s homepage; a book that’s online; this archived stuff; something that is more appy, sure, and make it work pretty much like a native app. It can appear full screen, add to the home screen, be indistinguishable from native apps so that the latest and greatest browsers and devices get the best experience. They’re making full use of the newest technologies.

But, as well as these things working in the latest and greatest browsers, they still work in the first web browser ever created. You can still look at these things in that very first web browser that Tim Berners-Lee created at worldwideweb.cern.ch.

WorldWideWeb

It’s an unbroken line of over 30 years on the web. We’re not talking about the Long Now when we’re talking about 30 years but, in terms of technology, that does feel special.

You can also look at the world’s first webpage in the first-ever web browser but, almost more amazingly, you can look at the world’s first web page at its original URL in a modern web browser and it still works.

We managed to make the web so much better with new APIs, new technologies, without breaking it, without breaking that backward compatibility. There’s something special about that. There’s something special about the web if you build in layers.

I’m encouraging you to think in terms of layers and use the layers of the web.

Thank you.

Monday, December 16th, 2019

Liveblogging An Event Apart 2019

I was at An Event Apart in San Francisco last week. It was the last one of the year, and also my last conference of the year.

I managed to do a bit of liveblogging during the event. Combined with the liveblogging I did during the other two Events Apart that I attended this year—Seattle and Chicago—that makes a grand total of seventeen liveblogged presentations!

  1. Slow Design for an Anxious World by Jeffrey Zeldman
  2. Designing for Trust in an Uncertain World by Margot Bloomstein
  3. Designing for Personalities by Sarah Parmenter
  4. Generation Style by Eric Meyer
  5. Making Things Better: Redefining the Technical Possibilities of CSS by Rachel Andrew
  6. Designing Intrinsic Layouts by Jen Simmons
  7. How to Think Like a Front-End Developer by Chris Coyier
  8. From Ideation to Iteration: Design Thinking for Work and for Life by Una Kravets
  9. Move Fast and Don’t Break Things by Scott Jehl
  10. Mobile Planet by Luke Wroblewski
  11. Unsolved Problems by Beth Dean
  12. Making Research Count by Cyd Harrell
  13. Voice User Interface Design by Cheryl Platz
  14. Web Forms: Now You See Them, Now You Don’t! by Jason Grigsby
  15. The Weight of the WWWorld is Up to Us by Patty Toland
  16. The Mythology of Design Systems by Mina Markham
  17. The Technical Side of Design Systems by Brad Frost

For my part, I gave my talk on Going Offline. Time to retire that talk now.

Here’s what I wrote when I first gave the talk back in March at An Event Apart Seattle:

I was quite nervous about this talk. It’s very different from my usual fare. Usually I have some big sweeping arc of history, and lots of pretentious ideas joined together into some kind of narrative arc. But this talk needed to be more straightforward and practical. I wasn’t sure how well I would manage that brief.

I’m happy with how it turned out. I had quite a few people come up to me to say how much they appreciated how I was explaining the code. That was very nice to hear—I really wanted this talk to be approachable for everyone, even though it included plenty of JavaScript.

The dates for next year’s Events Apart have been announced, and I’ll be speaking at three of them.

The question is, do I attempt to deliver another practical code-based talk or do I go back to giving a high-level talk about ideas and principles? Or, if I really want to challenge myself, can I combine the two into one talk without making a Frankenstein’s monster?

Come and see me at An Event Apart in 2020 to find out.

Saturday, December 14th, 2019

Building The Web

An interview conducted by Vitaly Friedman ahead of the 2019 View Source conference in Amsterdam.

Do you think, as of today, the Web is in the best shape it will ever be?

Well, to paraphrase Charles Dickens, “It is the best of times; it is the worst of times,” because, in a sense, things are absolutely great today. Let’s just take it from the point of view of browsers and browser support for standards.

What you can do in a browser today just straight out of the box is amazing compared to the past. There are some little differences between browsers but, honestly, not like it used to be. Back in the day, if you were a Web developer, you spent maybe 50% of your time battling specific browser bugs trying to make one browser work like another browser, all this stuff, trying to make up for lack of standards.

It’s funny. I was listening to panel discussions we did at a conference I think 11 years ago, the @media conference in London. One of the questions I was asking the panelists was like, “What’s your wish list for CSS or browsers, in general?” They were saying things like, “Oh, if we had multiple background images, everything would be perfect. All my problems would be solved.”

They were all saying things that we have today, and we’ve got more. We have so much today that you couldn’t even imagine in the past, things like service workers where you can literally control network-level stuff, amazing CSS things with Grid now and Flexbox. Amazing, right? On the one hand, yes, things are better than they’ve ever been.

Then, on another hand, not so much because, first of all, in the area of browsers, making a browser is now so complicated that only very, very few companies and organizations can do it, and we’re down to just two or three browser rendering engines. That’s not very healthy for something like the Web, which has always thrived on diversity. We’ll see how that plays out; I’m uncomfortable about it, but it remains to be seen.

Then, in terms of things being, in my opinion, worse than they were before, it’s less to do with what we get from browsers and more to do with how we choose to make things on the Web. We seem to have collectively decided to make things really complicated when it comes to putting something on the Web, something that used to be relatively straightforward.

I know there were all sorts of problems with the way we used to do it and maybe it didn’t scale so well, but we seem to have collectively decided that the barrier to entry to putting something on the Web requires loads of technologies, not browser technologies, but technologies that sit on our computers or sit on our servers. It’s great that we’ve got version control, build tools, automatic bundlers, and all this stuff, but the level of complexity is extremely high, it seems to me.

I know I’m slow and maybe that’s the reason I’m just not very good at picking this stuff up, but it seems to be objectively quite complex. That strikes me as strange because, like I was saying, you can do more with less these days in a browser. It’s easier than ever to build something interactive in a browser with quite minimal HTML, a bit of JavaScript, CSS, right? You can do loads with what you get out of the browser. Yet, we’ve decided to almost reinvent everything for ourselves.

Even though the browser will let us do all this really smart stuff, let’s reinvent it in JavaScript for ourselves. Let’s reinvent going from URL to URL. We’ll call it routing, and we’ll do all that ourselves. We’ll do it all in JavaScript, and that means now we have to manage state, and so we’re keeping track of all this stuff.

It’s weird because it’s a choice to do that stuff. Yet, we’re acting as though it’s the default.

People are constantly saying, “Oh, well, expectations are different now.” I will say that’s true. People’s expectations of the Web are different, but not in the way that people mostly talk about it.

When people use that phrase, “Oh, people’s expectations of the Web are different now,” what they usually mean is, “Oh, people expect more from the Web. People expect the Web to be fast and interactive like native apps and stuff.” That would be great if it were true, but my observation from talking to people is quite different.

People expect the Web to be terrible. I talk to people and they’ve simply given up on the Web. Certainly, on mobile, they just try to avoid going on the Web.

Yes, people’s expectations of the Web have changed but not for the better. They’re associating the Web with bad experiences, with things being slow, with constantly being bombarded with, you know, sign up to my newsletter, accept cookies, dark patterns, all this stuff.

The solution to that is not, well, let’s throw more complicated toolchains, JavaScript libraries, and frameworks at it. The solution is to pull things back. How about if we didn’t have terrible user experiences that bombard people with stuff? How about if we just made websites using the bare minimum technology so that they’re fast and respond quickly?

Yet, weirdly, we’ve gotten into this cycle where people say, “Oh, people’s expectations of the Web are so high now that we must use all this complex technology,” which just ends up making the Web feel, frankly, even worse. From that perspective, things are in a pretty terrible state for the Web. Yet, like I said in terms of what you can do out of the box in a browser, just get a text editor and write some HTML, a bit of CSS, a bit of JavaScript; you can make amazing things straight out of the box that 10, 15 years ago we literally couldn’t have imagined.

What are the most important things for people coming into the industry to understand, thinking about how to ensure the things they’re building will be reliable and maintainable in the future?

I think the first thing to establish is that people learn in different ways, so the answer to this question depends on the person. I’ve experienced this myself, talking to students at, say, Codebar: some people really want to know why something works first. Give me the fundamentals. Give me almost a bit of theory, and build things up from the fundamentals until we’ve got a thing that works.

Other people don’t work that way. They say, “I want to build something as quickly as possible.” Okay, let’s start with a framework: Create React App or something that gets you something working straight away, and then work backward from there.

I say, “Okay, but what’s actually going on here? Why does this work? What’s happening under the hood?”

There are two different ways of learning there. Neither is right and neither is wrong; they’re just different ways.

I think the important thing is that, at some point, you end up with a layered body of knowledge: you’ve got the fundamentals as the grounding, and then you can add things on top, like a framework at the tippy top of that stack. Whether you start with the framework and work down to the fundamentals, or start with the fundamentals and work up to the framework, I don’t think that matters, as long as what you end up with is a nicely rounded stack of technologies.

Then what you learn over time (and this is something you could be told, but you kind of have to learn it yourself and experience it) is that the stuff further down, the fundamentals, will change at a much slower pace, and the stuff higher up, the abstractions, the frameworks, the tools, will change at a faster pace. Once you know that, then it’s okay. Then that feeling of being overwhelmed, like, “Oh, there’s so much to learn,” becomes manageable: you can start to filter it and figure out, “Well, where do I want to concentrate? Do I want to learn stuff that I know I will have to swap out in another year, two years, three years? Will I concentrate my time on this lower-level fundamental stuff that will last for maybe decades? Or do I split it, dedicating some of my time to fundamentals and some of my time to the abstractions?”

I think the key thing is that you go in with your eyes open about the nature of the thing you’re learning. If I’m going to learn about HTML and, to a certain extent, CSS and stuff, then I will know this is knowledge that will last for quite a while. It’s not going to change too quickly. But if I’m learning about a framework, a build tool, or something like that, then I will say, “Okay. It’s fine that I’m learning this,” but I shouldn’t be under any illusions that this is going to be forever and not be surprised when, further down the line, people say, “Oh, you’re still using that framework? We don’t use that anymore. We use this other framework now,” right?

That’s the key thing: going in with your eyes open. It’s totally fine to study all the stuff, learn all the stuff, as long as you’re not disappointed, like, “Oh, I invested all my time in that framework and now nobody is using that framework anymore. We’ve all moved on to this other framework.”

There’s a phrase from DevOps about your servers: “Treat your servers like cattle, not pets.” Don’t get too attached to them.

I feel like that’s the case with a lot of the tools we use. I would consider frameworks and libraries to be tools. They’re tools. You use them to help you work faster, but don’t get too attached to them because they will change whereas, the more fundamental stuff, you can rely on.

Now, when I say fundamental stuff, to a certain extent I’m talking about the technology stuff like HTML. That moves at a slow pace. HTTP and how the Internet works, that’s not going to change very fast.

When I say fundamentals, I think you can go deeper than that even, and you can talk about philosophies, attitudes, and ways of approaching how to build something on the Web that’s completely agnostic to technologies. In other words, it’s like what your mindset is when you approach building something, what your priorities are, what you value. Those kinds of things can last for a very, very long time, longer than any technologies.

For example, over time, on the Web, I’ve come to realize that progressive enhancement, which is completely technology agnostic—it’s just a way of thinking—is a good long-term investment. Even as technologies come and go, this approach of thinking in a sort of layered way and building up from the most supported thing to least supported thing works really well no matter what the technology is that comes along.

When Ajax came along in 2005, I could take the progressive enhancement approach and apply it to Ajax. When responsive design came along in 2010, I could take progressive enhancement and apply it to responsive design. When progressive Web apps came along, or whatever the new technology happens to be, I can take this fundamental approach and apply it. Those kinds of approaches, almost strategies I guess, tend to be really long-lasting.
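To make that layered thinking concrete, here’s a minimal sketch of progressive enhancement in practice. None of this is from the interview itself; the feature tests and file names are just illustrative:

```javascript
// The page already works with plain HTML. Each enhancement below is
// applied only if the browser supports it; older browsers simply
// get the baseline experience.

// Enhancement: offline support, only where service workers exist.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}

// Enhancement: lazy-load images, only where IntersectionObserver exists.
if ('IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        observer.unobserve(entry.target);
      }
    });
  });
  document.querySelectorAll('img[data-src]').forEach((img) => {
    observer.observe(img);
  });
}
```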

You should always be questioning them. You should always be saying, “Is this still relevant? Does this still work in this situation? Does it still apply?” Over a long time period, you start to get an answer to that. It’s like, “Yeah, actually, it’s funny. Even over 20 years, this particular strategy works really well,” whereas some other strategy that worked well 15 years ago, it turns out, just doesn’t even apply today because some technology has made it obsolete.

Yeah, fundamental things aren’t necessarily technologies. I think a Web developer is well served by getting to grips with those fundamental things but, at the same time, I’m not sure you could learn them first. I’m not sure you could say, “Okay, we’re going to learn about these fundamental things without touching a line of code.” You kind of have to learn them for yourself, by doing it, over time.

Do you think frameworks, for example, will be replaced by the establishment of long-lasting Web components with CSS routines where we can adjust everything? Is this the world we’re moving toward or is it going to stay simple after all?

Yes, absolutely, the things that people are pushing the envelope with, in terms of frameworks today, will become the standards of tomorrow. I think I would put good money on that because I’ve seen it happen. I’ve seen it happen in the past, generally.

It’s usually in JavaScript that we figure something out, we figure out what we want, and we make it work in JavaScript first. If it’s a really powerful idea that solves a common problem, it will find its way further down.

The classic example, early on (I’m talking about the ’90s now): the first pieces of JavaScript were things like image rollovers. Now we don’t need JavaScript for that because we have `:hover` in CSS. It was such a common use case that it moved down into the declarative layer.

The same with form validation. You used to have to write your own form validation. Now you can just use `required` in HTML and stuff like that. This pattern plays out over and over again. With responsive images, we figured out what we wanted in JavaScript and then we got it in HTML with the `picture` element.
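As a rough before-and-after sketch of that move down to the declarative layer (the field name here is made up):

```javascript
// What we used to write by hand in JavaScript…
document.querySelector('form').addEventListener('submit', (event) => {
  const email = document.querySelector('#email');
  if (!email.value) {
    event.preventDefault();
    alert('Please fill in your email address.');
  }
});

// …is now a single declarative attribute in the markup,
// and the browser does the validation for us:
// <input id="email" type="email" required>
```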

Yes, I think the goal of any good framework or library should be to make itself redundant. A classic example of this is jQuery. You don’t need jQuery today because all the stuff that jQuery did for you, like using CSS selectors to find DOM nodes, you can now do in the browser using `querySelector` and `querySelectorAll`. But of course, the only reason `querySelector` exists is because jQuery proved it was powerful and people wanted it.
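Here’s that progression in miniature (the selector and class name are invented for the example):

```javascript
// The jQuery idiom that proved the demand…
// var links = $('nav a');

// …became a native browser API:
const links = document.querySelectorAll('nav a');
links.forEach((link) => link.classList.add('nav-link'));
```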

I think, absolutely, a lot of the things that people are currently using frameworks and libraries for will become part of the standard, whether that has to do with the idea of a virtual DOM, state management, managing page transitions, giving us control over that. Yes, absolutely, that will find its way.

Now, whether the specific implementations will be these things like Web components, Houdini, and stuff like that, that’s interesting. We’ll see how that plays out. That’s all part of this bigger idea of the extensible Web where, in the past, we would get specific things like, here is the picture element, here is this new JavaScript API or whatever, here is querySelector. Whereas now, we’ve sort of been given, okay, here are the nuts and bolts of how a browser works. You build a solution and then we’ll see what happens. That’s an interesting idea.

I guess the theory is then that, okay, let’s say we get Web components and we get Houdini. Now we all start building our own widgets and we all start building our own CSS functions. The theory is that the ones that are really popular and really good will then get standardized and end up in the standards.

I’m not sure if that’s actually going to happen, because I wonder whether a standards body or browser maker would instead say, “Oh, well, we don’t need to make it part of the standard because everyone can just use the Web component; everyone can just use this Houdini thing.” We’ll see whether that works out.

I wonder if it’ll end up maybe like the situation with jQuery plugins. I mentioned that jQuery was great, it showed this is what people want, and it ended up as a standard. As well as jQuery the library, you also had jQuery plugins, the ecosystem where everybody built a thousand different carousels, a thousand different widgets. There was no quality control and you couldn’t figure out which was the right one to use. I worry that might be where we end up with things like Web components, Houdini, and stuff like that. But it’s an interesting idea, this extensible Web thing.

How will we build? How will the workflow or the tooling change and evolve as we move forward?

Well, that’s up to us. These things are created by people, so that’s something to be aware of. When people come to the Web, they think, “Oh, what should I learn? What’s the tool? What’s the methodology? How will we be building websites?” It’s almost like: what horse should I be backing here? What’s a safe bet?

You’ve got to step back and realize these things aren’t handed down from heaven, as though some kind of decision has been made and then passed on to us. We make those decisions. We decide how the Web gets built. There’s no central authority on this stuff. We collectively decide it.

You can choose how the future of Web development is going to look. You can choose a workflow that works for you and works for other people.

The Web is super flexible. You can choose to build in this layered way that I’ve talked about, progressive enhancement, a very resilient way of working, but you don’t have to. The Web doesn’t mandate that you work that way. You could choose to build in a way where you do everything in JavaScript: make JavaScript do the routing, the DOM, everything.

It’s a choice. It’s not something that, oh, in the future, we will all do this; in the future, we will all do that. In the future, you will make a choice about how you want to build.

I think, too often, though, when we’re making those decisions of how should I build or what’s the best way to build something on the Web, I worry that sometimes we think about it a bit too much from our perspective. What’s the best way for me to build on the Web? What’s going to make things easiest for me as a developer?

I don’t want to make things hard for us. I don’t want life to be difficult, but I do think our priorities should actually be what’s going to make things better for the user, even if that means more work from us.

If you’re getting paid, if you’re getting a paycheck to make things on the Web—then again, kind of going back to responsibility—it’s not about you now. You have a duty of care to the people who will be using the thing you’re building. Decisions about how to build on the Web shouldn’t just be made according to what you like, what you think is nice for you, what makes your life easy, what saves you typing, but should be more informed by what’s going to be better for users, what’s going to be more resilient, what’s going to leave nobody behind, you know, something that’s available to everyone.

I know I’m talking a lot in abstractions and vagaries, but the talk at View Source will go into a little more detail.

People are often disappointed in the state of the Web today. How do you see the Web evolving over the years? Do you think that privacy and ethics will become a standard?

I think the first thing to establish is that I don’t want to paint too rosy a picture of how things were in the past. There have always been problems. It’s just that we might have different problems today.

I remember the days of literal pop-up windows or pop-under windows, really annoying things that eventually browsers had to come in and stamp down on. That’s sort of happening today as well: with some of the egregious tracking and surveillance, you see Safari and Firefox taking steps to limit it.

In the past, I would have said, “Oh, we need to figure this out. We need to almost self-regulate,” you know, before it’s too late. At this point, I think, “No, it is too late,” and regulation is coming. GDPR is a first step in that and there will be more.

We deserve it. We had our chance to figure this stuff out for ourselves and do the right thing. We blew it, and things are really bad when it comes to surveillance and tracking.

A lot of the business models seem to be predicated on tracking. I’m saying tracking here, not advertising. Advertising isn’t the issue here. It’s specifically tracking.

It’s a bit of a shame that we talk about ad blockers as software, because most people are not blocking ads; what they’re blocking is tracking. Again, the same way that browsers had to step in and stop pop-up and pop-under windows, now we see ad blockers and tracking blockers stepping in to solve this.

We get this kind of battle, right? It’s almost like an arms race that’s been going on. I think regulation is going to come in on top of that. Guaranteed it’s going to happen.

You’re right; the fundamental business models in use today are built on surveillance and so are at odds with privacy, so they might need to change. Although, I don’t think advertising requires tracking. I know a lot of people talk as though it does. People say, “Oh, you can’t have advertising without tracking.” You absolutely can. Sponsorship and other kinds of advertising absolutely work.

The other thing is that tracking is not very good. If I’m advertised to with something that absolutely suits my needs then it kind of ceases to be advertising. It just becomes useful, right? That’s not what I experience. What I experience is just really badly targeted things. It’s not even like the tracking works. Yet, people claim tracking is essential.

Anyway, when I say business models need to change, I don’t mean advertising. I think advertising is actually a reasonable business model for some kinds of services. That connection between advertising and tracking, that needs to be severed.

Some people think that’s impossible. They say, “No, it’s just a law of nature that those two things go together.” That’s not true. We choose that.

The other thing to remember is that we sometimes look around at how things are today and can’t imagine it could be any different. We see one dominant search engine and so we think there could only ever be one dominant search engine, but that’s not true. That’s just the way things have turned out. We see a big social network like Facebook and we think, “Oh, there could only ever be one big social network.” Again, that’s just the way things have turned out in our situation.

I think the worst thing we can do is assume it’s inevitable that things end up the way they have. That’s particularly true when it comes to surveillance, tracking, and things that are anti-privacy: saying, “Well, that’s just the way it is. It’s inevitable and it couldn’t be any other way.” The first step is having the imagination to think about how things could be different, how things could have turned out differently, and then working towards making that a reality.

Also, this is a huge opportunity. People are clearly fed up with the tracking. They’re fed up with the surveillance. They don’t mind the advertising. There is a separation there. There is an opportunity here to take on these big organizations who literally can’t change their business model.

For someone like Google, tracking and surveillance are intrinsically linked to the core business model. That creates a huge opportunity. You can see Apple already starting to exploit it, and others could too: you can make privacy and a lack of tracking your selling point. It’s a way for a small player to disrupt the incumbents, because the incumbents are so reliant on tracking.

You can’t take on Facebook by trying to be another Facebook, but you can take on Facebook by being what Facebook can’t do. Not what Facebook won’t do, what Facebook literally can’t do. There’s actually a big opportunity there.

Yeah, when we talk about the good old days of keeping track of things through blogs, I kind of share that because I remember those days as well. But I see a bit of a resurgence too. Enough people are getting fed up with posting to silos like Twitter and Facebook that I see more and more people launching their own websites again and publishing there. I hope we’ll see more of that.

What are you most excited about on the Web these days?

Yeah, this is an interesting question because it’s happened over and over again over the course of my career, about 20 years now: I’ll think, “Oh, there’s nothing really exciting me,” and then something comes along and I get really excited. I was kind of puttering along when CSS came along: “Oh, this is really interesting.” Then, years later, Ajax: “Ah, this is really interesting.”

I think currently service workers are the things that get me excited, get me thinking about, oh, the potential for what the Web could be. The potential for the user experience on the Web is huge. I don’t even think the challenges are technological because it’s pretty straightforward using service workers.

It’s more about changing people’s expectations of the Web: the idea that you should be able to open a browser or hit a bookmark and have something happen even if you don’t have an Internet connection, or that even if you’re on a crappy network, things could still be quite reliable. That’s such a fundamental change, and that gets me very, very excited. It’s also, obviously, a huge challenge to change that.
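As a rough sketch of how straightforward the technical side can be, here’s a minimal service worker that falls back to a cache, and then to a pre-cached offline page, when the network fails. The file names are illustrative, not from the interview:

```javascript
// serviceworker.js
const cacheName = 'static-v1';

// Pre-cache an offline page at install time.
addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(cacheName).then((cache) => cache.addAll(['/offline.html']))
  );
});

// Try the network first; fall back to the cache; fall back to the offline page.
addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request)
      .catch(() => caches.match(event.request))
      .then((response) => response || caches.match('/offline.html'))
  );
});
```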

I have to say, over a long enough time period, the things that I start to think about start to be less and less about specific technologies and more and more about just the Web, in general, and the people making the Web.

I certainly have fears for the Web. They aren’t so much around technologies, like, “Oh, will one particular browser maker dominate?” or, “Will one particular framework be the only technology around?” Those things are concerning, but it’s more about, “Will the ability to make for the Web get reduced down to an elite kind of priesthood of a certain kind of person?” Frankly, the kind of person who looks like me: white, male, privileged, European. If we’re the only people who get to make for the Web, that will be terrible.

I think the real potential of the Web and the promise of the Web from the early days was that it’s for anyone. Anybody should be able to not just use the Web and consume it, but anyone should be able to add to it and build for it.

The thing that actually motivates me now is less about a specific technology and more about how can I try and get a more diverse range of people making the Web, making their own careers out of making for the Web rather than it being reduced, reduced, reduced to a certain kind of person. When I’m done with all this, if I look around and all the other people making websites look just like me, then I think we’ll have failed.

Friday, December 13th, 2019

The Origin Story of Container Queries—zachleat.com

Everyone wants it, but it sure seems like no one is actively working on it.

Zach traces the earliest inklings of container queries to an old blog post of Andy’s—back when he was at Clearleft—called Responsive Containers:

For fun, here’s some made-up syntax (which Jeremy has dubbed ‘selector queries’)…

Why `details` is Not an Accordion - daverupert.com

At the risk of being a broken record: HTML really needs `<accordion>`, `<tabs>`, `<dialog>`, `<dropdown>`, and `<tooltip>` elements. Not more “low-level primitives” but good ol’ fashioned, difficult-to-get-consensus-on elements.

Hear, hear!

I wish browsers would prioritize accessibility improvements over things like main thread scheduling optimization to unblock tracking pixels and the Sisyphean task of competing with native.

If we really want to win, let’s make it easy for everyone to access the Web.

Thursday, December 12th, 2019

How readable—Findings

The results are in for Daniel van Berzon’s most recent experiment into accurately measuring code readability. You can read the results and read about the methodology behind them.

Basil: Secret Santa as a Service | Trys Mudford

Trys writes up the process—and the tech (JAM)stack—he used to build basil.christmas.

Monday, December 9th, 2019

Level of Effort | Brad Frost

Brad gets ranty… with good reason.

Thursday, December 5th, 2019

I <3 the cascade! | Go Make Things

Chris makes the valid observation that JavaScript programmers who bemoan the “global scope” of CSS are handily forgetting that JavaScript also has global scope by default.

JS is also global by default. We use IIFEs and wrapper functions to add scope.

And for all this talk about CSS being global, you can actually scope styles when you need to. It’s more or less the same way you do it in JavaScript.
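As a rough illustration of the parallel Chris is drawing (the variable is made up for the example):

```javascript
// JavaScript is global by default…
var colour = 'red';

// …and an IIFE (immediately invoked function expression) adds scope,
// much as a more specific selector scopes a CSS rule.
(function () {
  var colour = 'blue'; // shadows the global inside this function only
  console.log(colour); // "blue"
})();

console.log(colour); // "red"
```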

Tuesday, December 3rd, 2019

Motion Paths - Past, Present and Future | Codrops

This is a superbly in-depth and easy-to-follow article from Cassie—everything you need to know about motion paths in SVG and CSS! It’s worth reading just for the wonderful examples.

Music and Web Design | Brad Frost

I feel my trajectory as a musician maps to the trajectory of the web industry. The web is still young. We’re all still figuring stuff out and we’re all eager to get better. In our eagerness to get better, we’re reaching for more complexity. More complex abstractions, build processes, and tools. Because who wants to be bored playing in 4/4 when you can be playing in 7/16?

I hope we in the web field will arrive at the same realization that I did as a musician: complexity is not synonymous with quality.

Can I get an “Amen!”?