Tags: conference


Tuesday, May 23rd, 2017

Evaluating Technology – Jeremy Keith – btconfDUS2017 on Vimeo

I wasn’t supposed to speak at this year’s Beyond Tellerrand conference, but alas, Ellen wasn’t able to make it so I stepped in and gave my talk on evaluating technology.

Friday, May 5th, 2017

Patterns Day speakers

Ticket sales for Patterns Day are going quite, quite briskly. If you’d like to come along, but you don’t yet have a ticket, you might want to remedy that. Especially when you hear about who else is going to be speaking…

Sareh Heidari works at the BBC building websites for a global audience, in as many as twenty different languages. If you want to know about strategies for using CSS at scale, you definitely want to hear this talk. She just stepped off stage at the excellent CSSconf EU in Berlin, and I’m so happy that Sareh’s coming to Brighton!

Patterns Day isn’t the first conference about design systems and pattern libraries on the web. That honour goes to the Clarity conference, organised by the brilliant Jina Anne. I was gutted I couldn’t make it to Clarity last year. By all accounts, it was excellent. When I started to form the vague idea of putting on an event here in the UK, I immediately contacted Jina to make sure she was okay with it—I didn’t want to step on her toes. Not only was she okay with it, but she really wanted to come along to attend. Well, never mind attending, I said, how about speaking?

I couldn’t be happier that Jina agreed to speak. She has had such a huge impact on the world of pattern libraries through her work with the Lightning design system, Clarity, and the Design Systems Slack channel.

The line-up is now complete. Looking at the speakers, I find myself grinning from ear to ear—it’s going to be an honour to introduce each and every one of them.

This is going to be such an excellent day of fun and knowledge. I can’t wait for June 30th!

Tuesday, April 18th, 2017

Back to the Cave – Frank Chimero

Frank has published the (beautifully designed) text of his closing XOXO keynote.

Wednesday, April 12th, 2017

Jeremy Keith at Render 2017 - YouTube

Here’s the opening keynote I gave at the Render Conference in Oxford. The talk is called Evaluating Technology:

We work with technology every day. And every day it seems like there’s more and more technology to understand: graphic design tools, build tools, frameworks and libraries, not to mention new HTML, CSS and JavaScript features landing in browsers. How should we best choose which technologies to invest our time in? When we decide to weigh up the technology choices that confront us, what are the best criteria for doing that? This talk will help you evaluate tools and technologies in a way that best benefits the people who use the websites that we are designing and developing. Let’s take a look at some of the hottest new web technologies and together we will dig beneath the hype to find out whether they will really change life on the web for the better.

Tuesday, April 11th, 2017

Announcing Patterns Day: June 30th

Gather ‘round, my friends. I’ve got a big announcement.

You should come to Brighton on Friday, June 30th. Why? Well, apart from the fact that you can have a lovely Summer weekend by the sea, that’s when a brand new one-day event will be happening:

Patterns Day!

That’s right—a one-day event dedicated to all things patterny: design systems, pattern libraries, style guides, and all that good stuff. I’m putting together a world-class line-up of speakers. So far I’ve already got:

It’s going to be a brain-bendingly good day of ideas, case studies, processes, and techniques with something for everyone, whether you’re a designer, developer, product owner, content strategist, or project manager.

Best of all, it’s taking place in the splendid Duke Of York’s Picture House. If you’ve been to Remy’s FFconf then you’ll know what a great venue it is—such comfy, comfy seats! Well, Patterns Day will be like a cross between FFconf and Responsive Day Out.

Tickets are £150+VAT. Grab yours now. Heck, bring the whole team. Let’s face it, this is a topic that everyone is struggling with, so we’re all going to benefit from getting together with our peers for a day to hammer out the challenges of pattern libraries and design systems.

I’m really excited about this! I would love to see you in Brighton on the 30th of June for Patterns Day. It’s going to be fun!

Monday, April 10th, 2017

Getting griddy with it

I had the great pleasure of attending An Event Apart Seattle last week. It was, as always, excellent.

It’s always interesting to see themes emerge during an event, especially when those thematic overlaps haven’t been planned in advance. Jen noticed this one:

I remember that being a theme at An Event Apart San Francisco too, when it seemed like every speaker had words to say about ill-judged use of Bootstrap. That theme was certainly in my presentation when I talked about “the fallacy of assumed competency”:

  1. large company X uses technology Y,
  2. company X must know what they are doing because they are large,
  3. therefore technology Y must be good.

Perhaps “the fallacy of assumed suitability” would be a better term. Heydon calls it “the ‘made at Facebook’ fallacy.” But I also made sure to contrast it with the opposite extreme: “Not Invented Here syndrome”.

As well as over-arching themes, it was also interesting to see which technologies were hot topics at An Event Apart. There was one clear winner here—CSS Grid Layout.

Microsoft—a sponsor of the event—used An Event Apart as the place to announce that Grid is officially moving into development for Edge. Jen talked about Grid (of course). Rachel talked about Grid (of course). And while Eric and Una didn’t talk about it on stage, they’ve both been writing about the fun they’ve been having with Grid. Una wrote about 3 CSS Grid Features That Make My Heart Flutter. Eric is documenting the overhaul of his site with Grid. So when we were all gathered together, that’s what we were nerding out about.

The CSS Squad.

There are some great resources out there for levelling up in Grid-fu:

With Jen’s help, I’ve been playing with CSS Grid on a little site that I’m planning to launch tomorrow (he said, foreshadowingly). It took me a while to get my head around it, but once it clicked I started to have a lot of fun. “Fun” seems to be the overall feeling around this technology. There’s something infectious about the excitement and enthusiasm that’s returning to the world of layout on the web. And now that browser support is pretty much great across the board, we can start putting that fun into production.
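If you haven’t dipped a toe in yet, the basic shape of it is surprisingly small. This isn’t from that little site of mine; it’s just a hypothetical .photos class to show the kind of thing Grid makes easy:

/* a responsive grid of columns, no media queries required */
.photos {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(10em, 1fr));
  grid-gap: 1em;
}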

Tuesday, April 4th, 2017

Sketchnotes from AEA Seattle | Krystal Higgins

I love Krystal’s sketchnotes from my talk at An Event Apart Seattle. Follows on nicely from Ethan’s too.

LukeW | An Event Apart: Evaluating Technology

Luke is a live-blogging machine. Here’s the notes he made during my talk at An Event Apart Seattle.

If it reads like a rambling hodge-podge of unconnected thoughts, I could say that you had to be there …but it kinda was a rambling hodge-podge of unconnected thoughts.

Monday, March 13th, 2017

In AMP we trust

AMP Conf was one of those deep dive events, with two days dedicated to one single technology: AMP.

Except AMP isn’t really one technology, is it? And therein lies the confusion. This was at the heart of the panel I was on. When we talk about AMP, we could be talking about one of three things:

  1. The AMP format. A bunch of web components. For instance, instead of using an img element on an AMP page, you use an amp-img element instead.
  2. The AMP rules. There’s one JavaScript file, hosted on Google’s servers, that turns those web components from spans into working elements. No other JavaScript is allowed. All your styles must be in a style element instead of an external file, and there’s a limit on what you can do with those styles.
  3. The AMP cache. The source of most confusion—and even downright enmity—this is what’s behind the fact that when you launch an AMP result from Google search, you don’t go to another website. You see Google’s cached copy of the page instead of the original.

The first piece of AMP—the format—is kind of like a collection of marginal gains. Where the img element might have some performance issues, the amp-img element optimises for perceived performance. But if you just used the AMP web components, it wouldn’t be enough to make your site blazingly fast.

The second part of AMP—the rules—is where the speed gains start to really show. You can’t have an external style sheet, and crucially, you can’t have any third-party scripts other than the AMP script itself. This is key to making AMP pages super fast. It’s not so much about what AMP does; it’s more about what it doesn’t allow. If you never used a single AMP component, but stuck to AMP’s rules disallowing external styles and scripts, you could easily make a page that’s even faster than what AMP can do.
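To make that concrete, here’s roughly what a stripped-down AMP page looks like (a hypothetical sketch rather than a complete valid document): the one permitted script, inline styles only, and custom elements like amp-img in place of their HTML equivalents.

<html amp>
  <head>
    <!-- the only JavaScript allowed: the AMP runtime from Google's servers -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <!-- all styling lives in one style element, subject to a size limit -->
    <style amp-custom>
      article { max-width: 40em; margin: 0 auto; }
    </style>
  </head>
  <body>
    <article>
      <!-- amp-img instead of img; explicit dimensions reserve space before loading -->
      <amp-img src="photo.jpg" width="600" height="400" layout="responsive"></amp-img>
    </article>
  </body>
</html>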

At AMP Conf, Natalia pointed out that The Guardian’s non-AMP pages beat out the AMP pages for performance. So why even have AMP pages? Well, that’s down to the third, most contentious, part of the AMP puzzle.

The AMP cache turns the user experience of visiting an AMP page from fast to instant. While you’re still on the search results page, Google will pre-render an AMP page in the background. Not pre-fetch, pre-render. That’s why it opens so damn fast. It’s also what causes the most confusion for end users.

From my unscientific polling, the behaviour of AMP results confuses the hell out of people. The fact that the page opens instantly isn’t the problem—far from it. It’s the fact that you don’t actually go to another page. Technically, you’re still on Google. An analogous mental model would be an RSS reader, or an email client: you don’t go to an item or an email; you view it in situ.

Well, that mental model would be fine if it were consistent. But in Google search, only some results will behave that way (the AMP pages) and others will behave just like regular links to other websites. No wonder people are confused! Some search results take them away and some search results keep them on Google …even though the page looks like a different website.

The price that we pay for the instantly-opening AMP pages from the Google cache is the URL. Because we’re looking at Google’s pre-rendered copy instead of the original URL, the address bar is not pointing to the site the browser claims to be showing. Everything in the body of the browser looks like an article from The Guardian, but if I look at the URL (which is what security people have been telling us for years is important to avoid being phished), then I’ll see a domain that is not The Guardian’s.

But wait! Couldn’t Google pre-render the page at its original URL?

Yes, they could. But they won’t.

This was a point that Paul kept coming back to: trust. There’s no way that Google can trust that someone else’s URL will play by the AMP rules (no external scripts, only loading embedded content via web components, limited styles, etc.). They can only trust the copies that they themselves are serving up from their cache.

By the way, there was a joint AMP/search panel at AMP Conf with representatives from both teams. As you can imagine, there were many questions for the search team, most of which were Glomar’d. But one thing that the search people said time and again was that Google was not hosting our AMP pages. Now I don’t know if they were trying to make some fine-grained semantic distinction there, but that’s an outright falsehood. If I click on a link, and the URL I get taken to is a Google property, then I am looking at a page hosted by Google. Yes, it might be a copy of a document that started life somewhere else, but if Google are serving something from their cache, they are hosting it.

This is one of the reasons why AMP feels like such a bait’n’switch to me. When it first came along, it felt like a direct competitor to Facebook’s Instant Articles and Apple News. But the big difference, we were told, was that you get to host your own content. That appealed to me much more than having Facebook or Apple host the articles. But now it turns out that Google do host the articles.

This will be the point at which Googlers will say no, no, no, you can totally host your own AMP pages …but you won’t get the benefits of pre-rendering. But without the pre-rendering, what’s the point of even having AMP pages?

Well, there is one non-cache reason to use AMP and it’s a political reason. Beleaguered developers working for publishers of big bloated web pages have a hard time arguing with their boss when they’re told to add another crappy JavaScript tracking script or bloated library to their pages. But when they’re making AMP pages, they can easily refuse, pointing out that the AMP rules don’t allow it. Google plays the bad cop for us, and it’s a very valuable role. Sarah pointed this out on the panel we were on, and she was spot on.

Alright, but what about The Guardian? They’ve already got fast pages, but they still have to create separate AMP pages if they want to get the pre-rendering benefits when they show up in Google search results. Sorry, says Google, but it’s the only way we can trust that the pre-rendered page will be truly fast.

So here’s the impasse we’re at. Google have provided a list of best practices for making fast web pages, but the only way they can truly verify that a page is sticking to those best practices is by hosting their own copy, URLs be damned.

This was the crux of Paul’s argument when he was on the Shop Talk Show podcast (it’s a really good episode—I was genuinely reassured to hear that Paul is not gung-ho about drinking the AMP Kool Aid; he has genuine concerns about the potential downsides for the web).

Initially, I accepted this argument that Google just can’t trust the rest of the web. But the more I talked to people at AMP Conf—and I had some really, really good discussions with people away from the stage—the more I began to question it.

Here’s the thing: the regular Google search can’t guarantee that any web page is actually 100% the right result to return for a search. Instead there’s a lot of fuzziness involved: based on the content, the markup, and the number of trusted sources linking to this, it looks like it should be a good result. In other words, Google search trusts websites to—by and large—do the right thing. Sometimes websites abuse that trust and try to game the system with sneaky tricks. Google responds with penalties when that happens.

Why can’t it be the same for AMP pages? Let me host my own AMP pages (maybe even host my own AMP script) and then when the Googlebot crawls those pages—the same as it crawls any other pages—that’s when it can verify that the AMP page is abiding by the rules. If I do something sneaky and trick Google into flagging a page as fast when it actually isn’t, then take my pre-rendering reward away from me.

To be fair, Google has very, very strict rules about what and how to pre-render the AMP results it’s caching. I can see how allowing even the potential for a false positive would have a negative impact on the user experience of Google search. But c’mon, there are already false positives in regular search results—fake news, spam blogs. Googlers are smart people. They can solve—or at least mitigate—these problems.

Google says it can’t trust our self-hosted AMP pages enough to pre-render them. But they ask for a lot of trust from us. We’re supposed to trust Google to cache and host copies of our pages. We’re supposed to trust Google to provide some mechanism to users to get at the original canonical URL. I’d like to see trust work both ways.

Wednesday, March 8th, 2017

Let’s Make the World We Want To Live In | Big Medium

Josh gives a thorough roundup of the Interaction ‘17 event he co-chaired.

“I think I’ve distilled what this conference is all about,” Jeremy Keith quipped to me during one of the breaks. “It’s about how we’ll save the world through some nightmarish combination of virtual reality, chatbots, and self-driving cars.”

AMP and the Web - TimKadlec.com

Tim watched the panel discussion at AMP Conf. He has opinions.

Optimistically, AMP may be a stepping stone to a better performant web. But I still can’t shake the feeling that, in its current form, it’s more of a detour.

AMP Conf: Day 1 Live Stream - YouTube

Here’s the panel I was on at the AMP conference. It was an honour and a pleasure to share the stage with Nicole, Sarah, Gina, and Mike.

Wednesday, March 1st, 2017

All You Need is Link | Rhizome

A lovely piece of early web history—Olia Lialina describes the early Net Art scene in 2000.

The address bar is the author’s signature. It’s where action takes place, and it’s the action itself. The real action on the web doesn’t happen on the page with its animated GIFs or funny scripts, it’s concentrated in the address bar.

And how wonderful that this piece is now published on Rhizome, an online institution so committed to its mission that it’s mentioned in this seventeen year old article.

Tuesday, February 21st, 2017

[this is aaronland] fault lines — a cultural heritage of misaligned expectations

When Aaron talks, I listen. This time he’s talking about digital (and analogue) preservation, and how that can clash with licensing rules.

It is time for the sector to pick a fight with artists, and artist’s estates and even your donors. It is time for the sector to pick a fight with anyone that is preventing you from being allowed to have a greater — and I want to stress greater, not total — license of interpretation over the works which you are charged with nurturing and caring for.

It is time to pick a fight because, at least on bad days, I might even suggest that the sector has been played. We all want to outlast the present, and this is especially true of artists. Museums and libraries and archives are a pretty good bet if that’s your goal.

Saturday, December 24th, 2016

16 Web Conference Talks You Need to Watch This Holiday

Ignore the clickbaity title—you don’t need to do anything this holiday; that’s why it’s a holiday. But there are some great talks here.

The list is marred only by the presence of my talk Resilience, the inclusion of which spoils an otherwise …ah, who am I kidding? I’m really proud of that talk and I’m very happy to see it on this list.

Wednesday, November 16th, 2016

Resilience retires

I spoke at the GOTO conference in Berlin this week. It was the final outing of a talk I’ve been giving for about a year now called Resilience.

Looking back over my speaking engagements, I reckon I must have given this talk—in one form or another—about sixteen times. If by some statistical fluke or through skilled avoidance strategies you managed not to see the talk, you can still have it rammed down your throat by reading a transcript of the presentation.

That particular outing is from Beyond Tellerrand earlier this year in Düsseldorf. That’s one of the events that recorded a video of the talk. Here are all the videos of it I could find:

Or, if you prefer, here’s an audio file. And here are the slides but they won’t make much sense by themselves.

Resilience is a mixture of history lesson and design strategy. The history lesson is about the origins of the internet and the World Wide Web. The design strategy is a three-pronged approach:

  1. Identify core functionality.
  2. Make that functionality available using the simplest technology.
  3. Enhance!

And if you like that tweet-sized strategy, you can get it on a poster. Oh, and check this out: Belgian student Sébastian Seghers published a school project on the talk.

Now, you might be thinking that the three-headed strategy sounds an awful lot like progressive enhancement, and you’d be right. I think every talk I’ve ever given has been about progressive enhancement to some degree. But with this presentation I set myself a challenge: to talk about progressive enhancement without ever using the phrase “progressive enhancement”. This is something I wrote about last year—if the term “progressive enhancement” is commonly misunderstood by the very people who would benefit from hearing this message, maybe it’s best to not mention that term and talk about the benefits of progressive enhancement instead: robustness, resilience, and technical credit. I think that little semantic experiment was pretty successful.

While the time has definitely come to retire the presentation, I’m pretty pleased with it, and I feel like it got better with time as I adjusted the material. The most common format for the talk was 40 to 45 minutes long, but there was an extended hour-long “director’s cut” that only appeared at An Event Apart. That included an entire subplot about Arthur C. Clarke and the invention of the telegraph (I’m still pretty pleased with the segue I found to weave those particular threads together).

Anyway, with the Resilience talk behind me, my mind is now occupied with the sequel: Evaluating Technology. I recently shared my research material for this one and, as you may have gathered, it takes me a loooong time to put a presentation like this together (which, by the same token, is one of the reasons why I end up giving the same talk multiple times within a year).

This new talk had its debut at An Event Apart in San Francisco two weeks ago. Jeffrey wrote about it and I’m happy to say he liked it. This bodes well—I’m already booked in for An Event Apart Seattle in April. I’ll also be giving an abridged version of this new talk at next year’s Render conference.

But that’s it for my speaking schedule for now. 2016 is all done and dusted, and 2017 is looking wide open. I hope I’ll get some more opportunities to refine and adjust the Evaluating Technology talk at some more events. If you’re a conference organiser and it sounds like something you’d be interested in, get in touch.

In the meantime, it’s time for me to pack away the Resilience talk and wheel it down into the archives, just like the closing scene of Raiders Of The Lost Ark. The music swells. The credits roll. The image fades to black.

State of the Web: Evaluating Technology | Jeremy Keith - Zeldman on Web & Interaction Design

Jeffrey likes the new talk I debuted at An Event Apart San Francisco. That’s nice!

Summarizing it here is like trying to describe the birth of your child in five words or less. Fortunately, you can see Jeremy give this presentation for yourself at several upcoming An Event Apart conference shows in 2017.

Monday, November 14th, 2016

SmashingConf Barcelona 2016 - Jeremy Keith on Resilience on Vimeo

Here’s the video of the talk I gave at Smashing Conference in Barcelona last month—one of its last outings.

Thursday, November 10th, 2016

From Pages to Patterns – Charlotte Jackson - btconfBER2016 on Vimeo

The video of Charlotte’s excellent pattern library talk that she presented yesterday in Berlin.

Tuesday, November 8th, 2016

Resilience

A presentation from the Beyond Tellerrand conference held in Düsseldorf in May 2016. I also presented a version of this talk at An Event Apart, Smashing Conference, Render, and From The Front.

Thank you very much, Marc. And actually, not just for inviting me back to speak again this year, but also, as well as organising this conference, Marc also helped organise IndieWebCamp on the weekend, which was fantastic, so thank you for that, Marc. I think some of the sipgate people are back there where we had Indie Web Camp, and I want to thank them again. They did a fantastic job doing it. So thank you, Marc and sipgate.

Yeah, as Marc said, I get the job of opening up day two. This is known as the hangover slot, right? But I’ll see what I can do. I tell you what. I’ll open up day two of Beyond Tellerrand with a story or, rather, a creation myth.

lo

You’ve probably heard that the Internet was created to withstand a nuclear attack, right? A network resilient enough to withstand a nuclear attack. That’s actually not quite true. What is true is that Paul Baran, who was at the RAND Corporation, was looking into the most resilient shape for a network. Amongst his findings, one of the things he discovered was that splitting your information up into discrete packets made for a more resilient network. This is where the idea of packet switching comes from: you take the entire message, chop it up into little packets, ship those packets around the network by whatever route happens to be best, and then reassemble them at the other end.

Now this idea of packet switching that Paul Baran was coming up with came across the radar of Leonard Kleinrock, who was working on the ARPANET—later the DARPANET—from the Advanced Research Projects Agency. This is the idea of linking up networks, effectively computer networks. Now, this is really, really early days here. It was 1969 when the very first message was sent on the ARPANET, and it was simply an instruction to log in from one machine to another machine, but it crashed after two characters. So that was the first message ever sent on the ARPANET, which was kind of the precursor to the Internet.

So they kept working on it, right? They ironed out the bugs, and this network of networks grew and grew throughout the ’70s. But the point at which it really morphed into being the Internet was when they had to tackle the problem of making sure that all these different networks—speaking different languages, using different programs—could all be understood by one another. There needed to be some kind of low-level protocol that this inter-network could use to make sure that these packets were all being shuffled around in an understandable way. And that’s where these two gentlemen come in, Bob Kahn and Vint Cerf, because they created TCP/IP: the Transmission Control Protocol and the Internet Protocol.

Now what’s interesting is that, back then, Bob Kahn and Vint Cerf weren’t concerned about making the network resilient to nuclear attack. They were young, idealistic men, and what they were concerned about was making a network that was resilient to any kind of top-down control. That was baked into the design of these protocols: the network would have no centre. The network has no single decision point. You don’t have to ask to add a node to the network. You can just do it.

I think that’s really the secret sauce of the Internet: the fact that it is, by design, a dumb network, right? What I mean by that is that the network doesn’t care at all about the contents of the packets being switched around and moved around. It just cares about getting those packets to their final destination, and no particular kind of information is prioritised over any other kind. This turns out to be really, really powerful.

The whole idea is that TCP/IP is as simple as possible. In fact, they used to even say that theoretically you could implement TCP/IP using two tin cans and a piece of string. It’s very, very low level.

What you can then do on top of this low level, dumb, simple protocol is add more protocols, more complex protocols. And you could just go ahead and create these extra protocols. You can create protocols for sending and receiving email, Telnet, File Transfer Protocol, Gopher, all sitting on top of TCP/IP.

Again, if you want to create a new protocol, you can just do it. You don’t have to ask for anyone’s permission. You just create the new protocol.

The tricky thing is getting people to use your protocol because then you really start to run into Metcalfe’s law:

The value of a network is proportional to the square of the number of users of the network

…which basically means the more people who use a network, the more powerful it is. The very first person who had a fax machine had a completely useless thing. But as soon as one other person had a fax machine, it was twice as powerful, and so on. You have to convince people to use the protocol you’ve just created that sits on top of TCP/IP.
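As a back-of-the-envelope formula (my own notation, not something from the talk itself):

V(n) \propto n^2, \quad \text{since with } n \text{ users there are } \binom{n}{2} = \frac{n(n-1)}{2} \text{ possible connections, which grows roughly as } n^2.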

Vague but exciting…

And that was the situation with a newly invented protocol called Hypertext Transfer Protocol. It was just one part of a three-part stack in a project called World Wide Web: Hypertext Transfer Protocol for sending and receiving information, URLs for addressability, and a very, very simple format, HTML, for putting links together. Very, very simple. These three pieces form the World Wide Web project that was created by Tim Berners-Lee when he was working at CERN. What I kind of love is that at this point it only existed on his computer, and he still called it the World Wide Web. He was pretty confident.

All these different influences go into the creation of the web, and I think part of it is where the web was created because it was created here at CERN, which is just the most amazing place if you ever get the chance to go. It’s unbelievable what human beings are doing there, right?

I mean recreating the conditions of the start of the universe, smashing particles together near the speed of light in this 20-mile wide ring under the border of France and Switzerland. Mind-blowing stuff. And of course there’s lots and lots of data being generated. There’s so much logistical overhead involved in just getting this started and building this machine and doing the experiments, so managing the information is quite a problem, as you can probably imagine.

This is the problem that Tim Berners-Lee was trying to tackle while he was there. He was a computer scientist at CERN. And he had this idea that hypertext could be a really powerful way of managing information, and this wasn’t his first time trying to create some kind of hypertext system.

In the ’80s, he had tried to create a hypertext system called Enquire. It was named after this Victorian book on manners called Enquire Within Upon Everything, which I always thought would be a great name for the World Wide Web: Enquire Within Upon Everything.

There are these different influences feeding in. There’s this previous work with Enquire. There’s the architecture of the Internet itself that he’s going to put this other protocol on top of. There’s the culture at CERN where it isn’t business driven. It’s for pure scientific research, right? All of these things are feeding in and influencing Tim Berners-Lee.

He puts a proposal together, and it doesn’t have the sexiest title, right? He just called it Information Management: A Proposal. But his boss at CERN, Mike Sendall, he must have seen something in this because he gave Tim Berners-Lee the green light by scrawling across the top, "Vague but exciting…" and this is how the web came to be made: vague but exciting…

Right from the start, Tim Berners-Lee understood that the trick wasn’t creating the best protocol or the best format. The trick was getting people to use it, right? To accomplish that, I think he had a very keen insight. Just like TCP/IP, he understood it needed to be as simple as possible, just like that apocryphal Einstein quote that everything should be as simple as possible, but no simpler. That’s probably going to help you to encourage people to use what you’re building if it’s just as simple as it could be, but still powerful.

Looking at those building blocks—the protocol, the addressability, the format—I think that’s true of all these building blocks. These are all flawed in some way. None of these are perfect, far from it. They all have issues. We’ve been fixing the issues for years. But they’re all good enough and all simple enough that the World Wide Web was able to take off in the way it did.

The trick… is to make sure that each limited mechanical part of the web, each application, is within itself composed of simple parts that will never get too powerful.

HTML

Just looking at one piece of this, let’s just look at HTML. It’s a very simple format. To begin with, there was no official version of HTML. It was just something Tim Berners-Lee threw together. There was a document called HTML Tags, presumably written by Tim Berners-Lee, that outlined the entirety of HTML, which was a total of 21 elements. That was it, 21 elements. Even those 21 elements, Tim Berners-Lee didn’t invent. He didn’t create them, most of them. Most of them he stole, he borrowed from an existing format. See, the people at CERN were already using a markup language called CERN SGML, Standard Generalised Markup Language. And so by taking what they were already familiar with and putting that into HTML, it was more likely that people would use HTML.

Now what I find amazing is that we’ve gone from having 21 elements in HTML tags, that first document, to having 100 more elements now, and yet it’s still the same language. I find that amazing. It’s still the same language that was created 25 years ago. It’s grown an extra 100 elements in there, and yet it’s still the same language.

If you’re familiar at all with computer formats, this is very surprising. If you tried to open a Word processing document from the same time as when Tim Berners-Lee was creating the World Wide Web project, good luck. You’d probably have to run some emulation just to get the thing open. And yet you could open an HTML document from back then in a browser today.

How is it possible that this one language can keep growing like that over 25 years, gaining another hundred elements? Well, I think it comes down to a design decision in how HTML is handled by browsers, by parsers. Okay, we’re going to get very basic here, but think for a minute about what happens when a browser sees an HTML element. You’ve got an opening tag. You’ve got a closing tag. You’ve got some content in between. Maybe there’ll be some attributes on the opening tag. This is basically an HTML element. What a browser does is display the content in between the opening and closing tags.

<div>
show me
</div>

Now, for some elements it will do extra stuff. Some elements have extra goodness. Maybe it’s styling. Maybe it’s behaviour. The A element is very special and so on. But, by default, an HTML element just displays the content between the opening and closing tags. Okay. You all know this.

What’s interesting is what happens if you give a browser an HTML element that doesn’t exist. It’s not in HTML. The browser doesn’t recognise it. Still got an opening tag. Still got a closing tag. Still got content in between. Well, what the browser does is it still shows that content in between the opening and closing tags. Okay, you all know this too.

<foo>
show me
</foo>

See what’s interesting is what the browser does not do. The browser does not throw an error to the user. The browser does not stop parsing the document at this point and refuse to parse any further. It just skips over what it doesn’t understand, shows that content, and carries on to the next element.

Well, this turns out to be enormously powerful. This is how we get to have 100 new elements since the birth of HTML because, as we add new elements into the language, we know exactly how older browsers will behave when they see these new elements. They’ll just ignore the tags they don’t understand and display the content. That’s how we can add to the language.

<main>
show me
</main>

In fact, we can make use of this design decision for some more complex elements. Let’s take canvas. If we know that an older browser will display the content between tags for elements it doesn’t understand, that means we can put fallback content between those tags and we can have newer browsers not display the content between the opening and closing tag. Very powerful. It means you get to use things like canvas, video, audio, and still provide some fallback content.

<canvas>
hide me
</canvas>

This is not an accident. This is by design. The canvas element was originally a proprietary element created by Apple. As so often happens with the way standards get done, the other browsers looked at what one browser was doing with a proprietary thing and went, "Oh, that’s a good idea. We’re going to do that too," and they standardised on it. But when it was a proprietary element, it was a standalone element. It didn’t have a closing tag, right? It was standalone, like image, meta, or link.

When it became standardised, they gave it a closing tag specifically so we could use this pattern: so that we could put fallback content in there and safely use these new, exciting elements while still providing a fallback for older browsers. So I really like that design pattern. Some real thought has gone into that.
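So in practice the pattern looks something like this (a made-up chart example, not from any particular site):

<canvas id="chart" width="400" height="300">
  <!-- older browsers ignore the canvas behaviour and simply show this content -->
  <p>Sorry, your browser can’t draw this chart.
  <a href="figures.csv">Download the raw figures instead</a>.</p>
</canvas>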

There’s an interesting pattern I’d like to look at here as well, another HTML element. Now the image element has a very interesting back story. Looking at it, even from here, you can say "Wait a minute. There’s no closing tag," and it would actually be much better if we had an opening image tag, a closing image tag, and then we could put fallback content in between the opening and closing tags, like a text description of what’s in the image.

<img src alt>

But, no. Instead, we’re stuck with this alt attribute where we have to put this fallback content. It seems like a bit of a weird design decision. Well, what happened was, in the early days of the web when everybody seemed to be making a web browser, there was this mailing list for all the people making web browsers.

You have to remember. Back then there were no images on the web, but this topic came up. How could we have images on the World Wide Web? It’s being discussed, and they’re throwing ideas backwards and forwards like, oh, maybe it should be called icon, or maybe it should be called object because maybe there’ll be things other than images one day on the web.

This is all going on and Marc Andreessen, who is making the Mosaic browser, he chimes in and goes, "Uh, listen. I’ve just shipped this. It’s called I-M-G. You put the path in the src attribute, and it’s landing in the next version of Mosaic." Everyone else went, "Okay."

Because what they had was they had rough consensus. But, more importantly, they had running code. And the running code kind of trumped any sort of theoretical purity. It does mean, though, we’re stuck with these decisions.

There’s all sorts of weird stuff in HTML. You might wonder why does it work that way and not another way. It usually goes back to some historical reason like that.

Well, this worked well enough that we had this img element for throwing in, say, bitmap images. But there is a certain clash between this inherent flexibility of the web when it comes to text and bitmap images that have an inherent width and height. You put text on the web, and it doesn’t matter what the width of the browser is. It’s just going to break onto multiple lines. The web is very flexible when it comes to text.

When it comes to images, not so much, because images are so fixed, so there’s kind of a clash between the web and bitmap images. That really came to a head with the rise of responsive design. It was like, oh, shit. What are we going to do now? We’ve got these fixed things, and yet sometimes we want them to be different sizes.

The responsive images problem has been solved, and again, the design decisions there are very smart. One way of solving it is the srcset attribute, right? You can put in other images and say to the browser, "Look, here are some other images with a higher pixel density, for example," and let the browser choose. Or we’ve got the picture element that you can wrap the image element in, and you can provide even more images for the browser to choose from, and provide media queries in there, and all that stuff.

<img src alt srcset>


But, but, but… With both of those, you still have to have an img element. There’s no way you can leave it out. If you try to use picture without an img element, it just won’t work. And you have to have a src attribute, because the way these things work (both the srcset attribute and the source elements) is that they update the value of the src attribute, right? So you can’t leave off that initial src attribute, which means you have to provide some backwards compatibility. If you try to just use the new stuff without the good old-fashioned src attribute, it just won’t work. That too is deliberate, and that’s a really nice design decision. Very forward-thinking, but also making sure we know how things are going to behave in older browsers.

<picture>
  <source srcset>
  <source srcset>
  <img src alt srcset>
</picture>

Again, the reason why we can do this with HTML is because of how it handles errors, how it handles stuff it doesn’t recognise. You give this to an older browser, it just skips over the picture stuff, the source stuff. Sees the image. If it understands that, that’s what it uses, and it doesn’t throw an error, and it doesn’t stop parsing the file at that point. So HTML is very error tolerant, I guess.

CSS

It’s similar with CSS. It has a very similar way of handling errors. Now, I know a lot of people, especially from the JavaScript world, really like to hate on CSS, but I kind of love CSS and I’ll tell you why. Think about all the CSS that’s out there, and there’s a lot of CSS out there because there are a lot of websites out there, all using CSS. The possible combinations are endless. Yet all of it, all of it, comes down to one pattern: selectors, properties, values. That’s it. That’s all the CSS that’s ever been written: one simple, little pattern.

selector {
  property: value;
}

The tricky part is, of course, knowing the vocabulary of all the selectors and all the properties and all the values. But the underlying pattern is super simple: a couple of special characters so that the machines can parse it, but one underlying pattern behind all of the CSS ever written. I think that’s really beautiful.

Again, we’ve been able to grow CSS over time, just add in new selectors, new properties, new values. The reason we can do that is because of how browsers handle CSS that they don’t understand. If you give a browser a selector that doesn’t exist, well, it’s just like giving it a selector that doesn’t match anything in the document. It just ignores that chunk of curly braces and skips onto the next one. If you give it a property it doesn’t understand, it just skips onto the next declaration. You give it a value, the same thing. It doesn’t throw an error, and it doesn’t stop parsing the CSS and refuse to parse any further.
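That error tolerance is also what makes fallbacks so easy in CSS. A quick illustration (a hypothetical .box rule):

.box {
  display: block; /* every browser understands this */
  display: grid;  /* a browser that doesn’t understand grid skips this declaration and keeps the one above */
}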

CSS, like HTML, is very error tolerant. What I find interesting about CSS lately (and when I say lately, I mean in the last five years or so) is that the biggest changes fall into roughly two categories. First of all, you’ve got pre-processors and post-processors: things like Sass and LESS. Then you’ve also got naming conventions, ways of organising your CSS: OOCSS, BEM, SMACSS. There are a few more.

Who here is using some kind of naming convention like this? Right. Okay.

And who here is using Sass or Less or post-processors? Right. Lots of us.

See, what I find interesting about both of those revolutions in how we do CSS is that in neither case did we have to go to the browser makers or the standards body and lobby them, saying, "Please add this to CSS." With the pre-processors, it all happens on our machines, so we don’t need anything new to be implemented in the browser. And with the naming conventions, well, it all happens in the selector, and nothing new needed to be added to CSS for us to come up with these new ways of naming things and conventions for class names.
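To take BEM as an example, the whole convention lives in the class names and the selectors. Something like this (a hypothetical card component):

<div class="card">
  <h2 class="card__title card__title--featured">Featured article</h2>
</div>

/* block__element--modifier: plain old class selectors, nothing new needed in CSS */
.card { border: 1px solid #ccc; }
.card__title { font-size: 1.5em; }
.card__title--featured { color: crimson; }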

In fact, even though it’s only in the last few years that these things have become popular, in theory there’s no reason why we couldn’t have been doing BEM 15 years ago, right? It’s almost like it was there hiding in plain sight the whole time, staring us in the face in that simple pattern and we just hadn’t realised its potential. I find that fascinating. I want you to remember that because I’m going to come back to this idea that something is just staring us in the face, hiding in plain sight.

Okay, so CSS and HTML can grow over time because they’re error tolerant. And I think that this is an example of what’s known as the robustness principle. This is from Jon Postel:

Be conservative in what you send. Be liberal in what you accept

Mr. DJ, you can use that as a sample.

Postel’s law

Be conservative in what you send. Be liberal in what you accept, because that’s what browsers are doing. They’re being very liberal.

Jon Postel, he worked on the Internet, and he was talking about that packet switching stuff when he came up with this principle. If you are a machine on the Internet and you’re given a packet you’re supposed to shuttle on, and let’s say there are errors in the packet, but you can still understand what you’re supposed to do with it. Well, just shuttle it on anyway even though there are errors. So be tolerant about that kind of stuff. But when you send packets out, try to make them well formed. Be conservative in what you send, but be liberal in what you accept.

Now this might sound like it’s a very technical principle that only applies to things like networking or the creation of formats for computers, but I actually see Postel’s law at work all the time in areas of design in the field of user experience. Let’s say you’ve got a form you’re going to put on the web. Well, the number one rule is try to keep the number of form fields to a minimum. Don’t ask the user to fill in too many form fields. Keep it to a minimum, right? Be conservative in what you send.

Then when the user is filling in those fields, let’s say it’s telephone number or credit card number, don’t make them format the form fields in a certain way. Just deal with it. Be liberal in what you accept.

JavaScript

Now, CSS and HTML, I think, can afford to have this robustness principle and this error tolerant handling built in, partly because of the kind of languages they are. CSS and HTML are both declarative languages. In other words, you don’t give step-by-step instructions to the browser on how to render something when you write CSS or HTML. You’re just declaring what it is. You’re declaring what the content is with HTML. You’re declaring your desired outcome in CSS. And it’s worth remembering every line of CSS you write is a suggestion to the browser, not a command.

They’re declarative languages, so they can kind of afford to be error tolerant. That’s not true when it comes to JavaScript. And I’m talking specifically here about client side JavaScript, JavaScript in a web browser. It’s an imperative language where you do give step-by-step instructions to the computer about what you want to happen. A language like that can’t afford to have loose error handling.

With JavaScript, if you give it something it doesn’t understand, it will throw an error. It will stop parsing the JavaScript at that point and refuse to parse any further in the file. It kind of has to.

If you had an imperative language that was very error tolerant, you would never be able to debug anything. You make a mistake and the browser is like, "Oh, it’s fine. Don’t worry about it." You kind of need to have that, well, frankly, more fragile error handling. It’s the price you pay.

The thing is imperative languages are, by their nature, more powerful because you get to decide a lot more. Declarative languages, like I said, you’re just kind of making suggestions what you’d like to happen. What that means is the declarative languages can afford to be more resilient whereas imperative languages, I think, are inherently more fragile.

I think there are other differences too. In my experience, declarative languages are far easier to learn. The learning curve is pretty shallow, whereas an imperative language has a much steeper learning curve kind of because you’ve got to get your head around all these concepts like variables, loops, and all sorts of stuff before you can even start writing.

What I’ve noticed over time, though, looking at the history of the web, is that when we run up against a problem (the responsive images problem would be one example), we initially start solving it up at the fragile end of the stack. We solve it with scripts. Then, when we’ve got something working well enough, over time it finds its way down into the more resilient parts of the stack, into CSS or into HTML.

If you can remember when we first started writing JavaScript way back in the day, the two most common use cases were rollovers, right? You mouse over an image; it swaps out for a different image. And form fields like, has a required form field been filled in, does this actually look like an email address, stuff like that.

Now, these days you wouldn’t even use JavaScript to do that stuff. To do rollovers, you’d use CSS, because that functionality found its way into the declarative language through pseudo-classes. And if you want to make sure that a required field has been filled in, you can do that in HTML by adding the required attribute. You see this over time: we solve stuff initially in the imperative layer, the fragile part, in JavaScript, and those patterns find their way down into the declarative layers over time.
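For instance (rough illustrations rather than anything from a real project):

/* a rollover, now handled declaratively in CSS */
a:hover img { opacity: 0.5; }

<!-- required fields and email validation, now handled declaratively in HTML -->
<input type="email" name="email" required>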

JavaScript, by its nature, because of its error handling, you kind of have to be a bit more careful in how you use it. It’s just the nature of the beast. What’s interesting is that, again, looking back at the history of the web, there was a moment about ten years ago when we almost had the worst of both worlds, if anyone remembers.

Yeah, PPK knows what I’m talking about: XHTML2. The idea here was, okay, so we already have XHTML1, and all that was was taking the syntax of XML and applying it to HTML because, in HTML, it doesn’t matter whether your tags are upper case or lower case. It doesn’t matter if your attributes are upper case or lower case. It doesn’t matter if you quote your attributes. Whereas in XHTML it has to be all lower case elements, all lower case attributes. Always quote your attributes.

The idea of taking the syntax and applying it to HTML was kind of a nice thing because it made our HTML cleaner and kind of showed a bit of professionalism, right? That was XHTML1. It didn’t fundamentally make any difference to the browsers, whether you used an old version of HTML or used XHTML1. It was all the same.

But the idea with XHTML2 was that, as well as borrowing the syntax from XML, we would also borrow the error handling of XML. Here’s the error handling of XML. If there’s a single mistake in the document, don’t parse the document. Don’t show anything to the end user, so a really draconian error handling.

Now, web developers, designers, authors, us, we took one look at this and we said, "No. That’s insane. Why would we put stuff on the public web where, if there’s one un-encoded ampersand, you’re going to get a yellow screen of death and the user is not going to see anything?" That’s madness, right? We quite rightfully rejected XHTML2 because of its draconian error handling.

Here we are, ten years later, and we’re putting our base content, like text on a screen, into the most fragile layer of the stack. We are JavaScripting all the things. What changed? We decided ten years ago that that kind of draconian error handling was just way too fragile. It wasn’t resilient enough for the public web. But I missed the memo where we decided that if you want to render some text on a screen, you should use an imperative programming language to do it, one where, if you make a single mistake, nothing gets rendered at all. And mistakes do happen.

I remember a couple of years back when the page for downloading Google Chrome, a pretty important page, wasn’t working at all. Nobody in the world could download Google Chrome for a few hours. The reason was this link to download Google Chrome. There was an error in the JavaScript somewhere, probably a completely unrelated error. But this is the way the link had been marked up. In other words, taking that fragile, imperative part of the stack and pushing it down into the more resilient parts of the stack, getting the worst of both worlds.

<a href="javascript:void(0)">
Download Chrome
</a>

Using this JavaScript pseudo protocol means that it’s not actually a link. It’s kind of just a pathway to the fragility of a scripting language. This illustrates another law that in some ways is just as important as Postel’s law, and that is Murphy’s Law:

Anything that can possibly go wrong will go wrong.

Murphy’s law

He was a real person. He was an aerospace engineer. And because he had this attitude, he never lost anybody on his watch. And like Postel’s law, I see Murphy’s Law in action all the time, and particularly when it comes to client side JavaScript because of the way it handles errors.

Stuart Langridge put together a sort of flow chart of all the things that can possibly go wrong with JavaScript, and some of these things are in the browser, and some of them are in the network, and some of them are in the server, things that go wrong. And of course things can go wrong with your HTML, your CSS, and your images too. But because of the error handling of those things, it doesn’t matter as much. With JavaScript, it’s going to stop parsing the entire JavaScript file if you’ve got one single error, or if something goes wrong on the network, or if the browser doesn’t support something that you’ve assumed it supported, right? So it’s inherently more fragile, and we need to embrace that.

We need to accept that shit happens. We need to accept that Murphy’s Law is real. We need to take a pretty resilient approach to how we treat that fragile layer of the stack, the imperative layer.

Could you imagine if car manufacturers who currently spend a lot of time strapping crash test dummies into cars and smashing them against walls at high speed, if they said, "You know what? Actually, we’re not going to strap crash test dummies into our cars and smash them into walls at high speed because we’ve been thinking. Actually, we don’t think crash test dummies are going to drive these cars. We think they’ll be driven by people. Also, we don’t anticipate people are going to drive their cars into the wall at high speed. We think they’ll drive on roads."

Yeah, of course that’s what we hope will happen, but you’ve still got to plan for the worst case scenario. Hope for the best; prepare for the worst. That’s not a bad thing. That’s just good engineering.

Trent Walton wrote about this. He said:

Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability.

The reality of the web’s inherent variability.

We need to face that reality. Stop pretending. Stop assuming that, oh, well, everyone has got JavaScript, or that the JavaScript will be fine. Those are assumptions. We need to push back on those assumptions and accept that there is variability, that Murphy’s Law is real.

Well, this all sounds very depressing, doesn’t it? I mean it sounds like I’ve come here to give you doom and, indeed, gloom. Oh, we’re all doomed. Don’t use JavaScript. Which is not what I’m saying at all. Far from it. I love JavaScript.

No, I think we just need to be a bit more careful about how we deploy it. And I’ve got a solution for you. I want to give you my three step plan for building websites. Here’s how I do it.

  1. Step one: Identify the core functionality of the service, the product you’re building.
  2. Step two: Make that core functionality available using the simplest possible technology.
  3. Step three: Enhance, which is where the fun is, right? You want to spend your time at step three, but take a little time with step one and two.

Identify core functionality

Let’s go through this. Let’s look at the first bit. Identify the core functionality. Let’s say you’re providing the news. Well, there you go. There’s your core functionality: providing the news. That’s it. There’s loads more you can do as well as providing the news. But when you really stop and think about what the core functionality is, it’s just providing the news.

Let’s say you’ve got a social network, a messaging service where people can send and receive messages from all over the world. Well, I would say the ability to send a message, the ability to receive a message, that’s the core functionality. Again, there’s lots more we can do, but that’s a core functionality. You want to make sure that anybody in the world can do that.

If you have a photo sharing app, the clue is in the name: the ability to share photos. I need to be able to see photos. I need to be able to share a photograph.

Let’s say you’ve got some writing tool where you can write, edit, and collaborate on documents. Well, there’s your core functionality right there: the ability to write and edit documents.

Make that functionality available using the simplest technology

Okay. Now that you’ve identified the core functionality, make that functionality available using the simplest technology. By the simplest technology, I mean you probably want to look as far down the stack as you can go.

Going back to the news site, providing the news is the core functionality. Theoretically, the simplest technology to do that would be a plain text file. I’m going to go one level up from that though. I’m going to say an HTML file. We structure that news and we put it out there on the web. That’s it. That’s how we make the core functionality available using the simplest possible technology.

That social networking site, we need to be able to send messages. We need to be able to receive messages. Well, to see messages, probably in reverse chronological order, HTML can do that. To send messages, we can do that too using forms, so a simple form field should cover that. All right, you’ve done the core functionality.
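A minimal sketch of that baseline, with made-up URLs and IDs, might be nothing more than a list and a form:

```html
<!-- Hypothetical baseline: messages in reverse chronological order -->
<ul id="messages">
  <li>The most recent message…</li>
  <li>An older message…</li>
</ul>

<!-- Sending a message is just a form submission; no JavaScript required -->
<form id="message-form" action="/messages" method="post">
  <label for="message">Your message</label>
  <input type="text" id="message" name="message">
  <button type="submit">Send</button>
</form>
```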

For the photo sharing app, very similar. Again, reverse chronological list, but this time we need to have images in there, so our baseline is a little bit higher now. The browser needs to support images. And instead of a form field for accepting text, we’re going to have a form field for accepting an image. As far as I can tell, that’s the simplest possible technology to do this.
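Here’s one possible sketch of that baseline. The action URL is made up, but the file input and the multipart form encoding are just standard HTML:

```html
<!-- Hypothetical baseline: a reverse chronological list of photos -->
<ul>
  <li><img src="/photos/123.jpg" alt="A description of the photo"></li>
</ul>

<!-- Accepting an image instead of text: a file input in a form -->
<form action="/photos" method="post" enctype="multipart/form-data">
  <label for="photo">Choose a photo</label>
  <input type="file" id="photo" name="photo" accept="image/*">
  <button type="submit">Share</button>
</form>
```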

And for this collaborative writing tool, the ability to write and edit documents: a text area in a form. Okay.
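As a sketch, with an invented URL, that baseline is about as simple as HTML gets:

```html
<!-- Hypothetical baseline: writing and editing is a textarea in a form -->
<form action="/documents/42" method="post">
  <label for="body">Document</label>
  <textarea id="body" name="body">The existing text of the document…</textarea>
  <button type="submit">Save</button>
</form>
```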

Enhance!

Now if you were to stop at this point, what you have would work, but it would be kind of shitty. Okay? The fun happens at step three where you get to enhance. You take your baseline and you enhance up. This is where you get to differentiate. This is where you stand out from the competition. This is where you get to play with the cool toys where you get to make something much nicer.

With something like providing the news, well, provide layout on larger screens. There’s the enhancement right there. Now it might be odd to think about layout as an enhancement, but if you think about responsive design and, particularly, mobile-first responsive design, that’s exactly what layout is. You begin with the content and then, in your media queries, you add the layout as an enhancement.
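A minimal sketch of that mobile-first approach might look like this (the class names are made up for illustration): the single-column content comes first, and the layout only arrives inside a media query, so browsers that never apply it still get everything.

```html
<style>
  /* The default styles serve everyone: a single column of content */
  article {
    max-width: 40em;
  }

  /* Layout is layered on as an enhancement for larger screens */
  @media (min-width: 60em) {
    .articles {
      display: flex;
      flex-wrap: wrap;
    }
    .articles article {
      flex: 1 1 25em;
    }
  }
</style>
```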

You want it to look beautiful, so we can use web fonts to do that, right? I would love to think that beautiful typography is inherent to the content, but we have to accept the reality that it’s an enhancement. That’s not to belittle it. Don’t think when I say, "Oh, this is an enhancement," that I’m saying this is just an enhancement. The enhancements are where the differentiation lies, where things really shine.
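The same layering applies to web fonts. Here’s a sketch, with a made-up font name and file path: the system fonts in the stack are the baseline, and the web font is an enhancement that applies when, and if, it loads.

```html
<style>
  /* "Lovely Serif" and its URL are hypothetical */
  @font-face {
    font-family: "Lovely Serif";
    src: url("/fonts/lovely-serif.woff2") format("woff2");
    font-display: swap; /* show the fallback fonts while the web font loads */
  }

  body {
    /* If the web font never arrives, Georgia or a generic serif still works */
    font-family: "Lovely Serif", Georgia, serif;
  }
</style>
```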

In the case of our social networking messaging service, it’s sending and receiving messages. It’s full page refreshes. It’s really dull. It’s really boring. We’re going to bring in some ajax so that we don’t need to refresh the page all the time to see the latest messages, and we could even make it work the other way, right? We can use websockets so that, for sending and receiving, we never need to refresh the page again. We get those messages arriving all the time.

Now, not every browser is going to support websockets. That’s okay because the core functionality is still available to everyone. The experience will be different. It’ll be worse in older browsers. But they can still accomplish something. That’s the key part.
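Here’s a rough sketch of that layering in code, building on the baseline form sketched earlier; the websocket URL and element IDs are invented. The important part is the feature detection: the form keeps working on its own, ajax is layered on where fetch exists, and websockets are layered on top of that where they’re supported.

```html
<script>
  // The form and list from the baseline sketch; they work without any of this
  var form = document.getElementById('message-form');
  var list = document.getElementById('messages');

  // Enhancement one: send messages with ajax instead of a full page refresh
  if (form && 'fetch' in window) {
    form.addEventListener('submit', function (event) {
      event.preventDefault();
      fetch(form.action, { method: 'POST', body: new FormData(form) });
      form.reset();
    });
  }

  // Enhancement two: receive new messages over a websocket, where supported
  if (list && 'WebSocket' in window) {
    var socket = new WebSocket('wss://example.com/messages');
    socket.addEventListener('message', function (event) {
      var item = document.createElement('li');
      item.textContent = event.data;
      list.insertBefore(item, list.firstChild);
    });
  }
</script>
```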

In the case of our photo sharing app, all the things we said before, right? We’re going to have layout. We’re going to have web fonts. We’re going to have ajax. We’re going to have websockets. And let’s add even more stuff, newer stuff: the File API. The moment that file is in the browser, before we even send it to the server, we can start playing around with it. We can do things like CSS filters. Put sepia tones on those images. Let the user play with that.
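As a sketch of that kind of enhancement (the element IDs are invented): if the browser supports the File API and object URLs, show a preview of the chosen photo and apply a sepia filter on the client, before anything has been uploaded.

```html
<script>
  var input = document.getElementById('photo');
  var preview = document.getElementById('preview'); // a hypothetical <img> element

  // Only enhance where the File API and object URLs are available
  if (input && preview && window.URL && window.URL.createObjectURL) {
    input.addEventListener('change', function () {
      var file = input.files && input.files[0];
      if (!file) { return; }
      // Preview the chosen photo before it's ever sent to the server
      preview.src = URL.createObjectURL(file);
      // Play with CSS filters on the client: a sepia tone, for example
      preview.style.filter = 'sepia(1)';
    });
  }
</script>
```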

Again, not every browser supports this stuff. That’s okay. The core functionality is there. You’re layering the stuff on.

In the case of this collaborative writing tool, all the stuff I mentioned before. You definitely want to have ajax in there. You definitely want to have all that other good stuff, websockets. But let’s make sure it’s resilient to network failures. Let’s start storing stuff in the browser itself. We’ve got all different kinds of local storage these days; I can’t even keep up with the many databases we have in a browser. Local storage, and making it work offline. That brings us to the technology I’m probably most excited about right now: service workers.
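Before we get to service workers, here’s one simple sketch of that local storage resilience; the element ID and storage key are invented. Keep a copy of the draft in the browser so that a network failure doesn’t swallow someone’s writing.

```html
<script>
  var textarea = document.getElementById('body');
  var key = 'draft-42'; // a hypothetical key for this particular document

  // Only enhance if local storage is actually available
  if (textarea && 'localStorage' in window) {
    // Restore any draft saved before a network failure or a crash
    if (localStorage.getItem(key)) {
      textarea.value = localStorage.getItem(key);
    }
    // Keep saving as the user types
    textarea.addEventListener('input', function () {
      localStorage.setItem(key, textarea.value);
    });
  }
</script>
```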

Service workers are very, very exciting. I mean properly game-changing stuff. And you know when I was talking about those patterns earlier, like canvas, like image, and the way they’ve been designed with backwards compatibility in mind? Service workers have been designed to be an enhancement in just the same way. You can’t make a service worker a requirement for a website. You have to add it as an enhancement, because the first time someone hits your website, there is no service worker. So that, again, is a design decision, and it encourages the adoption of technologies like service workers. It’s a very clever move.
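That design decision shows up in the code itself. Registering a service worker is a one-line enhancement behind a feature test; browsers without support just carry on using the network as normal. (The /sw.js path is only an example.)

```html
<script>
  // Service workers are an enhancement: feature-detect, then register
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js');
  }
  // Browsers without support ignore all of this and still get the site
</script>
```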

Scale

That’s how you make websites, that three-step process. And what I like about this three-step process is that it’s scale-free, which means it works at different levels. I’ve just been talking about the level of the whole service, the product or the service you’re building. But you could apply this at different scales. You could apply it at the scale of a URL. You could ask: What is the core functionality of this URL? How do I make that functionality available using the simplest possible technology? And how can I then enhance it?

You can go deeper, to the level of a component within a page, and say, okay, what’s the simplest way of making this component work, and then how do I enhance it from there? The Filament Group talked about this with the example of providing an address. Well, the simplest way is some text with the address in it. But then you could add an image with a map on it. Then you could add a slippy map for more capable browsers. Then you could add animation, all sorts of good stuff. You can layer this stuff on.
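A sketch of that layered component might look like this; the map image, the element ID, and the initSlippyMap function are all hypothetical stand-ins. The address is plain text, the static map image is the first enhancement, and an interactive map is only layered on where the script support for it is there.

```html
<!-- Baseline: the address as plain text -->
<div id="location">
  <p>123 Example Street, Brighton</p>
  <!-- First enhancement: a static map image -->
  <img src="/maps/123-example-street.png" alt="Map showing 123 Example Street">
</div>

<script>
  // Further enhancement: swap in an interactive (slippy) map where supported.
  // initSlippyMap stands in for whatever mapping library you choose to load.
  if (document.querySelector && window.initSlippyMap) {
    window.initSlippyMap(document.getElementById('location'));
  }
</script>
```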

My point here is that there isn’t a dichotomy between either having the basic functionality, which is available to everyone, which is quite boring, or a rich, immersive experience with all the cool APIs and the new stuff. I’m saying you can have both. By taking this layered approach, you can have both.

Now there’s a myth with this, the idea that, yeah, but this means I’m going to spend all my time in older browsers if I’m concentrating on backwards compatibility. No. Far from it. As long as you spend time making steps one and two work, I find I spend all of my time in step three because I know exactly what’s going to happen in older browsers. They’re going to get the basic core functionality, and I get to play around with the new stuff, the new toys, the new APIs, kind of with a clear conscience. It’s kind of the safest way of playing with stuff even when it’s only supported in one or maybe two browsers. You’re going to spend more time in newer browsers if you do this.

This is too easy

But I do get pushback on this, and the pushback falls into sort of two categories. One is that this is too easy. Or rather, it’s too simplistic. It’s naïve. It’s like, "Well, what you’re talking about, that will work for a simple blog or personal site, but it couldn’t possibly scale to the really complicated app, the really complicated corporate site that I’m building."

What’s interesting is that I heard that argument before when we were trying to convince people to switch from using tables for layout and font tags to using CSS. I remember people saying, "Yes. Those examples you’ve shown, it’s all well and good for a simple, little blog or a personal site, but it could never scale to a big, corporate site." Then we got Wired.com. We got ESPN.com, and the floodgates opened.

When responsive design came along, Ethan got exactly the same thing. It’s like, "Well, Ethan, that’s all well and good for your own little website, this responsive design stuff, but it couldn’t possibly scale to a big, corporate site." Then we had the Boston Globe. We had Microsoft.com. And the floodgates opened again.

This is too hard

But the other pushback I get is that this is too hard, it’s too difficult. And I have some sympathy for this, because people look at this three-step process and they’re like, "Wait. Wait. Wait a minute. You’re saying I spend my time making this stuff work in the old-fashioned client-server model, and then at step three, when I start adding in my JavaScript, I’m just going to recreate the whole thing again, right?" Not quite.

I think there could possibly be some duplicated work. But remember: you’re just making sure that the core functionality is available to everyone. All the other functionality you add after that doesn’t need to be available to everyone.

Again, talking about the Boston Globe: I remember Mat Marquis saying there’s a whole bunch of features on the Boston Globe that require JavaScript to work. Reading the news is not one of them.

But I think this could be harder at first. If you’re not used to working this way, it’s fair enough to say, yeah, this is hard. But again, that was true when we moved from tables for layout to CSS. It was harder. At least the first time we tried it, it was harder. The second time it got easier. The third time, easier still. Now it’s just a default, and I couldn’t make a website with tables for layout if I tried.

And if you’d been making fixed-width websites for years, then yeah, the first time you tried to make a responsive website it was really painful. The second time was probably still painful, but not as painful. And by the third time it gets easier, and now it’s just the default way you build websites. So it’s the same here. You’ve just got to get used to it.

But I still find people push back. They’re like, "Uh, this is too hard. This doesn’t work with the tools I’m using." I hate that argument because the tools are supposed to be there to support you.

The tools are supposed to be there to make you work better. That’s why you choose a library. That’s why you choose a framework. You don’t let a framework or library dictate how you approach building a website. That’s the tail wagging the dog.

Yet, I see again and again that people choose developer convenience over user needs. Now, I don’t want to belittle developer convenience. Developer convenience is hugely important, but not at the expense of user needs. There has to be a balance here.

I’ve said it often, but if I’m given the option, if there’s a problem and I have the choice of making it the user’s problem or making it my problem, I’ll make it my problem every time. You know why? That’s my job. That’s why it’s called work. Okay? Sometimes it is harder.

Everything is amazing and nobody’s happy

We’ve seen this over and over again that we’re constantly complaining about what we can’t do. It’s like, "Ugh! We’re not there yet. The web — the web kind of sucks when you compare it to Flash," or, "the web kind of sucks when you compare it to Native." It goes back a long way, right?

I remember when we were like, "Ugh! The web sucks because I’ve only got 216 colors to play with." True story - 216 colors. It’s all we had, right?

Or, "The web sucks because I’ve only got these system fonts to work with with typography." Or, "Ugh! Everything will be so much better if people would just upgrade from Netscape 4. If people would just upgrade from Internet Explorer 6, and everything will be fine. If only people would upgrade from Windows XP. If those Android 2.0 users would just upgrade, then everything would be fine, right?"

It’s like this keeps happening over and over again. We’re never happy.

My friend Frank has a wonderful essay he wrote a few years back. It’s called There is a Horse in the Apple Store.

Wherein he describes the situation. A true story. It really happened. There was a horse in the Apple store, and he describes what it’s like to see a horse in the Apple store, but he also describes the reaction, or complete lack thereof, of all the people in the Apple store. It’s like, don’t they see the tiny horse in the Apple store? It’s right in front of their faces, but they just don’t see it. And I think we’ve kind of let that happen with the web.

Frank calls things like this tiny ponies: when something is amazing but it’s right in front of you and you don’t see it, it’s a tiny pony. And I think the World Wide Web is a tiny pony. It’s amazing, and yet we’re like, “Ugh, I can’t get 60 frames per second. Ugh.” Right?

It’s incredible. The web is incredible. You know why it’s incredible? It’s not because of HTTP. And it’s not even because of HTML, much as I love it. The web is incredible because of URLs.

There are plenty of other formats and plenty of other protocols on the Internet for sending and receiving messages for keeping people in touch. Some of them are better than the web at that stuff, but only the web has URLs. Only the web allows you to put something online and keep it there over time so that people can access it throughout history. That’s amazing.

Also, you build an application. You build something that people can use. You can put it on the web just by putting it at a URL. You don’t need to ask anyone’s permission. There’s no app store. There’s no gatekeeper. URLs are the beating heart of the World Wide Web.

And the fact that we can build up the store of knowledge is amazing. We can extend the reach of our networks for future generations. We can extend the reach of the collective knowledge of our species. We need to be good ancestors, and we need to leave behind a web that lasts, a web that’s resilient. Thank you.