Tags: enhancement


Thursday, April 20th, 2017

The work I like. — Ethan Marcotte

Ethan’s been thinking about the trends he’s noticed in the work he’s doing:

  • prototypes over mockups,
  • preserving patterns at scale, and
  • thinking about a design’s layers.

On that last point…

The web’s evolution has never been charted along a straight line: it’s simultaneously getting slower and faster, with devices new and old coming online every day.

That’s why I’ve realized just how much I like designing in layers. I love looking at the design of a page, a pattern, whatever, and thinking about how it’ll change if, say, fonts aren’t available, or JavaScript doesn’t work, or if someone doesn’t see the design as you and I might, and is having the page read aloud to them.

Thursday, March 30th, 2017

The Analog Web | Jim Nielsen’s Blog

This is a wonderful meditation on the history of older technologies that degrade in varied conditions versus newer formats that fall off a “digital cliff”, all tied in to working on the web.

When digital TV fails, it fails completely. Analog TV, to use parlance of the web, degrades gracefully. The web could be similar, if we choose to make it so. It could be “the analog” web in contrast to “the digital” platforms. Perhaps in our hurry to replicate and mirror native platforms, we’re forgetting the killer strength of the web: universal accessibility.

Monday, March 20th, 2017

World Wide Web, Not Wealthy Western Web (Part 2) – Smashing Magazine

The second part of Bruce’s excellent series begins by focusing on the usage of proxy browsers around the world:

Therefore, to make websites work in Opera Mini’s extreme mode, treat JavaScript as an enhancement, and ensure that your core functionality works without it. Of course, it will probably be clunkier without scripts, but if your website works and your competitors’ don’t work for Opera Mini’s quarter of a billion users, you’ll get the business.

But how!? Well, Bruce has the answer:

The best way to ensure that everyone gets your content is to write real, semantic HTML, to style it with CSS and ensure sensible fallbacks for CSS gradients, to use SVG for icons, and to treat JavaScript as an enhancement, ensuring that core functionality works without scripts. Package up your website with a manifest file and associated icons, add a service worker, and you’ll have a progressive web app in conforming browsers and a normal website everywhere else.

I call this amazing new technique “progressive enhancement.”

You heard it here first, folks!
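As a rough sketch of what that recipe could look like in practice (an illustrative skeleton, not Bruce’s actual code; the file names manifest.json and sw.js are hypothetical):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>A progressively enhanced page</title>
  <!-- Hypothetical manifest file name -->
  <link rel="manifest" href="/manifest.json">
  <style>
    /* Sensible fallback for the gradient: browsers that don't
       understand linear-gradient keep the solid colour */
    header {
      background-color: #663399;
      background-image: linear-gradient(#663399, #221133);
    }
  </style>
</head>
<body>
  <header>
    <!-- SVG for icons -->
    <svg width="16" height="16" aria-hidden="true">
      <circle cx="8" cy="8" r="8" />
    </svg>
    <h1>Real, semantic HTML</h1>
  </header>
  <script>
    // JavaScript as an enhancement: feature-detect first, so
    // browsers without service worker support still get a
    // normal website
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }
  </script>
</body>
</html>
```

Browsers that understand the manifest and service worker get a progressive web app; everyone else gets a normal website.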

Friday, March 17th, 2017

A Little Surprise Is Waiting For You Here — Meet The Next Smashing Magazine

An open beta of Smashing Magazine’s redesign, which looks like it could be a real poster child for progressive enhancement:

We do our best to ensure that content is accessible and enhanced progressively, with performance in mind. If JavaScript isn’t available or if the network is slow, then we deliver content via static fallbacks (for example, by linking directly to Google search), as well as a service worker that persistently stores CSS, JavaScripts, SVGs, font files and other assets in its cache.

Saturday, February 25th, 2017

CSS and progressive enhancement | justmarkup

A nice look at the fallbacks that are built into CSS.

Friday, February 17th, 2017

Teaching in Porto, day four

Day one covered HTML (amongst other things), day two covered CSS, and day three covered JavaScript. Each one of those days involved a certain amount of hands-on coding, with the students getting their hands dirty with angle brackets, curly braces, and semi-colons.

Day four was a deliberate step away from all that. No more laptops, just paper. Whereas the previous days had focused on collaboratively working on a single document, today I wanted everyone to work on a separate site.

The sites were generated randomly. I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

For a bit of fun, the first brainstorming exercise (run as a 6-up) was to come up with potential names for this service—4 minutes for 6 ideas. Then we went around the table, shared the ideas, got feedback, and settled on the names.

Now I asked everyone to come up with a one-sentence mission statement for their newly-named service. This was a good way of teasing out the most important verbs and nouns, which led nicely into the next task: answering the question “what is the core functionality?”

If that sounds familiar, it’s because it’s the first part of the three-step process I outlined in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!
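Sketched in the simplest possible technology, those three steps might look something like this (the URL, field names, and the particular enhancement are all made up for illustration):

```html
<!-- Steps 1 and 2: the core functionality (finding a thing),
     made available with the simplest possible technology: a
     form that makes a regular GET request to the server -->
<form action="/things" method="get">
  <label for="q">Find a thing</label>
  <input id="q" name="q" type="search">
  <button type="submit">Search</button>
</form>

<!-- Step 3: enhance. If JavaScript runs and fetch is
     supported, results could be fetched and shown in place;
     if not, the form above still works. -->
<script>
  var form = document.querySelector('form');
  if (form && 'fetch' in window) {
    form.addEventListener('submit', function (event) {
      event.preventDefault();
      fetch('/things?q=' + encodeURIComponent(form.elements.q.value))
        .then(function (response) { return response.text(); })
        .then(function (html) { /* update the page in place */ });
    });
  }
</script>
```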

We did some URL design, figuring out what structures would make sense for straightforward GET requests, like:

  • /things
  • /things/ID
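Those two URL structures are simple enough to route with a couple of patterns. A sketch (the function name and the returned shape are made up for illustration):

```javascript
// Map a requested path onto the two URL structures above:
//   /things     -> the list of things
//   /things/ID  -> one particular thing
function route(path) {
  if (/^\/things\/?$/.test(path)) {
    return { view: 'list' };
  }
  var match = path.match(/^\/things\/([^\/]+)\/?$/);
  if (match) {
    return { view: 'item', id: match[1] };
  }
  return { view: 'not-found' };
}
```

Because these are plain GET requests, the same routes work for a server-rendered site and for any client-side enhancement layered on top.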

Then, once it was clear what the primary “thing” was (a car, a book, etc.), I asked them to write down all the pieces that might appear on such a page; one post-it note per item, e.g. “title”, “description”, “img”, “rating”, etc.

The next step involved prioritisation. They took those post-it notes and put them on the wall, but they had to put them in a vertical line from top to bottom in decreasing order of importance. This can be a challenge, but it’s better to solve these problems now rather than later.

Okay. I now asked them to “mark up” those vertical lists of post-it notes: writing HTML tag names by each one. Doing this before any visual design meant they were thinking about the meaning of the content first.

After that, we did a good ol’ fashioned classic 6-up sketching exercise, followed by critique (including a “designated dissenter” for each round). At this point, I was encouraging them to go crazy with ideas—they already had the core functionality figured out (with plain ol’ client/server requests and responses) so they could add all the bells and whistles they wanted on top of that.

We finished up with a discussion of some of those bells and whistles, and how they could be used to improve the user experience: Ajax, geolocation, service workers, notifications, background sync …the sky’s the limit.

It was a whirlwind tour for just one day but I think it helped emphasise the importance of thinking about the fundamentals before adding enhancements.

This marked the end of the structured masterclass lessons. Tomorrow I’m around to answer any miscellaneous questions (if I can) and chat to the students individually while they work on their term projects.

Friday, February 3rd, 2017

Isomorphic rendering on the JAM Stack

Phil describes the process of implementing the holy grail of web architecture (which perhaps isn’t as difficult as everyone seems to think it is):

I have been experimenting with something that seemed obvious to me for a while. A web development model which gives a pre-rendered, ready-to-consume, straight-into-the-eyeballs web page at every URL of a site. One which, once loaded, then behaves like a client-side, single page app.

Now that’s resilient web design!

Friday, January 27th, 2017

A practical guide to Progressive Web Apps for organisations who don’t know anything about Progressive Web Apps : Records Sound the Same

Sally gives a really good introduction to using service workers as a progressive enhancement.

Sunday, January 15th, 2017

Modernizing our Progressive Enhancement Delivery | Filament Group, Inc., Boston, MA

Scott runs through the latest improvements to the Filament Group website. There’s a lot about HTTP2, but also a dab of service workers (using a similar recipe to my site).

Browser Support for evergreen websites

Oh, how I wished everyone approached building for the web the way that Rachel does. Smart, sensible, pragmatic, and exciting!

Sunday, January 1st, 2017

In Praise of On Resilient Web Design by Jeremy Keith

I’m really touched—and honoured—that my book could have this effect.

It made me fall back in love with the web and with making things for the web.

Friday, December 16th, 2016

The bold beauty of content prototypes — Thomas Byttebier

Designing content-first:

Everything that happens to the content prototype from now on is merely progressive enhancement. Because while the prototype is in a shared git repository, microcopy sneaks in, text gets corrected by a copywriter, photos change for the better and flows shape up, meta data is added, semantics are double checked, WAI-ARIA roles get in…

Wednesday, December 14th, 2016

Progressive enhancement and team memberships

A really nice pattern, similar to one I wrote about a little while back. There’s also this little gem of an observation:

Progressive enhancement is also well-suited to Agile, as you can start with the core functionality and then iterate.

Thursday, December 8th, 2016

Server Side React

Remy wants to be able to apply progressive enhancement to React: server-side and client-side rendering, sharing the same codebase. He succeeded, but…

In my opinion, an individual or a team starting out, or without the will, aren’t going to use progressive enhancement in their approach to applying server side rendering to a React app. I don’t believe this is by choice, I think it’s simply because React lends itself so strongly to client-side, that I can see how it’s easy to oversee how you could start on the server and progressive enhance upwards to a rich client side experience.

I’m hopeful that future iterations of React will make this a smoother option.

Wednesday, December 7th, 2016

Is JavaScript more fragile? – Baldur Bjarnason

Progressive enhancement’s core value proposition, for me, is that HTML and CSS have features that are powerful in their own right. Using HTML, CSS, and JavaScript together makes for more reliable products than just using JavaScript alone in a single-page app.

This philosophy doesn’t apply to every website out there, but it sure as hell applies to a lot of them.

Wednesday, November 16th, 2016

Resilience retires

I spoke at the GOTO conference in Berlin this week. It was the final outing of a talk I’ve been giving for about a year now called Resilience.

Looking back over my speaking engagements, I reckon I must have given this talk—in one form or another—about sixteen times. If by some statistical fluke or through skilled avoidance strategies you managed not to see the talk, you can still have it rammed down your throat by reading a transcript of the presentation.

That particular outing is from Beyond Tellerrand earlier this year in Düsseldorf. That’s one of the events that recorded a video of the talk. Here are all the videos of it I could find:

Or, if you prefer, here’s an audio file. And here are the slides but they won’t make much sense by themselves.

Resilience is a mixture of history lesson and design strategy. The history lesson is about the origins of the internet and the World Wide Web. The design strategy is a three-pronged approach:

  1. Identify core functionality.
  2. Make that functionality available using the simplest technology.
  3. Enhance!

And if you like that tweet-sized strategy, you can get it on a poster. Oh, and check this out: Belgian student Sébastian Seghers published a school project on the talk.

Now, you might be thinking that the three-headed strategy sounds an awful lot like progressive enhancement, and you’d be right. I think every talk I’ve ever given has been about progressive enhancement to some degree. But with this presentation I set myself a challenge: to talk about progressive enhancement without ever using the phrase “progressive enhancement”. This is something I wrote about last year—if the term “progressive enhancement” is commonly misunderstood by the very people who would benefit from hearing this message, maybe it’s best to not mention that term and talk about the benefits of progressive enhancement instead: robustness, resilience, and technical credit. I think that little semantic experiment was pretty successful.

While the time has definitely come to retire the presentation, I’m pretty pleased with it, and I feel like it got better with time as I adjusted the material. The most common format for the talk was 40 to 45 minutes long, but there was an extended hour-long “director’s cut” that only appeared at An Event Apart. That included an entire subplot about Arthur C. Clarke and the invention of the telegraph (I’m still pretty pleased with the segue I found to weave those particular threads together).

Anyway, with the Resilience talk behind me, my mind is now occupied with the sequel: Evaluating Technology. I recently shared my research material for this one and, as you may have gathered, it takes me a loooong time to put a presentation like this together (which, by the same token, is one of the reasons why I end up giving the same talk multiple times within a year).

This new talk had its debut at An Event Apart in San Francisco two weeks ago. Jeffrey wrote about it and I’m happy to say he liked it. This bodes well—I’m already booked in for An Event Apart Seattle in April. I’ll also be giving an abridged version of this new talk at next year’s Render conference.

But that’s it for my speaking schedule for now. 2016 is all done and dusted, and 2017 is looking wide open. I hope I’ll get some more opportunities to refine and adjust the Evaluating Technology talk at some more events. If you’re a conference organiser and it sounds like something you’d be interested in, get in touch.

In the meantime, it’s time for me to pack away the Resilience talk, and wheel it down into the archives, just like the closing scene of Raiders Of The Lost Ark. The music swells. The credits roll. The image fades to black.

Monday, November 14th, 2016

SmashingConf Barcelona 2016 - Jeremy Keith on Resilience on Vimeo

Here’s the video of the talk I gave at Smashing Conference in Barcelona last month—one of its last outings.

Tuesday, November 8th, 2016

Resilience

A presentation from the Beyond Tellerrand conference held in Düsseldorf in May 2016. I also presented a version of this talk at An Event Apart, Smashing Conference, Render, and From The Front.

Thank you very much, Marc. And not just for inviting me back to speak again this year: as well as organising this conference, Marc also helped organise IndieWebCamp over the weekend, which was fantastic, so thank you for that, Marc. I think some of the sipgate people are back there where we had IndieWebCamp, and I want to thank them again. They did a fantastic job. So thank you, Marc and sipgate.

Yeah, as Marc said, I get the job of opening up day two. This is known as the hangover slot, right? But I’ll see what I can do. I tell you what. I’ll open up day two of Beyond Tellerrand with a story or, rather, a creation myth.

lo

You’ve probably heard that the Internet was created to withstand a nuclear attack, right? A network that would be resilient enough to withstand a nuclear attack. That’s actually not quite true. What is true is that Paul Baran, who was at the RAND Corporation, was looking into what is the most resilient shape of a network. And amongst his findings, one of the things he discovered was that by splitting up your information into discrete packets, it made for a more resilient network. This is where the idea of packet switching comes from: you take the entire message, chop it up into little packets, and then you ship those packets around the network by whatever route happens to be best and then reassemble them at the other end.

Now this idea of packet switching that Paul Baran was coming up with came across the radar of Leonard Kleinrock, who was working on the ARPANET from ARPA, the Advanced Research Projects Agency. This is the idea of linking up networks, effectively computer networks. Now this is really, really early days here. It was in 1969 that the very first message was sent on the ARPANET, and it was simply an instruction to log in from one machine to another machine, but it crashed after two characters. So that was the first message ever sent on the ARPANET, which was the precursor to the Internet.

So they kept working on it, right? They ironed out the bugs, and this network of networks grew and grew over time throughout the ’70s. But the point at which it really morphed into being the Internet was when they had to tackle the problem of making sure that all these different networks that were speaking different languages, using different programs, could all be understood by one another. There needed to be some kind of low-level protocol that this inter-network could use to make sure that these packets were all being shuffled around in an understandable way. And that’s where these two gentlemen come in, Bob Kahn and Vint Cerf, because they created TCP/IP, the Transmission Control Protocol and the Internet Protocol.

Now what’s interesting is that Bob Kahn and Vint Cerf, back then, weren’t concerned about making the network resilient to nuclear attack. They were young, idealistic men, and what they were concerned about was making a network that was resilient to any kind of top-down control, so that was kind of baked into the design of these protocols that the network would have no centre. The network has no single decision point. You don’t have to ask to add a node to the network. You can just do it.

I think that’s really the secret sauce of the Internet: the fact that it is, by design, a dumb network, right? What I mean by that is that the network doesn’t care at all about the contents of those packets that are being switched around and moved around. It just cares about getting those packets to their final destination, and no particular kind of information is prioritised over any other kind. This turns out to be really, really powerful.

The whole idea is that TCP/IP is as simple as possible. In fact, they used to even say that theoretically you could implement TCP/IP using two tin cans and a piece of string. It’s very, very low level.

What you can then do on top of this low level, dumb, simple protocol is add more protocols, more complex protocols. And you could just go ahead and create these extra protocols. You can create protocols for sending and receiving email, Telnet, File Transfer Protocol, Gopher, all sitting on top of TCP/IP.

Again, if you want to create a new protocol, you can just do it. You don’t have to ask for anyone’s permission. You just create the new protocol.

The tricky thing is getting people to use your protocol because then you really start to run into Metcalfe’s law:

The value of a network is proportional to the square of the number of users of the network

…which basically means the more people who use a network, the more powerful it is. The very first person who had a fax machine had a completely useless thing. But as soon as one other person had a fax machine, it was twice as powerful, and so on. You have to convince people to use the protocol you’ve just created that sits on top of TCP/IP.
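Metcalfe’s law can be sketched with a back-of-the-envelope calculation, taking the value of a network as simply the square of the number of users and ignoring the constant of proportionality:

```javascript
// Metcalfe's law, illustratively: the value of a network is
// proportional to the square of the number of its users
function networkValue(users) {
  return users * users;
}

// One fax machine is useless on its own; every extra machine
// makes the whole network disproportionately more valuable
var growth = [1, 2, 10, 100].map(function (n) {
  return n + ' users: value ' + networkValue(n);
});
console.log(growth.join('\n'));
```

The point is the shape of the curve: value grows much faster than the number of users, which is why adoption is everything for a new protocol.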

Vague but exciting…

And that was the situation with a protocol that was invented called Hypertext Transfer Protocol. It was just one part of a three-part stack of a project called World Wide Web. Hypertext Transfer Protocol is the protocol for sending and receiving information, URLs for addressability, and a very, very simple format, HTML, for putting links together, basically - very, very simple. These three pieces form this World Wide Web project that was created by Tim Berners-Lee when he was working at CERN. What I kind of love is that at this point it only exists on his computer and he still called it the World Wide Web. He was pretty confident.

All these different influences go into the creation of the web, and I think part of it is where the web was created because it was created here at CERN, which is just the most amazing place if you ever get the chance to go. It’s unbelievable what human beings are doing there, right?

I mean recreating the conditions of the start of the universe, smashing particles together near the speed of light in this 27-kilometre ring under the border of France and Switzerland. Mind-blowing stuff. And of course there’s lots and lots of data being generated. There’s so much logistical overhead involved in just getting this started and building this machine and doing the experiments, so managing the information is quite a problem, as you can probably imagine.

This is the problem that Tim Berners-Lee was trying to tackle while he was there. He was a computer scientist at CERN. And he had this idea that hypertext could be a really powerful way of managing information, and this wasn’t his first time trying to create some kind of hypertext system.

In the ’80s, he had tried to create a hypertext system called Enquire. It was named after this Victorian book on manners called Enquire Within Upon Everything, which I always thought would be a great name for the World Wide Web: Enquire Within Upon Everything.

There are these different influences feeding in. There’s this previous work with Enquire. There’s the architecture of the Internet itself that he’s going to put this other protocol on top of. There’s the culture at CERN where it isn’t business driven. It’s for pure scientific research, right? All of these things are feeding in and influencing Tim Berners-Lee.

He puts a proposal together, and it doesn’t have the sexiest title, right? He just called it Information Management: A Proposal. But his boss at CERN, Mike Sendall, he must have seen something in this because he gave Tim Berners-Lee the green light by scrawling across the top, "Vague but exciting…" and this is how the web came to be made: vague but exciting…

Right from the start, Tim Berners-Lee understood that the trick wasn’t creating the best protocol or the best format. The trick was getting people to use it, right? To accomplish that, I think he had a very keen insight. Just like TCP/IP, he understood it needed to be as simple as possible, just like that apocryphal Einstein quote that everything should be as simple as possible, but no simpler. That’s probably going to help you to encourage people to use what you’re building if it’s just as simple as it could be, but still powerful.

Looking at those building blocks—the protocol, the addressability, the format—I think that’s true of all of them. These are all flawed in some way. None of these are perfect - far from it. They all have issues. We’ve been fixing the issues for years. But they’re all good enough and all simple enough that the World Wide Web was able to take off in the way it did.

The trick… is to make sure that each limited mechanical part of the web, each application, is within itself composed of simple parts that will never get too powerful.

HTML

Just looking at one piece of this, let’s just look at HTML. It’s a very simple format. To begin with, there was no official version of HTML. It was just something Tim Berners-Lee threw together. There was a document called HTML Tags, presumably written by Tim Berners-Lee, that outlined the entirety of HTML, which was a total of 21 elements. That was it, 21 elements. Even those 21 elements, Tim Berners-Lee didn’t invent. He didn’t create most of them. Most of them he stole, he borrowed from an existing format. See, the people at CERN were already using a markup language called CERN SGML, Standard Generalised Markup Language. And so by taking what they were already familiar with and putting that into HTML, it was more likely that people would use HTML.

Now what I find amazing is that we’ve gone from having 21 elements in HTML tags, that first document, to having 100 more elements now, and yet it’s still the same language. I find that amazing. It’s still the same language that was created 25 years ago. It’s grown an extra 100 elements in there, and yet it’s still the same language.

If you’re familiar at all with computer formats, this is very surprising. If you tried to open a Word processing document from the same time as when Tim Berners-Lee was creating the World Wide Web project, good luck. You’d probably have to run some emulation just to get the thing open. And yet you could open an HTML document from back then in a browser today.

How is it possible that this one language can grow over 25 years, gaining a hundred new elements, and still be the same language? Well, I think it comes down to a design decision with how HTML is handled by browsers, by parsers. Okay, we’re going to get very basic here, but think for a minute about what happens when a browser sees an HTML element. You’ve got an opening tag. You’ve got a closing tag. You’ve got some content in between. Maybe there’ll be some attributes on the opening tag. This is basically an HTML element. What a browser does is it displays the content in between the opening and closing tags.

<div>
show me
</div>

Now, for some elements it will do extra stuff. Some elements have extra goodness. Maybe it’s styling. Maybe it’s behaviour. The A element is very special and so on. But, by default, an HTML element just displays the content between the opening and closing tags. Okay. You all know this.

What’s interesting is what happens if you give a browser an HTML element that doesn’t exist. It’s not in HTML. The browser doesn’t recognise it. Still got an opening tag. Still got a closing tag. Still got content in between. Well, what the browser does is it still shows that content in between the opening and closing tags. Okay, you all know this too.

<foo>
show me
</foo>

See what’s interesting is what the browser does not do. The browser does not throw an error to the user. The browser does not stop parsing the document at this point and refuse to parse any further. It just skips over what it doesn’t understand, shows that content, and carries on to the next element.

Well, this turns out to be enormously powerful. This is how we get to have 100 new elements since the birth of HTML because, as we add new elements into the language, we know exactly how older browsers will behave when they see these new elements. They’ll just ignore the tags they don’t understand and display the content. That’s how we can add to the language.

<main>
show me
</main>

In fact, we can make use of this design decision for some more complex elements. Let’s take canvas. If we know that an older browser will display the content between tags for elements it doesn’t understand, that means we can put fallback content between those tags and we can have newer browsers not display the content between the opening and closing tag. Very powerful. It means you get to use things like canvas, video, audio, and still provide some fallback content.

<canvas>
hide me
</canvas>

This is not an accident. This is by design. The canvas element was originally a proprietary element created by Apple. As so often happens with the way standards get done, other browsers looked at what one browser was doing with a proprietary thing and went, "Oh, that’s a good idea. We’re going to do that too," and they standardised on it. But when it was a proprietary element, it was a standalone element. It didn’t have a closing tag, right? It was standalone like image, meta, or link.

When it became standardised, they gave it a closing tag specifically so we could use this pattern, specifically so that we could put fallback content in there and safely use these new, exciting elements, but also provide fallback content for older browsers. So I really like that design pattern. Some real thought has gone into that.

There’s an interesting pattern I’d like to look at here as well, another HTML element. Now the image element has a very interesting back story. Looking at it, even from here, you can say "Wait a minute. There’s no closing tag," and it would actually be much better if we had an opening image tag, a closing image tag, and then we could put fallback content in between the opening and closing tags, like a text description of what’s in the image.

<img src alt>

But, no. Instead, we’re stuck with this alt attribute where we have to put this fallback content. It seems like a bit of a weird design decision. Well, what happened was, in the early days of the web when everybody seemed to be making a web browser, there was this mailing list for all the people making web browsers.

You have to remember. Back then there were no images on the web, but this topic came up. How could we have images on the World Wide Web? It’s being discussed, and they’re throwing ideas backwards and forwards like, oh, maybe it should be called icon, or maybe it should be called object because maybe there’ll be things other than images one day on the web.

This is all going on and Marc Andreessen, who is making the Mosaic browser, he chimes in and goes, "Uh, listen. I’ve just shipped this. It’s called I-M-G. You put the path in the src attribute, and it’s landing in the next version of Mosaic." Everyone else went, "Okay."

Because what they had was they had rough consensus. But, more importantly, they had running code. And the running code kind of trumped any sort of theoretical purity. It does mean, though, we’re stuck with these decisions.

There’s all sorts of weird stuff in HTML. You might wonder why does it work that way and not another way. It usually goes back to some historical reason like that.

Well, this worked well enough that we had this img element for throwing in, say, bitmap images. But there is a certain clash between this inherent flexibility of the web when it comes to text and bitmap images that have an inherent width and height. You put text on the web, and it doesn’t matter what the width of the browser is. It’s just going to break onto multiple lines. The web is very flexible when it comes to text.

When it comes to images, not so much because images are so fixed, so there’s kind of a clash between the web and between bitmap images. That really sort of came to a head with the rise of responsive design. It was like, oh, shit. What are we going to do now? We’ve got these fixed things, and yet sometimes we want them to be different sizes.

The responsive images problem has been solved. And again, the design decisions there are very smart. One way of solving it is the srcset attribute, right? You can put in other images and say to the browser, "Look. Here are some other images with a higher pixel density," for example, "and let the browser choose." Or we’ve got this picture element. You can wrap the image element in it, and you can provide even more images that the browser could choose from and provide media queries in there and all that stuff.

<img src alt srcset>


But, but, but… With both of those, you still have to have an image element. There’s no way you can leave it out. If you try and use picture without an image element, it just won’t work. And you have to have a source attribute because the way that these things work, both the srcset attribute and these source elements, is that they update the value of the src attribute in there, right? So you can’t leave off that initial src attribute, which means you have to provide some backwards compatibility. If you try to just use the new stuff without using the good old-fashioned src attribute, it just won’t work. That too is deliberate, and that’s a really nice design decision. Very forward thinking, but also making sure we know how things are going to behave in older browsers.

<picture>
<source srcset>
  <source srcset>
  <img src alt srcset>
</picture>

Again, the reason why we can do this with HTML is because of how it handles errors, how it handles stuff it doesn’t recognise. You give this to an older browser, it just skips over the picture stuff, the source stuff. Sees the image. If it understands that, that’s what it uses, and it doesn’t throw an error, and it doesn’t stop parsing the file at that point. So HTML is very error tolerant, I guess.

CSS

It’s similar with CSS. It has a very similar way of handling errors. Now, I know a lot of people, especially from the JavaScript world, really like to hate on CSS, but I kind of love CSS, and I’ll tell you why. If you think about all the CSS that’s out there, and there’s a lot of CSS out there because there are a lot of websites out there, and they’re all using CSS, the possible combinations are endless. Yet all of it, all of it, comes down to one pattern: selectors, properties, values. That’s it. That’s all the CSS that’s ever been written: one simple, little pattern.

selector {
  property: value;
}

The tricky part is, of course, knowing the vocabulary of all the selectors and all the properties and all the values. But the underlying pattern is super simple: a couple of special characters so that the machines can parse it, but one underlying pattern behind all of the CSS ever written. I think that’s really beautiful.

Again, we’ve been able to grow CSS over time, just add in new selectors, new properties, new values. The reason we can do that is because of how browsers handle CSS that they don’t understand. If you give a browser a selector that doesn’t exist, well, it’s just like giving it a selector that doesn’t match anything in the document. It just ignores that chunk of curly braces and skips onto the next one. If you give it a property it doesn’t understand, it just skips onto the next declaration. You give it a value, the same thing. It doesn’t throw an error, and it doesn’t stop parsing the CSS and refuse to parse any further.
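That error tolerance is exactly what lets us write fallbacks in plain CSS. A minimal sketch (the class name is invented):

```css
.layout {
  /* Understood everywhere: the fallback */
  display: block;
  /* A browser that doesn't know grid skips this declaration and
     keeps the fallback; a browser that does lets it win */
  display: grid;
}
```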

CSS, like HTML, is very error tolerant. What I find interesting about CSS lately, and when I say lately, I mean in the last, let’s say, five years, is that the biggest changes in CSS fall into two categories. First of all, you’ve got pre-processors and post-processors, things like Sass and Less. Then you’ve also got these naming conventions, these ways of organising your CSS: OOCSS, BEM, SMACSS. There are a few more.

Who here is using some kind of naming convention like this? Right. Okay.

And who here is using Sass or Less or post-processors? Right. Lots of us.

See, what I find interesting about both of those revolutions in how we do CSS is that in neither case did we have to go to the browser makers or go to the standards body and lobby them and say, "Please add this to CSS." With the preprocessors, it happens on our machines, so we don’t need to worry about anything being implemented in the browser. And with the naming conventions, well, it all happens in the selector, and nothing new needed to be added to CSS for us to come up with these new ways of naming things and conventions for class names.

In fact, even though it’s only in the last few years that these things have become popular, in theory there’s no reason why we couldn’t have been doing BEM 15 years ago, right? It’s almost like it was there hiding in plain sight the whole time, staring us in the face in that simple pattern and we just hadn’t realised its potential. I find that fascinating. I want you to remember that because I’m going to come back to this idea that something is just staring us in the face, hiding in plain sight.
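As an illustration (these class names are invented, not from any real project), a BEM-style scheme is nothing more than a convention inside ordinary class selectors:

```css
/* block__element--modifier: plain class selectors, no new CSS required */
.card { }
.card__title { }
.card--featured { }
```

Any browser that has ever supported class selectors could have parsed this.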

Okay, so CSS and HTML can grow over time because they’re error tolerant. And I think that this is an example of what’s known as the robustness principle. This is from Jon Postel:

Be conservative in what you send. Be liberal in what you accept.

Mr. DJ, you can use that as a sample.

Postel’s law

Be conservative in what you send. Be liberal in what you accept, because that’s what browsers are doing. They’re being very liberal.

Jon Postel, he worked on the Internet, and he was talking about that packet switching stuff when he came up with this principle. If you are a machine on the Internet and you’re given a packet you’re supposed to shuttle on, and let’s say there are errors in the packet, but you can still understand what you’re supposed to do with it. Well, just shuttle it on anyway even though there are errors. So be tolerant about that kind of stuff. But when you send packets out, try to make them well formed. Be conservative in what you send, but be liberal in what you accept.

Now this might sound like it’s a very technical principle that only applies to things like networking or the creation of formats for computers, but I actually see Postel’s law at work all the time in areas of design in the field of user experience. Let’s say you’ve got a form you’re going to put on the web. Well, the number one rule is try to keep the number of form fields to a minimum. Don’t ask the user to fill in too many form fields. Keep it to a minimum, right? Be conservative in what you send.

Then when the user is filling in those fields, let’s say it’s telephone number or credit card number, don’t make them format the form fields in a certain way. Just deal with it. Be liberal in what you accept.
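To sketch that idea in code (this function and its normalisation rules are my own illustration, not from the talk): accept a phone number however the user formats it, and clean it up yourself rather than rejecting it.

```javascript
// Be liberal in what you accept: spaces, brackets, and dashes are all fine.
// Keep the digits (and a leading +), strip everything else.
function normalisePhone(input) {
  return input.trim().replace(/(?!^\+)[^\d]/g, '');
}
```

The same approach works for credit card numbers, postcodes, and anything else where strict input formatting is really the site’s problem, not the user’s.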

JavaScript

Now, CSS and HTML, I think, can afford to have this robustness principle and this error tolerant handling built in, partly because of the kind of languages they are. CSS and HTML are both declarative languages. In other words, you don’t give step-by-step instructions to the browser on how to render something when you write CSS or HTML. You’re just declaring what it is. You’re declaring what the content is with HTML. You’re declaring your desired outcome in CSS. And it’s worth remembering every line of CSS you write is a suggestion to the browser, not a command.

They’re declarative languages, so they can kind of afford to be error tolerant. That’s not true when it comes to JavaScript. And I’m talking specifically here about client side JavaScript, JavaScript in a web browser. It’s an imperative language where you do give step-by-step instructions to the computer about what you want to happen. A language like that can’t afford to have loose error handling.

With JavaScript, if you give it something it doesn’t understand, it will throw an error. It will stop parsing the JavaScript at that point and refuse to parse any further in the file. It kind of has to.

If you had an imperative language that was very error tolerant, you would never be able to debug anything. You make a mistake and the browser is like, "Oh, it’s fine. Don’t worry about it." You kind of need to have that, well, frankly, more fragile error handling. It’s the price you pay.

The thing is imperative languages are, by their nature, more powerful because you get to decide a lot more. Declarative languages, like I said, you’re just kind of making suggestions what you’d like to happen. What that means is the declarative languages can afford to be more resilient whereas imperative languages, I think, are inherently more fragile.

I think there are other differences too. In my experience, declarative languages are far easier to learn. The learning curve is pretty shallow, whereas an imperative language has a much steeper learning curve, partly because you’ve got to get your head around concepts like variables, loops, and all sorts of stuff before you can even start writing.

What I’ve noticed over time, though, looking at the history of the web, is that when we’re trying to solve problems, when we run up against something like the responsive images problem, we initially start solving it up at the fragile end of the stack. We solve it with scripts. When we’ve got something working well enough, over time it finds its way down into the more resilient part of the stack, into CSS or into HTML.

If you can remember when we first started writing JavaScript way back in the day, the two most common use cases were rollovers and form validation. Rollovers: you mouse over an image; it swaps out for a different image. And form validation: has a required form field been filled in, does this actually look like an email address, stuff like that.

Now, these days you wouldn’t even use JavaScript to do that stuff because, to do rollovers, you’d use CSS; that functionality found its way into the declarative language through pseudo-classes. And if you want to make sure that a required field is filled in, you can do that in HTML by adding the required attribute. You see this over time: we solve stuff initially in the imperative layer, the fragile part, in JavaScript, and those patterns find their way down into the declarative stack over time.
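Both of those early JavaScript use cases, written declaratively today (the selectors and markup here are just illustrative):

```html
<style>
  /* The old image-swap rollover, now a pseudo-class */
  .nav a { color: blue; }
  .nav a:hover { color: red; }
</style>

<!-- The old form-validation script, now built in -->
<input type="email" name="email" required>
```

And if a browser doesn’t understand required or type="email", you just get a plain text field: the form still works.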

JavaScript, by its nature, because of its error handling, you kind of have to be a bit more careful in how you use it. It’s just the nature of the beast. What’s interesting is that, again, looking back at the history of the web, there was a moment about ten years ago when we almost had the worst of both worlds, if anyone remembers.

Yeah, PPK knows what I’m talking about: XHTML2. The idea here was, okay, so we already had XHTML1, and all that was, was taking the syntax of XML and applying it to HTML. Because in HTML, it doesn’t matter whether your tags are upper case or lower case. It doesn’t matter if your attributes are upper case or lower case. It doesn’t matter if you quote your attributes. Whereas in XHTML it has to be all lower case elements, all lower case attributes, and you always quote your attributes.

The idea of taking the syntax and applying it to HTML was kind of a nice thing because it made our HTML cleaner and kind of showed a bit of professionalism, right? That was XHTML1. It didn’t fundamentally make any difference to the browsers, whether you used an old version of HTML or used XHTML1. It was all the same.

But the idea with XHTML2 was that, as well as borrowing the syntax from XML, we would also borrow the error handling of XML. Here’s the error handling of XML. If there’s a single mistake in the document, don’t parse the document. Don’t show anything to the end user, so a really draconian error handling.

Now, web developers, designers, authors, us, we took one look at this and we said, "No. That’s insane. Why would we put stuff on the public web where, if there’s one un-encoded ampersand, you’re going to get a yellow screen of death and the user is not going to see anything?" That’s madness, right? We quite rightfully rejected XHTML2 because of its draconian error handling.

Here we are, ten years later, and we’re putting our base content, like text on a screen, into the most fragile layer of the stack. We are JavaScripting all the things. What changed? We decided ten years ago that that kind of draconian error handling was just way too fragile. It wasn’t resilient enough for the public web. But I must have missed the memo where we decided that if you want to render some text on a screen, you should use an imperative programming language where, if you make one mistake, nothing gets rendered at all. And mistakes do happen.

I remember a couple of years back when the page for downloading Google Chrome, a pretty important page, wasn’t working at all. Nobody in the world could download Google Chrome for a few hours. The reason was this link to download Google Chrome. There was an error in the JavaScript somewhere, probably a completely unrelated error. But this is the way the link had been marked up. In other words, taking that fragile, imperative part of the stack and pushing it down into the more resilient parts of the stack, and getting the worst of both worlds.

<a href="javascript:void(0)">
Download Chrome
</a>

Using this JavaScript pseudo protocol means that it’s not actually a link. It’s kind of just a pathway to the fragility of a scripting language. This illustrates another law that in some ways is just as important as Postel’s law, and that is Murphy’s Law:

Anything that can possibly go wrong will go wrong.

Murphy’s law

He was a real person. He was an aerospace engineer. And because he had this attitude, he never lost anybody on his watch. And like Postel’s law, I see Murphy’s Law in action all the time, and particularly when it comes to client side JavaScript because of the way it handles errors.

Stuart Langridge put together a sort of flow chart of all the things that can possibly go wrong with JavaScript, and some of these things are in the browser, and some of them are in the network, and some of them are in the server, things that go wrong. And of course things can go wrong with your HTML, your CSS, and your images too. But because of the error handling of those things, it doesn’t matter as much. With JavaScript, it’s going to stop parsing the entire JavaScript file if you’ve got one single error, or if something goes wrong on the network, or if the browser doesn’t support something that you’ve assumed it supported, right? So it’s inherently more fragile, and we need to embrace that.

We need to accept that shit happens. We need to accept that Murphy’s Law is real. We need to take a pretty resilient approach to how we treat that fragile layer of the stack, the imperative layer.

Could you imagine if car manufacturers who currently spend a lot of time strapping crash test dummies into cars and smashing them against walls at high speed, if they said, "You know what? Actually, we’re not going to strap crash test dummies into our cars and smash them into walls at high speed because we’ve been thinking. Actually, we don’t think crash test dummies are going to drive these cars. We think they’ll be driven by people. Also, we don’t anticipate people are going to drive their cars into the wall at high speed. We think they’ll drive on roads."

Yeah, of course that’s what we hope will happen, but you’ve still got to plan for the worst case scenario. Hope for the best; prepare for the worst. That’s not a bad thing. That’s just good engineering.

Trent Walton wrote about this. He said:

Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability.

The reality of the web’s inherent variability.

We need to face that reality. Stop pretending. Stop assuming that, oh, well, everyone has got JavaScript. Oh, that JavaScript will be fine. Those are assumptions. We need to push back on those assumptions and accept that there is variability, that Murphy’s Law is real.

Well, this all sounds very depressing, doesn’t it? I mean it sounds like I’ve come here to give you doom and, indeed, gloom. Oh, we’re all doomed. Don’t use JavaScript. Which is not what I’m saying at all. Far from it. I love JavaScript.

No, I think we just need to be a bit more careful about how we deploy it. And I’ve got a solution for you. I want to give you my three step plan for building websites. Here’s how I do it.

  1. Step one: Identify the core functionality of the service, the product you’re building.
  2. Step two: Make that core functionality available using the simplest possible technology.
  3. Step three: Enhance, which is where the fun is, right? You want to spend your time at step three, but take a little time with step one and two.

Identify core functionality

Let’s go through this. Let’s look at the first bit. Identify the core functionality. Let’s say you’re providing the news. Well, there you go. There’s your core functionality: providing the news. That’s it. There’s loads more you can do as well as providing the news. But when you really stop and think about what the core functionality is, it’s just providing the news.

Let’s say you’ve got a social network, a messaging service where people can send and receive messages from all over the world. Well, I would say the ability to send a message, the ability to receive a message, that’s the core functionality. Again, there’s lots more we can do, but that’s a core functionality. You want to make sure that anybody in the world can do that.

If you have a photo sharing app, the clue is in the name: the ability to share photos. I need to be able to see photos. I need to be able to share a photograph.

Let’s say you’ve got some writing tool where you can write, edit, and collaborate on documents. Well, there’s your core functionality right there: the ability to write and edit documents.

Make that functionality available using the simplest technology

Okay. Now that you’ve identified the core functionality, make that functionality available using the simplest possible technology. By the simplest technology, I mean you probably want to look as far down the stack as you can go.

Going back to the news site, providing the news is the core functionality. Theoretically, the simplest technology to do that would be a plain text file. I’m going to go one level up from that though. I’m going to say an HTML file. We structure that news and we put it out there on the web. That’s it. That’s how we make the core functionality available using the simplest possible technology.

That social networking site, we need to be able to send messages. We need to be able to receive messages. Well, to see messages, probably in reverse chronological order, HTML can do that. To send messages, we can do that too using forms, so a simple form field should cover that. All right, you’ve done the core functionality.
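As a sketch (the URL and field names are invented), that baseline needs nothing more than markup:

```html
<ul>
  <li>Most recent message…</li>
  <li>Older message…</li>
</ul>

<form method="post" action="/messages">
  <label for="message">Your message</label>
  <textarea id="message" name="message" required></textarea>
  <button type="submit">Send</button>
</form>
```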

For the photo sharing app, very similar. Again, reverse chronological list, but this time we need to have images in there, so our baseline is a little bit higher now. The browser needs to support images. And instead of a form field for accepting text, we’re going to have a form field for accepting an image. As far as I can tell, that’s the simplest possible technology to do this.
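A sketch of that baseline (again with an invented URL): the same kind of form, but accepting a file instead of text:

```html
<form method="post" action="/photos" enctype="multipart/form-data">
  <label for="photo">Choose a photo</label>
  <input type="file" id="photo" name="photo" accept="image/*">
  <button type="submit">Share</button>
</form>
```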

And for this collaborative writing tool, the ability to write and edit documents, a text area, a form. Okay.

Enhance!

Now if you were to stop at this point, what you have would work, but it would be kind of shitty. Okay? The fun happens at step three where you get to enhance. You take your baseline and you enhance up. This is where you get to differentiate. This is where you stand out from the competition. This is where you get to play with the cool toys where you get to make something much nicer.

With something like providing the news, well, providing layout on larger screens. There is the enhancement right there. Now it might be odd to think about layout as an enhancement, but if you think about responsive design and, particularly, mobile first responsive design, that’s exactly what layout is. You begin with the content and then, in your media queries, you add the layout as an enhancement.

You want it to look beautiful, so we can use web fonts to do that, right? I would love to think that beautiful typography is inherent to the content, but we have to accept the reality that it’s an enhancement. That’s not to belittle it. Don’t think when I say, "Oh, this is an enhancement," that I’m saying this is just an enhancement. The enhancements are where the differentiation lies, where things really shine.

In the case of our social networking messaging service, it’s sending and receiving messages. It’s full-page refreshes. It’s really dull. It’s really boring. We’re going to bring in some ajax so that we don’t need to refresh the page all the time to see the latest messages, and we could even make it work the other way, right? We can use websockets so that, for sending and receiving, we never need to refresh the page again. We get those messages arriving all the time.

Now, not every browser is going to support websockets. That’s okay because the core functionality is still available to everyone. The experience will be different. It’ll be worse in older browsers. But they can still accomplish something. That’s the key part.

In the case of our photo sharing app, all the things we said before, right? We’re going to have layout. We’re going to have web fonts. We’re going to have ajax. We’re going to have websockets. And let’s add even more stuff, newer stuff: the file API. The moment that file is in the browser, before we’ve even sent it to the server, we can start playing around with it. We can do things like CSS filters. Put sepia tones on those images. Let the user play with that.

Again, not every browser supports this stuff. That’s okay. The core functionality is there. You’re laying the stuff on.

In the case of this collaborative writing tool, all the stuff I mentioned before. You definitely want to have ajax in there. You definitely want to have all that other good stuff, websockets. But let’s make sure it’s resilient to network failures. Let’s start storing stuff in the browser itself. We’ve got all different kinds of local storage these days. I can’t even keep up with the many databases we have in a browser. Local storage and making it work offline, this is the technology I’m probably most excited about right now: service workers.

Very, very exciting. I mean properly game changing stuff. And you know when I was talking about those patterns earlier like canvas, like image, and the way they’ve been designed with backwards compatibility in mind? Service worker has been designed to be an enhancement like this. You can’t make service worker a requirement for a website. You have to add it as an enhancement because the first time someone hits your website, there is no service worker. So that again is a design decision, and that encourages the adoption of technologies like service worker. It’s a very clever move.
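Registering a service worker is itself written as an enhancement: feature-detect first, and a browser without support simply carries on with the core experience. A minimal sketch (the script URL is hypothetical):

```javascript
// Service worker as enhancement: detect support before using it.
// In environments without service worker support, this does nothing
// and returns false; the core functionality of the site still works.
function registerServiceWorker() {
  if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js');
    return true;
  }
  return false;
}

registerServiceWorker();
```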

Scale

That’s how you make websites, that three-step process. And what I like about this three-step process is that it’s scale-free, which means it works at different levels. I’ve just been talking about the level of the whole service, the product or the service you’re building. But you could apply this at different scales. You could apply it at the scale of a URL. You could ask: What is the core functionality of this URL? How do I make that functionality available using the simplest possible technology? And how can I then enhance it?

You can go deeper, to the level of a component within a page, and say, okay, what’s the simplest way of making this component work, and then how do I enhance it from there? The Filament Group talked about this with the example of providing an address. Well, the simplest way is some text with the address in it. But then you could add an image of a map. Then you could add a slippy map for more capable browsers. Then you could add animation, all sorts of good stuff. You can layer this stuff on.

My point here is that there isn’t a dichotomy between either having the basic functionality, which is available to everyone, which is quite boring, or a rich, immersive experience with all the cool APIs and the new stuff. I’m saying you can have both. By taking this layered approach, you can have both.

Now there’s a myth with this, the idea that, yeah, but this means I’m going to spend all my time in older browsers if I’m concentrating on backwards compatibility. No. Far from it. As long as you spend time making steps one and two work, I find I spend all of my time in step three because I know exactly what’s going to happen in older browsers. They’re going to get the basic core functionality, and I get to play around with the new stuff, the new toys, the new APIs, with a clear conscience. It’s kind of the safest way of playing with stuff, even when it’s only supported in one or maybe two browsers. You’re going to spend more time in newer browsers if you do this.

This is too easy

But I do get pushback on this, and the pushback falls into sort of two categories. One that this is too easy. Or rather, it’s too simplistic. It’s naïve. It’s like, "Well, what you’re talking about, that will work for a simple blog or personal site, but it couldn’t possibly scale for the really complicated app, the really complicated corporate site that I’m building."

What’s interesting is that I heard that argument before when we were trying to convince people to switch from using tables for layout and font tags to using CSS. I remember people saying, "Yes. Those examples you’ve shown, it’s all well and good for a simple, little blog or a personal site, but it could never scale to a big, corporate site." Then we got Wired.com. We got ESPN.com, and the floodgates opened.

When responsive design came along, Ethan got exactly the same thing. It’s like, "Well, Ethan, that’s all well and good for your own little website, this responsive design stuff, but it couldn’t possibly scale to a big, corporate site." Then we had the Boston Globe. We had Microsoft.com. And the floodgates opened again.

This is too hard

But the other pushback I get is that this is too hard, it’s too difficult. And I have some sympathy for this because people look at this three-step process and they’re like, "Wait. Wait. Wait a minute. You’re saying I spend my time making this stuff work in the old fashioned client server model, and then at step three, when I start adding in my JavaScript, I’m just going to recreate the whole thing again, right?" Not quite.

I think there could possibly be some duplicated work. But remember. You’re just making sure that the core functionality is available to everyone. What you do then after that, all the other functionality you add in, you don’t need to make that available to everyone.

Again, talking about the Boston Globe. I remember Matt Marquis saying there’s a whole bunch of features on the Boston Globe that require JavaScript to work. Reading the news is not one of them.

But I think this could be harder at first. If you’re not used to working this way, it’s fair enough to say, yeah, this is hard. But again, that was true when we moved from tables for layout to CSS. It was harder, at least the first time we tried it. The second time it got easier. The third time, easier still. Now it’s just the default, and I couldn’t make a website with tables for layout if I tried.

And if you’d been making fixed width websites for years then, yeah, the first time you tried to make a responsive website it was really painful. The second time was probably still painful, but not as painful. And by the third time it gets easier and now it’s just the default way you build websites, so it’s the same here. You’ve just got to get used to it.

But I still find people push back. They’re like, "Uh, this is too hard. This doesn’t work with the tools I’m using." I hate that argument because the tools are supposed to be there to support you.

The tools are supposed to be there to make you work better. That’s why you choose a library. That’s why you choose a framework. You don’t let a framework or library dictate how you approach building a website. That’s the tail wagging the dog.

Yet, I see again and again that people choose developer convenience over user needs. Now, I don’t want to belittle developer convenience. Developer convenience is hugely important, but not at the expense of user needs. There has to be a balance here.

I’ve said it often, but if I’m given the option, if there’s a problem and I have the choice of making it the user’s problem or making it my problem, I’ll make it my problem every time. You know why? That’s my job. That’s why it’s called work. Okay? Sometimes it is harder.

Everything is amazing and nobody’s happy

We’ve seen this over and over again that we’re constantly complaining about what we can’t do. It’s like, "Ugh! We’re not there yet. The web — the web kind of sucks when you compare it to Flash," or, "the web kind of sucks when you compare it to Native." It goes back a long way, right?

I remember when we were like, "Ugh! The web sucks because I’ve only got 216 colors to play with." True story: 216 colors. It’s all we had, right?

Or, "The web sucks because I’ve only got these system fonts to work with with typography." Or, "Ugh! Everything will be so much better if people would just upgrade from Netscape 4. If people would just upgrade from Internet Explorer 6, and everything will be fine. If only people would upgrade from Windows XP. If those Android 2.0 users would just upgrade, then everything would be fine, right?"

It’s like this keeps happening over and over again. We’re never happy.

My friend Frank has a wonderful essay he wrote a few years back. It’s called There is a Horse in the Apple Store.

Wherein he describes the situation. A true story. It really happened. There was a horse in the Apple store, and he describes what it’s like to see a horse in the Apple store, but he also describes the reaction, or complete lack thereof, of all the people in the Apple store. It’s like, don’t they see the tiny horse in the Apple store? It’s right in front of their faces, but they just don’t see it. And I think we’ve kind of let that happen with the web.

Frank calls things like this tiny ponies when something is amazing, but it’s right in front of you and you don’t see it. It’s a tiny pony. And I think the World Wide Web is a tiny pony. It’s amazing, and yet we’re like, "Ugh, I can’t get 60 frames per second. Ugh." Right?

It’s incredible. The web is incredible. You know why it’s incredible? It’s not because of HTTP. And it’s not even because of HTML, much as I love it. The web is incredible because of URLs.

There are plenty of other formats and plenty of other protocols on the Internet for sending and receiving messages for keeping people in touch. Some of them are better than the web at that stuff, but only the web has URLs. Only the web allows you to put something online and keep it there over time so that people can access it throughout history. That’s amazing.

Also, you build an application. You build something that people can use. You can put it on the web just by putting it at a URL. You don’t need to ask anyone’s permission. There’s no app store. There’s no gatekeeper. URLs are the beating heart of the World Wide Web.

And the fact that we can build up the store of knowledge is amazing. We can extend the reach of our networks for future generations. We can extend the reach of the collective knowledge of our species. We need to be good ancestors, and we need to leave behind a web that lasts, a web that’s resilient. Thank you.

Sunday, November 6th, 2016

Create a MarkDown tag - JSFiddle

This is a nice example of a web component that degrades gracefully—if custom elements aren’t supported, you still get the markdown content, just not converted to HTML.

<ah-markdown>
## Render some markdown!
</ah-markdown>

Wednesday, November 2nd, 2016

Performance and assumptions | susan jean robertson

We all make assumptions, it’s natural and normal. But we also need to be jolted out of those assumptions on a regular basis to help us see that not everyone uses the web the way we do. I’ve talked about loving doing support for that reason, but I also love it when I’m on a slow network, it shows me how some people experience the web all the time; that’s good for me.

I’m privileged to have fast devices and fast, broadband internet, along with a lot of other privileges. Not remembering that privilege while I work and assuming that everyone is like me is, quite possibly, one of the biggest mistakes I can make.