CSS grid in Internet Explorer 11

When I was in Boston, speaking on a lunchtime panel with Rachel at An Event Apart, we took some questions from the audience about CSS grid. Inevitably, a question about browser support came up—specifically about support in Internet Explorer 11.

(Technically, you can use CSS grid in IE11—in fact it was the first browser to ship a version of grid—but the prefixed syntax is different to the standard and certain features are missing.)
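To illustrate the difference, here’s a rough sketch of the two syntaxes side by side (simplified for illustration):

/* The standard syntax */
.layout {
    display: grid;
    grid-template-columns: 1fr 1fr;
    grid-gap: 1em;
}

/* IE11's older, prefixed implementation */
.layout {
    display: -ms-grid;
    -ms-grid-columns: 1fr 1fr;
    /* No grid-gap, and no auto-placement: every child needs
       an explicit -ms-grid-column and -ms-grid-row. */
}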

Rachel gave a great balanced response, saying that you need to look at your site’s stats to determine whether it’s worth the investment of your time trying to make a grid work in IE11.

My response was blunter. I said I just don’t consider IE11 to be a browser that supports grid.

Now, that might sound harsh, but what I meant was: you’re already dividing your visitors into browsers that support grid, and browsers that don’t …and you’re giving something to those browsers that don’t support grid. So I’m suggesting that IE11 falls into that category and should receive the layout you’re giving to browsers that don’t support grid …because really, IE11 doesn’t support grid: that’s the whole reason why the syntax is namespaced by -ms.
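In practice, you get that categorisation for free if you write the fallback layout first and wrap the grid-specific rules in a feature query. Here’s a minimal sketch (the selectors are invented for illustration). IE11 doesn’t understand @supports, so it never sees the grid rules:

/* Every browser gets this, including IE11 */
.layout > * {
    max-width: 30em;
    margin: 0 auto;
}

/* Only browsers that support grid get this */
@supports (display: grid) {
    .layout {
        display: grid;
        grid-template-columns: repeat(3, 1fr);
    }
    .layout > * {
        max-width: none;
        margin: 0;
    }
}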

You could jump through hoops to try to get your grid layout working in IE11, as detailed in a three-part series on CSS Tricks, but at that point, the amount of effort you’re putting in negates the time-saving benefits of using CSS grid in the first place.

Frankly, the whole point of prefixed CSS is that it stops being used after a reasonable amount of time (originally, the idea was that it would not be used in production, but that didn’t last long). As we’ve moved away from prefixes to flags in browsers, I’m seeing the number of prefixed properties dropping, and that’s very, very good. I’ve stopped using autoprefixer on new projects, and I’ve been able to remove it from some existing ones—please consider doing the same.

And when it comes to IE11, I’ll continue to categorise it as a browser that doesn’t support CSS grid. That doesn’t mean I’m abandoning users of IE11—far from it. It means I’m giving them the layout that’s appropriate for the browser they’re using.

Remember, websites do not need to look exactly the same in every browser.

Links, tags, and feeds

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on certain sites with strict Content Security Policies (and CSPs shouldn’t affect bookmarklets). I might have to switch back to Chrome because of this.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.
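In case you’re curious, a bookmarklet like that can be sketched in a few lines (the posting-form URL here is made up, and a real bookmarklet would be collapsed onto a single line when saved):

javascript:(function () {
    // Grab the title and URL of the currently open page…
    var title = encodeURIComponent(document.title);
    var url = encodeURIComponent(document.location.href);
    // …and pre-populate a posting form in a new window.
    window.open('https://example.com/links/new?title=' + title + '&url=' + url);
})();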

If you’re reading this, then you’re familiar with the “journal” section of adactio.com, but the “links” section is where I post the most. Here, for example, are all the links I posted yesterday. It varies from day to day, but there’s generally a handful.

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. This is because all my links are “notes to future self”, and all my future self has to do is ask “what would past me have tagged that link with?” when I’m trying to find something I previously linked to. I end up using my site’s URLs as an interface:

At the front-end gatherings at Clearleft, I usually wrap up with a quick tour of whatever I’ve added that week to:

Well, each one of those tags also has a corresponding RSS feed:

…and so on.

That means you can subscribe to just the links tagged with something you’re interested in. Here’s the full list of tags if you’re interested in seeing the inside of my head.

This also works for my journal entries. If you’re only interested in my blog posts about frontend development, you might want to subscribe to:

Here are all the tags from my journal.

You can even mix them up. For everything I’ve tagged with “typography”—whether it’s links, journal entries, or articles—the URL is:

The corresponding RSS feed is:

You get the idea. Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see.

Meanwhile, I’ll be linking, linking, linking…

Components and concerns

We tend to like false dichotomies in the world of web design and web development. I’ve noticed one recently that keeps coming up in the realm of design systems and components.

It’s about separation of concerns. The web has a long history of separating structure, presentation, and behaviour through HTML, CSS, and JavaScript. It has served us very well. If you build in that order, ensuring that something works (to some extent) before adding the next layer, the result will be robust and resilient.

But in this age of components, many people are pointing out that it makes sense to separate things according to their function. Here’s Diana Mounter in her excellent article about design systems at GitHub:

Rather than separating concerns by languages (such as HTML, CSS, and JavaScript), we’re working towards a model of separating concerns at the component level.

This echoes a point made previously in a slidedeck by Cristiano Rastelli.

Separating interfaces according to the purpose of each component makes total sense …but that doesn’t mean we have to stop separating structure, presentation, and behaviour! Why not do both?

There’s nothing in the “traditional” separation of concerns on the web (HTML/CSS/JavaScript) that restricts it only to pages. In fact, I would say it works best when it’s applied on smaller scales.

In her article, Pattern Library First: An Approach For Managing CSS, Rachel advises starting every component with good markup:

Your starting point should always be well-structured markup.

This ensures that your content is accessible at a very basic level, but it also means you can take advantage of normal flow.

That’s basically an application of starting with the rule of least power.

In chapter 6 of Resilient Web Design, I outline the three-step process I use to build on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

That chapter is filled with examples of applying those steps at the level of an entire site or product, but it doesn’t need to end there:

We can apply the three‐step process at the scale of individual components within a page. “What is the core functionality of this component? How can I make that functionality available using the simplest possible technology? Now how can I enhance it?”
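Here’s a sketch of what that might look like for a navigation component (the markup and class names are invented). The core functionality is a list of links; the simplest possible technology is plain HTML; the enhancement is a toggle button that only appears if the JavaScript runs:

<nav id="site-nav">
    <ul>
        <li><a href="/journal/">Journal</a></li>
        <li><a href="/links/">Links</a></li>
    </ul>
</nav>
<script>
// Enhancement: collapse the list behind a menu button.
// If this script never runs, the links still work.
var nav = document.getElementById('site-nav');
var button = document.createElement('button');
button.textContent = 'Menu';
button.addEventListener('click', function () {
    nav.classList.toggle('expanded');
});
nav.parentNode.insertBefore(button, nav);
</script>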

There’s another shared benefit to separating concerns when building pages and building components. In the case of pages, asking “what is the core functionality?” will help you come up with a good URL. With components, asking “what is the core functionality?” will help you come up with a good name …something that’s at the heart of a good design system. In her brilliant Design Systems book, Alla advocates asking “what is its purpose?” in order to get a good shared language for components.

My point is this:

  • Separating structure, presentation, and behaviour is a good idea.
  • Separating an interface into components is a good idea.

Those two good ideas are not in conflict. Presenting them as though they were binary choices is like saying “I used to eat Italian food, but now I drink Italian wine.” They work best when they’re done in combination.

Speaking my brains in Boston

I was in Boston last week to give a talk. I ended up giving four.

I was there for An Event Apart which was, as always, excellent. I opened up day two with my talk, The Way Of The Web.

This was my second time giving this talk at An Event Apart—the first time was in Seattle a few months back. It was also my last time giving this talk at An Event Apart—I shan’t be speaking at any of the other AEAs this year, alas. The talk wasn’t recorded either so I’m afraid you kind of had to be there (unless you know of another conference that might like to have me give that talk, in which case, hit me up).

After giving my talk in the morning, I wasn’t quite done. I was on a panel discussion with Rachel about CSS grid. It turned out to be a pretty good format: have one person who’s a complete authority on a topic (Rachel), and another person who’s barely starting out and knows just enough to be dangerous (me). I really enjoyed it, and the questions from the audience prompted some ideas to form in my head that I should really note down in a blog post before they evaporate.

The next day, I went over to MIT to speak at Design 4 Drupal. So, y’know, technically I’ve lectured at MIT now.

I wasn’t going to do the same talk as I gave at An Event Apart, obviously. Instead, I reprised the talk I gave earlier this year at Webstock: Taking Back The Web. I thought it was fitting given how much Drupal’s glorious leader, Dries, has been thinking about, writing about, and building with the indie web.

I really enjoyed giving this talk. The audience were great, and they had lots of good questions afterwards. There’s a video, which is basically my voice dubbed over the slides, followed by a good half-hour of questions.

When I was done there, after a brief excursion to the MIT bookstore, I went back across the river from Cambridge to Boston just in time for that evening’s Boston CSS meetup.

Lea had been in touch to ask if I would speak at this meet-up, and I was only too happy to oblige. I tried doing something I’ve never done before: a book reading!

No, not reading from Going Offline, my current book, which I should be encouraging people to buy. Instead I read from Resilient Web Design, the free online book that people literally couldn’t buy if they wanted to. But I figured reading the philosophical ramblings in Resilient Web Design would go over better than trying to do an oral version of the service worker code in Going Offline.

I read from chapters two (Materials), three (Visions), and five (Layers) and I really, really liked doing it! People seemed to enjoy it too—we had questions throughout.

And with that, my time in Boston was at an end. I was up at the crack of dawn the next morning to get the plane back to England where Ampersand awaited. I wasn’t speaking there though. I thoroughly enjoyed being an attendee and absorbing the knowledge bombs from the brilliant speakers that Rich assembled.

The next place I’m speaking will be much closer to home than Boston: I’ll be giving a short talk at Oxford Geek Nights on Wednesday. Come on by if you’re in the neighbourhood.

Praise for Going Offline

I’m very, very happy to see that my new book Going Offline is proving to be accessible and unintimidating to a wide audience—that was very much my goal when writing it.

People have been saying nice things on their blogs, which is very gratifying. It’s even more gratifying to see people use the knowledge gained from reading the book to turn those blogs into progressive web apps!

Sara Soueidan:

It doesn’t matter if you’re a designer, a junior developer or an experienced engineer — this book is perfect for anyone who wants to learn about Service Workers and take their Web application to a whole new level.

I highly recommend it. I read the book over the course of two days, but it can easily be read in half a day. And as someone who rarely ever reads a book cover to cover (I tend to quit halfway through most books), this says a lot about how good it is.

Eric Lawrence:

I was delighted to discover a straightforward, very approachable reference on designing a ServiceWorker-backed application: Going Offline by Jeremy Keith. The book is short (I’m busy), direct (“Here’s a problem, here’s how to solve it”), opinionated in the best way (landmine-avoiding “Do this”), and humorous without being confusing. As anyone who has received unsolicited (or solicited) feedback from me about their book knows, I’m an extremely picky reader, and I have no significant complaints on this one. Highly recommended.

Ben Nadel:

If you’re interested in the “offline first” movement or want to learn more about Service Workers, Going Offline by Jeremy Keith is a really gentle and highly accessible introduction to the topic.

Daniel Koskine:

Jeremy nails it again with this beginner-friendly introduction to Service Workers and Progressive Web Apps.

Donny Truong:

Jeremy’s technical writing is as superb as always. Similar to his first book for A Book Apart, which cleared up all my confusions about HTML5, Going Offline helps me put the pieces of the service workers’ puzzle together.

People have been saying nice things on Twitter too…

Aaron Gustafson:

It’s a fantastic read and a simple primer for getting Service Workers up and running on your site.

Ethan Marcotte:

Of course, if you’re looking to take your website offline, you should read @adactio’s wonderful book

Lívia De Paula Labate:

Ok, I’m done reading @adactio’s Going Offline book and as my wife would say, it’s the bomb dot com.

If that all sounds good to you, get yourself a copy of Going Offline in paperback, or ebook (or both).

Expectations

I noticed something interesting recently about how I browse the web.

It used to be that I would notice if a site were responsive. Or, before responsive web design was a thing, I would notice if a site was built with a fluid layout. It was worthy of remark, because it was exceptional—the default was fixed-width layouts.

But now, that has flipped completely around. Now I notice if a site isn’t responsive. It feels …broken. It’s like coming across an embedded map that isn’t a slippy map. My expectations have reversed.

That’s kind of amazing. If you had told me ten years ago that liquid layouts and media queries would become standard practice on the web, I would’ve found it very hard to believe. I spent the first decade of this century ranting in the wilderness about how the web was a flexible medium, but I felt like the laughable guy on the street corner with an apocalyptic sandwich board. Well, who’s laughing now?

Anyway, I think it’s worth stepping back every now and then and taking stock of how far we’ve come. Mind you, in terms of web performance, the trend has unfortunately been in the wrong direction—big, bloated websites have become the norm. We need to change that.

Now, maybe it’s because I’ve been somewhat obsessed with service workers lately, but I’ve started to notice my expectations around offline behaviour changing recently too. It’s not that I’m surprised when I can’t revisit an article without an internet connection, but I do feel disappointed—like an opportunity has been missed.

I really notice it when I come across little self-contained browser-based games.

Those games are great! I particularly love Battleship Solitaire—it has a zen-like addictive quality to it. If I load it up in a browser tab, I can then safely go offline because the whole game is delivered in the initial download. But if I try to navigate to the game while I’m offline, I’m out of luck. That’s a shame. These snack-sized casual games feel like the perfect use-case for working offline (or, even if there is an internet connection, they could still be speedily served up from a cache).

I know that my expectations about offline behaviour aren’t shared by most people. The idea of visiting a site even when there’s no internet connection doesn’t feel normal …yet.

But perhaps that expectation will change. It’s happened before.

(And if you want to be ready when those expectations change, I’ve written Going Offline for you.)

AMPstinction

I’ve come to believe that the goal of any good framework should be to make itself unnecessary.

Brian said it explicitly of his PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

That makes total sense, especially if your code is a polyfill—those solutions are temporary by design. Autoprefixer is another good example of a piece of code that becomes less and less necessary over time.

But I think it’s equally true of any successful framework or library. If the framework becomes popular enough, it will inevitably end up influencing the standards process, thereby becoming dispensable.

jQuery is the classic example of this. There’s very little reason to use jQuery these days because you can accomplish so much with browser-native JavaScript. But the reason why you can accomplish so much without jQuery is because of jQuery. I don’t think we would have querySelector without jQuery. The library proved the need for the feature. The same is true for a whole load of DOM scripting features.
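A quick illustration of that influence (the selector and class here are arbitrary):

// What jQuery popularised:
// $('.panel').addClass('is-open');

// What browser-native JavaScript now handles:
document.querySelectorAll('.panel').forEach(function (element) {
    element.classList.add('is-open');
});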

The same process is almost certain to occur with React—it’s a good bet there will be a standardised equivalent to the virtual DOM at some point.

When Google first unveiled AMP, its intentions weren’t clear to me. I hoped that it existed purely to make itself redundant:

As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

Alas, as time has passed, that hope shows no signs of being fulfilled. If anything, I’ve noticed publishers using the existence of their AMP pages as a justification for just letting their “regular” pages put on weight.

Worse yet, the messaging from Google around AMP has shifted. Instead of pitching it as a format for creating parallel versions of your web pages, they’re now also extolling the virtues of having your AMP pages be the only version you publish:

In fact, AMP’s evolution has made it a viable solution to build entire websites.

On an episode of the Dev Mode podcast a while back, AMP was a hotly-debated topic. But even those defending AMP were doing so on the understanding that it was more a proof-of-concept than a long-term solution (and also that AMP is just for news stories—something else that Google are keen to change).

But now it’s clear that the Google AMP Project is being marketed more like a framework for the future: a collection of web components that prioritise performance …which is kind of odd, because that’s also what Google’s Polymer project is. The difference being that pages made with Polymer don’t get preferential treatment in Google’s search results. I can’t help but wonder how the Polymer team feels about AMP’s gradual pivot onto their territory.

If the AMP project existed in order to create a web where AMP was no longer needed, I think I could get behind it. But the more it’s positioned as the only viable solution to the web’s performance problems, the more uncomfortable I am with it.

Which, by the way, brings me to one of the most pernicious ideas around Google AMP—positioning anyone opposed to it as not caring about web performance. Nothing could be further from the truth. It’s precisely because performance on the web is so important that it deserves a long-term solution, co-created by all of us: not some commandments delivered to us from on-high by one organisation, enforced by preferential treatment by that organisation’s monopoly in search.

It’s the classic logical fallacy:

  1. Performance! Something must be done!
  2. AMP is something.
  3. Now something has been done.

By marketing itself as the only viable solution to the web performance problem, I think the AMP project is doing itself a great disservice. If it positioned itself as an example to be emulated, I would welcome it.

I wish that AMP were being marketed more like a temporary polyfill. And as with any polyfill, I look forward to the day when AMP is no longer necessary.

I want AMP to become extinct. I genuinely think that the Google AMP team should share that wish.

Unworn Pleasures

I’ve made no secret of my admiration of Jocelyn Bell Burnell, and how Peter Saville’s iconic cover design for Joy Division’s Unknown Pleasures always reminds me of her.

There are many, many memetic variations of that design.

(Pictured, among the many variations: Spaghetti, All Lined Up Quite Nicely; Furr Division; Depeche Mode; Boys Don’t Cry; “What is this? I’ve seen it on Tumblr.”)

I assumed that somebody somewhere at some time must have made a suitable tribute to the discoverer of those pulses, but I’ve never come across any Jocelyn-themed variation of the Joy Division album art.

So I made my own.

(Pictured: the Jocelyn T-shirt.)

The test order I did just showed up, and it’s looking pretty nice (although be warned that the sizes run small—I ordered a large, and I probably should’ve gone for extra large). If your music/radio-astronomy Venn diagram overlaps like mine, then you too might enjoy being the proud bearer of this wearable tribute to Dame Jocelyn Bell Burnell.

Design systems

Talking about scaling design can get very confusing very quickly. There are a bunch of terms that get thrown around: design systems, pattern libraries, style guides, and components.

The generally-accepted definition of a design system is that it’s the outer circle—it encompasses pattern libraries, style guides, and any other artefacts. But there’s something more. Just because you have a collection of design patterns doesn’t mean you have a design system. A system is a framework. It’s a rulebook. It’s what tells you how those patterns work together.

This is something that Cennydd mentioned recently:

Here’s my thing with the modularisation trend in design: where’s the gestalt?

In my mind, the design system is the gestalt. But Cennydd is absolutely right to express concern—I think a lot of people are collecting patterns and calling the resulting collection a design system. No. That’s a pattern library. You still need to have a framework for how to use those patterns.

I understand the urge to fixate on patterns. They’re small enough to be manageable, and they’re tangible—here’s a carousel; here’s a date-picker. But a design system is big and intangible.

Games are great examples of design systems. They’re frameworks. A game is a collection of rules and constraints. You can document those rules and constraints, but you can’t point to something and say, “That is football” or “That is chess” or “That is poker.”

Even though they consist entirely of rules and constraints, football, chess, and poker still produce an almost infinite possibility space. That’s quite overwhelming. So it’s easier for us to grasp instances of football, chess, and poker. We can point to a particular occurrence and say, “That is a game of football”, or “That is a chess match.”

But if you tried to figure out the rules of football, chess, or poker just from watching one particular instance of the game, you’d have your work cut out for you. It’s not impossible, but it is challenging.

Likewise, it’s not very useful to create a library of patterns without providing any framework for using those patterns.

I would go so far as to say that the actual code for the patterns is the least important part of a design system (or, certainly, it’s the part that should be most malleable and open to change). It’s more important that the patterns have been identified, named, described, and crucially, accompanied by some kind of guidance on usage.

I could easily imagine using a tool like Fractal to create a library of text snippets with no actual code. Those pieces of text—which provide information on where and when to use a pattern—could be more valuable than providing a snippet of code without any context.

One of the very first large-scale pattern libraries I can remember seeing on the web was Yahoo’s Design Pattern Library. Each pattern outlined

  1. the problem being solved;
  2. when to use this pattern;
  3. when not to use this pattern.

Only then, almost incidentally, did they link off to the code for that pattern. But it was entirely possible to use the system of patterns without ever using that code. The code was just one instance of the pattern. The important part was the framework that helped you understand when and where it was appropriate to use that pattern.

I think we lose sight of the real value of a design system when we focus too much on the components. The components are the trees. The design system is the forest. As Paul asked:

What methodologies might we uncover if we were to focus more on the relationships between components, rather than the components themselves?

Thanos

I’m going to discuss Avengers: Infinity War without spoilers, unless you count the motivations of the main villain as a spoiler, in which case you should stop reading now.

The most recent book by Charles C. Mann—author of 1491 and 1493—is called The Wizard And The Prophet. It profiles two twentieth century figures with divergent belief systems: Norman Borlaug and William Vogt. (Trust me, this will become relevant to the new Avengers film.)

I’ve long been fascinated by Norman Borlaug, father of the Green Revolution. It is quite possible that he is responsible for saving more lives than any other single human being in history (with the possible exception of Stanislav Petrov who may have saved the entire human race through inaction). In his book, Mann dubs Borlaug “The Wizard”—the epitome of a can-do attitude and a willingness to use technology to solve global problems.

William Vogt, by contrast, is “The Prophet.” His groundbreaking research crystallised many central tenets of the environmental movement, including the term he coined, carrying capacity—the upper limit to a population that an environment can sustain. Vogt’s stance is that there is no getting around the carrying capacity of our planet, so we need to make do with less: fewer people consuming fewer resources.

Those are the opposing belief systems. Prophets believe that carrying capacity is fixed and that if our species exceeds this limit, we are doomed. Wizards believe that technology can treat carrying capacity as damage and route around it.

Vogt’s philosophy came to dominate the environmental movement for the latter half of the twentieth century. It’s something I’ve personally found very frustrating. Groups and organisations that I nominally agree with—the Green Party, Greenpeace, etc.—have anti-technology baggage that doesn’t do them any favours. The uninformed opposition to GM foods is a perfect example. The unrealistic lauding of country life over the species-saving power of cities is another.

And yet history so far has favoured the wizards. The Malthusian population bomb never exploded, partly thanks to Borlaug’s work, but also thanks to better education for women in the developing world, which had enormously positive repercussions.

Anyway, I find this framing of fundamental differences in attitude to be fascinating. Ultimately it’s a stand-off between optimism (the wizards) and pessimism (the prophets). John Faithful Hamer uses this same lens to contrast recent works by Steven Pinker and Yuval Noah Harari. Pinker is a wizard. Harari is a prophet.

I was not expecting to be confronted with the wizards vs. prophets debate while watching Avengers: Infinity War, but there’s no getting around it—Thanos is a prophet.

Very early on, we learn that Thanos doesn’t want to destroy all life in the universe. Instead, he wants to destroy half of all life in the universe. Why? Carrying capacity. He believes the only way to save life is to reduce its number (and therefore its footprint).

Many reviews of the film have noted how the character of Thanos is strangely sympathetic. It’s no wonder! He is effectively toeing the traditional party line of the mainstream environmental movement.

There’s even a moment in the film where Thanos explains how he came to form his opinions through a tragedy in the past that he correctly predicted. “Congratulations”, says one of his heroic foes sarcastically, “You’re a prophet.”

Earlier in the film, as some of the heroes are meeting for the first time, there are gags and jokes referring to Dr. Strange’s group as “the wizards.”

I’m sure those are just coincidences.

2001 + 50

The first ten minutes of my talk at An Event Apart Seattle consisted of me geeking out about science fiction. There was a point to it …I think. But I must admit it felt quite self-indulgent to ramble to a captive audience about some of my favourite works of speculative fiction.

The meta-narrative I was driving at was around the perils of prediction (and how that’s not really what science fiction is about). This is something that Arthur C. Clarke pointed out repeatedly, most famously in Hazards of Prophecy. Ironically, I used Clarke’s meisterwork of a collaboration with Stanley Kubrick as a rare example of a predictive piece of sci-fi with a good hit rate.

When I introduced 2001: A Space Odyssey in my talk, I mentioned that it was fifty years old (making it even more of a staggering achievement, considering that humans hadn’t even reached the moon at that point). What I didn’t realise at the time was that it was fifty years old to the day. The film was released in American cinemas on April 2nd, 1968; I was giving my talk on April 2nd, 2018.

Over on Wired.com, Stephen Wolfram has written about his own personal relationship with the film. It’s a wide-ranging piece, covering everything from the typography of 2001 (see also: Typeset In The Future) right through to the nature of intelligence and our place in the universe.

When it comes to the technology depicted on-screen, he makes the same point that I was driving at in my talk—that, despite some successful extrapolations, certain real-world advances were not only unpredicted, but perhaps unpredictable. The mobile phone; the collapse of the Soviet Union …these are real-world developments that are conspicuous by their absence in other great works of sci-fi like William Gibson’s brilliant Neuromancer.

But in his Wired piece, Wolfram also points out some acts of prediction that were so accurate that we don’t even notice them.

Also interesting in 2001 is that the Picturephone is a push-button phone, with exactly the same numeric button layout as today (though without the * and # [“octothorp”]). Push-button phones actually already existed in 1968, although they were not yet widely deployed.

To use the Picturephone in 2001, one inserts a credit card. Credit cards had existed for a while even in 1968, though they were not terribly widely used. The idea of automatically reading credit cards (say, using a magnetic stripe) had actually been developed in 1960, but it didn’t become common until the 1980s.

I’ve watched 2001 many, many, many times and I’m always looking out for details of the world-building …but it never occurred to me that push-button numeric keypads or credit cards were examples of predictive extrapolation. As time goes on, more and more of these little touches will become unnoticeable and unremarkable.

On the space shuttle (or, perhaps better, space plane) the cabin looks very much like a modern airplane—which probably isn’t surprising, because things like Boeing 737s already existed in 1968. But in a correct (at least for now) modern touch, the seat backs have TVs—controlled, of course, by a row of buttons.

Now I want to watch 2001: A Space Odyssey again. If I’m really lucky, I might get to see a 70mm print in a cinema near me this year.

Fit For Purpose: Making Sense of the New CSS by Eric Meyer

Time for even more CSS goodness at An Event Apart Seattle (Special Edition). Eric’s talk is called Fit For Purpose: Making Sense of the New CSS. Here are my notes…

Eric isn’t going to dive quite as deeply as Rachel, but he is going to share some patterns he has used.

Feature queries

First up: feature queries! Or @supports, if you prefer. You can ask a browser “do you support this feature?” If you haven’t used feature queries, you might be wondering why you have to say the property and the value. Well, think about it. If you asked a browser “do you support display?”, it’s not very useful. So you have to say “do you support display: grid?”

Here’s a nice pattern from Lea Verou for detecting support for custom properties:

@supports (--css: variables)

Here’s a gotcha:

@supports (clip-path: polygon())

That won’t work because polygon() is invalid. This will work:

@supports (clip-path: polygon(0 0))

So to use feature queries, you need to understand valid values for properties.

You can chain feature queries together, or just pick the least-supported thing you’re testing for and test just for that.

Here’s a pattern Eric used when he wanted to make text sideways, but only if grid is supported:

@supports (display: grid) {
    ...
    @supports (writing-mode: sideways-lr) {
        ...
    }
}

That’s functionally equivalent to:

@supports (display: grid) {
    ...
}
@supports (display: grid) and (writing-mode: sideways-lr) {
    ...
}

Choose whichever pattern makes sense to you. More to the point, choose the pattern that makes sense to your future self when you revisit your code.

Feature queries need to work together with media queries. Sometimes there are effects that you only want to apply on larger viewports. Do you put your feature queries inside your media queries? Or do you put your media queries inside your feature queries?

  • MOSS: Media Outside Support Statements
  • MISO: Media Inside Supports Object

Use MOSS when you have more media switches than support blocks. Use MISO when you only have a few breakpoints but lots of feature queries.
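Sketched out with an arbitrary breakpoint, the two orderings look like this:

/* MOSS: media queries on the outside */
@media (min-width: 40em) {
    @supports (display: grid) {
        ...
    }
}

/* MISO: feature queries on the outside */
@supports (display: grid) {
    @media (min-width: 40em) {
        ...
    }
}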

That’s one idea that Eric has. It’ll be interesting to see how this develops.

And remember, CSS is still CSS. Sometimes you don’t need a feature query at all. You could just use hanging-punctuation without testing for it. Browsers that don’t understand it will just ignore it. CSS has implicit feature queries built in. You don’t have to put your grid layout in a feature query, but you might want to put grid-specific margins and widths inside a feature query for display: grid.
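For example, you could just write the following and let non-supporting browsers skip the declaration entirely:

blockquote {
    hanging-punctuation: first;
}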

Feature queries really help us get from now to the future.

Flexbox

Let’s move on to flexbox. Flexbox is great for things in a line.

On the An Event Apart site, the profile pictures have social media icons lined up at the bottom. Sometimes there are just a few. Sometimes there are a lot more. This is using flexbox. Why? Because it’s cool. Also, because it’s flexbox, you can create rules about how the icons should behave if one of the icons is taller than the others. (It’s gotten to the point that Eric has forgotten that vertically-centring things in CSS is supposed to be hard. The jokes aren’t funny any more.) Also, what if there’s no photo? Using flexbox, you can say “if there’s no photo, change the direction of the icons to be vertical.” Once again, it’s all about writing less CSS.
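The kind of rules being described might look something like this (the class names are invented for illustration):

.profile {
    display: flex; /* photo and icons side by side */
}
.profile .icons {
    display: flex;
    align-items: center; /* a taller icon won't break the line-up */
}
.profile.no-photo .icons {
    flex-direction: column; /* no photo? stack the icons vertically */
}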

Also, note that the profile picture is being floated. That’s the right tool for the job. It feels almost transgressive to use float for exactly the purpose for which it was intended.

On the An Event Apart site, the header is currently using absolute positioning to pull the navigation from the bottom of the page source to the top of the viewport. But now you get overlap at some screen sizes. Flexbox would make it much more robust. (Eric uses the flexbox inspector in Firefox Nightly to demonstrate.)

With flexbox, what works horizontally works vertically. Flexbox allows you to align things, as long as you’re aligning in one direction. Flexbox makes things springy. Everything’s related and pushing against each other in a way that makes sense for this medium. It’s intuitive, even though it takes a bit of getting used to …because we’ve picked up some bad habits. To quote Yoda, “You must unlearn what you have learned.” A lot of the barrier is getting over what we’ve internalised. Eric envies the people starting out now. They get to start fresh. It’s like when people who never had to table layouts see code from that time period: it (quite rightly) doesn’t make any sense. That’s what it’s going to be like when people starting out today see the float-based layouts from Bootstrap and the like.

Grid

That’s going to happen with grid too. We must unlearn what we have learned from twenty years of floats and positioning. What makes it worthwhile is:

  1. Flexbox and grid are pretty easy to get used to, and
  2. It’s amazing what you can do!

Eric quotes from an article called How We Adopted CSS Grid at Scale:

…we agreed to use CSS Grid at the layout level and Flexbox at the component level (arranging child items of components). Although there’s some overlap and in some cases both could be used interchangeably, abiding by this rule helped us avoid any confusion in gray areas.

Don’t be afraid to set these kinds of arbitrary limits that aren’t technological, but are necessary for the team to work well together.

Eric hacked his WordPress admin interface to use grid instead of floats for an activity component (a list of dates and titles). He initially turned each list item into a separate grid. The overall list didn’t look right. What Eric really needed was a subgrid capability, so that the mini grids (the list items) would relate to one another within the larger grid (the list). But subgrid doesn’t exist yet.

In this case, there’s a way to fake it using display: contents. Eric made the list a grid and used display: contents on the list items. It’s as though you’re saying that the contents of the li are really the contents of the ul. That works in this particular case.
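A sketch of the technique (the selectors are invented):

ul.activity {
    display: grid;
    grid-template-columns: max-content 1fr; /* dates, then titles */
}
ul.activity li {
    /* The li's children now participate directly in the
       ul's grid, as if the li itself wasn't there. */
    display: contents;
}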

The feature queries for that looked like:

@supports (display: grid) {
    ...
    @supports (display: contents) {
        ...
    }
}

Eric is also using the grid “ASCII art” (named areas) technique on his personal site. This works independently of source order. For that reason, make sure your source order makes sense.

Using media queries, Eric defines entirely different layouts simply by using different ASCII art. He’s switching templates.
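Something along these lines (a simplified sketch with invented area names):

body {
    display: grid;
    grid-template-areas:
        "header"
        "main"
        "sidebar"
        "footer";
}
@media (min-width: 50em) {
    body {
        grid-template-columns: 2fr 1fr;
        grid-template-areas:
            "header header"
            "main   sidebar"
            "footer footer";
    }
}
/* Each child is assigned to a named area, independent of source order */
header { grid-area: header; }
main   { grid-area: main; }
aside  { grid-area: sidebar; }
footer { grid-area: footer; }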

For a proposed redesign of the An Event Apart site, Eric used CSS grid as a prototyping tool. He took a PDF, sliced it up, exported JPGs, and then used grid to lay out those images in a flexible grid. Rapid prototyping! The Firefox grid inspector really helps here. In less than an hour, he had a working layout. He could test whether the layout was sensible and robust. Then he swapped out the sliced images for real content. That took maybe another hour (mostly because it was faster to re-type the text than try to copy and paste from a PDF). CSS makes it that damn easy now!

So even if you’re not going to put things like grid into production, they can still be enormously useful as design tools (and you’re getting to grips with this new stuff).

See also:

Beyond Engagement: the Content Performance Quotient by Jeffrey Zeldman

I’m at An Event Apart Seattle (Special Edition). Jeffrey is kicking off the show with a presentation called Beyond Engagement: the Content Performance Quotient. I’m going to jot down some notes during this talk…

First, a story. Jeffrey went to college in Bloomington, Indiana. David Frost—the British journalist—came to talk to them. Frost had a busy schedule, and when he showed up, he seemed a little tipsy. He came up to the podium and said, “Good evening, Wilmington.”

Jeffrey remembers this and knows that Seattle and Portland have a bit of a rivalry, and so Jeffrey thought, the first time he spoke in Portland, it would be funny to say “Good morning, Seattle!” …and that was the last time he spoke in Portland.

Anyway …”Good morning, Portland!”

Jeffrey wants to talk about content. He spends a lot of time in meetings with stakeholders. Those stakeholders always want things to be better, and they always talk about “engagement.” It’s the number one stakeholder request. It’s a metric that makes stakeholders feel comfortable. It’s measurable—the more seconds people give us, the better.

But is that really the right metric?

There are some kinds of sites where engagement is definitely the right metric. Instagram, for example. That’s how they make money. You want to distract yourself. Also, if you have a big content site—beautifully art-directed and photographed—then engagement is what you want. You want people to spend a lot of time there. Or if you have a kids site, or a games site, or a reading site for kids, you want them to be engaged and spend time. A List Apart, too. It’s like the opposite of Stack Overflow, where you Google something and grab the piece of code you need and then get out. But for A List Apart or Smashing Magazine, you want people to read and think and spend their time. Engagement is what you want.

But for most sites—insurance, universities—engagement is not what you want. These sites are more like a customer service desk. You want to help the customer as quickly as possible. If a customer spends 30 minutes on our site, was she engaged …or frustrated? Was it the beautiful typography and copy …or because she couldn’t find what she wanted? If someone spends a long time on an ecommerce site, is it because the products are so good …or because search isn’t working well?

What we need is a metric called speed of usefulness. Jeffrey calls this Content Performance Quotient (CPQ) …because business people love three-letter initialisms. It’s a loose measurement: How quickly can you solve the customer’s problem? It’s the shortest distance between the problem and the solution. Put another way, it’s a measurement of your value to the customer. It’s a new way to evaluate success.

From the customer’s point of view, CPQ is the time it takes the customer to get the information she came for. From the organisation’s point of view, it’s the time it takes for a specific customer to find, receive, and absorb your most important content.

We’re all guilty of neglecting the basics on our sites—just what is it that we do? We need to remember that we’re all making stuff to make people’s lives easier. Otherwise we end up with what Jeffrey calls “pretty garbage.” It’s aesthetically coherent and visually well-designed …but if the content is wrong and doesn’t help anyone, it’s garbage. Garbage in a delightfully responsive grid is still garbage.

Let’s think of an example of where people really learned to cut back and really pare down their message. Advertising. In the 1950s, when the Leo Burnett agency started the Marlboro campaign, TV spots were 60 seconds long. An off-camera white man in a suit with a soothing voice would tell you all about the product while the visuals showed you what he was talking about. No irony. Marlboro did a commercial where there was no copy at all until the very end. For 60 seconds they showed you cowboys doing their rugged cowboy things. Men in the 1950s wanted to feel rugged, you see. Leo Burnett aimed the Marlboro cigarettes at those men. And at the end of the 60 second montage of rugged cowboys herding steers, they said “Come to where the flavour is. Come to Marlboro Country.” For the billboard, they cut it back even more. Just “Come to Marlboro Country.” In fact, they eventually went to just “Marlboro.” Jeffrey knows that this campaign worked well, because he started smoking Marlboros as a kid.

Leaving aside the ethical implications of selling cigarettes to eighth-graders, let’s think about the genius of those advertisers. Slash your architecture and shrink your content. Constantly ask yourself, “Why do we need this?”

As Jared Spool says, design is the rendering of intent. Every design is intentional. There is some intent—like engagement—driving our design. If there’s no intent behind the design, it will fail, even if what you’re doing is very good. If your design isn’t going somewhere, it’s going nowhere. You’ve got to stay ruthlessly focused on what the customer needs and “kill your darlings” as Hemingway said. Luke Wroblewski really brought this to light when he talked about Mobile First.

To paraphrase David Byrne, how did we get here?

Well, we prioritised meetings over meaning. Those meetings can be full of tension; different stakeholders arguing over what should be on the homepage. And we tried to solve this by giving everyone what they want. Having a pleasant meeting doesn’t necessarily mean having a good meeting. We think of good meetings as conflict-free, where everyone emerges happy. But maybe there should be a conflict that gets resolved. Maybe there should be winners and losers.

Behold our mighty CMS! Anyone can add content to the website. Anyone can create the information architecture …because we want to make people happy in meetings. It’s easy to give everyone what they want. It’s harder to do the right thing. Harder for us, but better for the customer and the bottom line.

As Gerry McGovern says:

Great UX professionals are like whistleblowers. They are the voice of the user.

We need to stop designing 2001 sites for a 2018 web.

One example of cutting down content was highlighted in A List Apart where web design was compared to chess: The King vs. Pawn Game of UI Design. Don’t start by going through all the rules. Teach them in context. Teach chess by starting with a checkmate move, reduced down to just three pieces on the board. From there, begin building out. Start with the most important information, and build out from there.

When you strip down the game to its core, everything you learn is a universal principle.

Another example is atomic design: focus relentlessly on the individual interaction. We do it for shopping carts. We can do it for content.

Another example on A List Apart is No More FAQs: Create Purposeful Information for a More Effective User Experience. FAQ problems include:

  • duplicate and contradictory information,
  • lack of discernible content order,
  • repetitive grammatical structure,
  • increased cognitive load, and
  • more content than they need.

Users come to any type of content with a particular purpose in mind, ranging from highly specific (task completion) to general learning (increased knowledge).

The important word there is purpose. We need to eliminate distraction. How do we do that?

One way is the waterfall method. Do a massive content inventory. It’s not recommended (unless maybe you’re doing a massive redesign).

Agile and scrum is another way. Constantly iterate on content. Little by little over time, we make the product better. It’s the best bet if you work in-house.

If you work in an agency, a redesign is an opportunity to start fresh. Take everything off the table and start from scratch. Jeffrey’s friend Fred Gates got an assignment to redesign an online gaming platform for kids to teach them reading and management skills. The organisation didn’t have much money so they said, let’s just do the homepage. Fred challenged himself to put the whole thing on the homepage. The homepage tells the whole story. Jeffrey is using this same method on a site for an insurance company, even though the client has a bigger budget and can afford more than just the homepage. The point is, what Fred did was effective.

So this is what Jeffrey is going to be testing and working on: speed of usefulness.

And for those of you who do need to use engagement as the right metric, Jeffrey covered the two kinds of metrics in an article called We need design that is faster and design that is slower.

For example, “scannability” is good for transactions (CPQ), but bad for thoughtful content (engagement). Our news designs need to slow down the user. Bigger type, typographic hierarchy, and more whitespace. Art direction. Shout out to Derek Powazek who designed Fray.com—each piece was designed based on the content. These days, look at what David Sleight and his crew are doing over at Pro Publica.

Who’s doing it right?

The Washington Post, The New York Times, Pro Publica, Slate, Smashing Magazine, and Vox are all doing this well in different ways. They’re bringing content to the fore.

Readability, Medium, and A List Apart are all using big type to encourage thoughtful reading and engagement.

But for other sites …apply the Content Performance Quotient.

See also:

A workshop on building for resilience

In February, I tried out a new workshop two times—once at Webstock in New Zealand, and once in Hong Kong.

The workshop is called The Progressive Web: Building for Resilience. Here’s an excerpt from the blurb:

This workshop will show you how to think in a progressive way that works with the grain of the web. Together we’ll peel back the layers of the web and build upwards, creating experiences that work for everyone while making the best of cutting-edge browser technologies. From URL design to Progressive Web Apps, this journey will cover each stage of technological advancement.

Basically, it’s the workshop version of Resilient Web Design. If that book is the theory, this workshop is the practice.

Tim recently posted his tips for running workshops and there’s a lot in there that resonates with me. Like Tim, I’ve become less and less reliant on slides. In fact, this workshop—like my workshop on evaluating technology—has no slides. Instead it’s all about the exercises and going with the flow.

After starting with a warm-up, I canvass the room to see if there are any specific topics, tools or technologies that people are particularly interested in covering. I’ll note those (on post-its slapped on the wall) for reference throughout the day, to try to make sure that those particular things are touched on at some point. Then I start with a thought experiment…

First of all, I get everyone to call out websites, services and apps that they use almost every day: Twitter, Facebook, Gmail, Slack, Google Docs, and so on. Those all get documented on the wall. Then it’s time to ask of each product, “What is the core functionality?” The idea here is to get beneath the surface-level verbs like swiping, tapping and dragging to get to the real purpose of a service: buying, selling, sharing, reading, writing, collaborating, and so on.

At this point I inform the attendees that the year is 1995. And now we’re going to build these services using the technology of this time. This is a playful way of getting answers to the question “What’s the simplest technology to enable the core functionality?” It’s mostly forms, links, and lots of heavy lifting on the server.

Then the real fun begins. “Enhance!” Moving forward in time, we get to add styles, we add interactivity with JavaScript, then Ajax, and then we get to really have fun with technologies like web sockets, geolocation, local storage, right the way up to service workers, notifications, and background sync. And the beauty of it all is that, if any of those technologies aren’t supported in a particular browser or device, the core functionality is still available.

Next, we apply this layered mindset to a new service. I split the attendees into groups, and each of them gets a procedurally-generated startup idea …generated by shuffling some cards. This is an exercise I first tried when I was teaching in Porto:

I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

The first few exercises are good creative fun: come up with a name, then a logo, then a business model. Then it’s time to build. It starts with URL design. Then it’s content prioritisation (for a representative URL). Then it’s layout (sketching!). The enhancements have begun. “How might this URL benefit from Ajax?” “How might this URL benefit from geolocation?” “How might this URL benefit from offline storage?” “How might this URL benefit from a service worker?”

(Photos: workshop teams 1–4.)

At this point, we’ve applied the layered, progressive approach at the scale of an entire service, and at the scale of an individual URL. Finally, we apply the same approach at the level of a component. It might be a navigation, or a carousel, or an interactive widget. In each case, the same process applies: “What’s the core functionality? What’s the simplest technology to enable that functionality? Enhance!”

Along the way, there are plenty of rabbit holes we can go down. Whether it’s accessibility, or progressive web apps, or pattern libraries, I go along with whatever people are curious about. But all of it ties back to the progressive, layered mindset I’m hoping to foster.

By the end of the day, I’m hoping that an attendee has one of two reactions:

  1. “What a waste of time! Everything in that workshop was blindingly obvious!” (in which case, excellent!—they’re already thinking in a progressive way), or
  2. “That workshop has completely changed the way I think about building on the web!” (I’m being hyperbolic here, but at the very least I’m hoping to impart a new perspective).

Having given the workshop a few times, I’m really pleased with how it went (and more important, I’m pleased that people enjoyed it). If this sounds like something that your company or team would enjoy, get in touch and we can take it from there.

Offline itineraries with service workers

The Trivago website is a progressive web app. That means it

  1. is served over HTTPS,
  2. has a web app manifest JSON file, and it
  3. has a service worker script.

The service worker provides an opportunity for a nice bit of fun branding—if you lose your internet connection, the site provides a neat little maze game you can play. Cute!

That’s a fairly simple example of how service workers can enhance the user experience when the dreaded offline situation arises. But it strikes me that the travel industry is the perfect place to imagine other opportunities for offline enhancements.

Travel sites often provide itineraries—think airlines, trains, or hotels. The itineraries consist of places, times, and contact information. This is exactly the kind of information that you might find yourself trying to retrieve in an emergency situation, like maybe in a cab on the way to the airport or train station. Perhaps you’re stuck in traffic, in a tunnel. Or maybe you don’t have a data plan for the country you’re currently in. Either way, wouldn’t it be great if you could hit the website for your airline or hotel and get your itinerary, even if you’re offline?

Alright, let’s think this through…

Let’s assume that an individual itinerary has its own URL. That URL is a web page of information, mostly text, with perhaps an image or two (like a map). Now when you make your booking, let’s have the service worker cache that URL (and its assets) for offline access.

Hmm …but there’s a good chance that the device you make the booking on is not the same device that you’d have with you out and about. Because caches are local to the browser, that’s a problem.

Okay, but most of these kinds of sites have some kind of log-in mechanism. So we could update the log-in flow a bit: when a user logs in, check to see if they have any itineraries assigned to them, and if they do, fire off an event to the service worker (using postMessage) to cache the URLs of the itineraries.
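Sketching that flow out (the message type, cache name, and URLs here are all invented):

// In the page, once the user has logged in:
navigator.serviceWorker.ready.then(function (registration) {
    registration.active.postMessage({
        type: 'cacheItineraries',
        urls: ['/itineraries/12345/', '/itineraries/12345/map.png']
    });
});

// In the service worker script:
addEventListener('message', function (event) {
    if (event.data.type === 'cacheItineraries') {
        event.waitUntil(
            caches.open('itineraries').then(function (cache) {
                return cache.addAll(event.data.urls);
            })
        );
    }
});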

Now that the itineraries are cached, the final step is to create a custom offline page. As well as the usual “Sorry, the internet’s down” message, we can say “Sorry, the internet’s down …but here are your itineraries”. (This is kind of like the pattern you see on blogs like mine, Ethan’s, or Mike’s—a custom offline page that lists cached URLs of articles you’ve previously visited).
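The corresponding fetch handler could be as simple as this (deliberately simplified; /offline.html stands in for the custom offline page, assumed to have been cached at install time):

// In the service worker: try the network, fall back to the cache,
// and if both fail, show the custom offline page.
addEventListener('fetch', function (event) {
    event.respondWith(
        fetch(event.request).catch(function () {
            return caches.match(event.request).then(function (response) {
                return response || caches.match('/offline.html');
            });
        })
    );
});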

That’s just one pattern off the top of my head. It’s fun to imagine the different ways that service workers could be used to enhance the experience of just about any site, but they seem particularly relevant to travel sites—dodgy internet connections and travelling go hand-in-hand. At Clearleft, we’ve been working with quite a few travel-related clients lately so that’s why these scenarios are on my mind: booking holidays, flights, and so on. But, as I’ve said before and I’ll say again, every website can benefit from becoming a progressive web app.

Ends and means

The latest edition of the excellent History Of The Web newsletter is called The Day(s) The Web Fought Back. It recounts the first time that websites stood up against bad legislation in the form of the Communications Decency Act (CDA), and goes on to describe the even more effective use of blackout protests against SOPA and PIPA.

I remember feeling very heartened to see Wikipedia, Google, and others take a stand on January 18th, 2012. But I also remember feeling uneasy. In this particular case, companies were lobbying for a cause I agreed with. But what if they were lobbying for a cause I didn’t agree with? Large corporations using their power to influence politics seems like a very bad idea. Isn’t it still a bad idea, even if I happen to agree with the cause?

Cloudflare quite rightly kicked The Daily Stormer off their roster of customers. Then the CEO of Cloudflare quite rightly wrote this in a company-wide memo:

Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

There’s an uncomfortable tension here. When do the ends justify the means? Isn’t the whole point of having principles that they hold true even in the direst circumstances? Why even claim that corporations shouldn’t influence politics if you’re going to make an exception for net neutrality? Why even claim that free speech is sacrosanct if you make an exception for nazi scum?

Those two examples are pretty extreme and I can easily justify the exceptions to myself. Net neutrality is too important. Stopping fascism is too important. But where do I draw the line? At what point does something become “too important”?

There are more subtle examples of corporations wielding their power. Google are constantly using their monopoly position in search and browser market share to exert influence over website-builders. In theory, that’s bad. But in practice, I find myself agreeing with specific instances. Prioritising mobile-friendly sites? Sounds good to me. Penalising intrusive ads? Again, that seems okey-dokey to me. But surely that’s not the point. So what if I happen to agree with the ends being pursued? The fact that a company the size and power of Google is using their monopoly to exert any influence at all is worrying, regardless of whether I agree with the specific instances. But I kept my mouth shut.

Now I see Google abusing their monopoly again, this time with AMP. They may call the preferential treatment of Google-hosted AMP-formatted pages a “carrot”, but let’s be honest, it’s an abuse of power, plain and simple.

By the way, I have no doubt that the engineers working on AMP have the best of intentions. We are all pursuing the same ends. We all want a faster web. But we disagree on the means. If Google search results gave preferential treatment to any fast web pages, that would be fine. But by only giving preferential treatment to pages written in a format that they created, and hosted on their own servers, they are effectively forcing everyone to use AMP. I know for a fact that there are plenty of publications who are producing AMP content, not because they are sold on the benefits of the technology, but because they feel strong-armed into doing it in order to compete.

If the ends justify the means, then it’s easy to write off Google’s abuse of power. Those well-intentioned AMP engineers honestly think that they have the best interests of the web at heart:

We were worried about the web not existing anymore due to native apps and walled gardens killing it off. We wanted to make the web competitive. We saw a sense of urgency and thus we decided to build on the extensible web to build AMP instead of waiting for standards and browsers and websites to catch up. I stand behind this process. I’m a practical guy.

There’s real hubris and audacity in thinking that one company should be able to tackle fixing the whole web. I think the AMP team are genuinely upset and hurt that people aren’t cheering them on. Perhaps they will dismiss the criticisms as outpourings of “Why wasn’t I consulted?” But that would be a mistake. The many thoughtful people who are extremely critical of AMP are on the same side as the AMP team when it comes to the end-goal of better, faster websites. But burning the web to save it? No thanks.

Ben Thompson goes into more detail on the tension between the ends and the means in The Aggregator Paradox:

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sounds pretty appealing!

From that perspective, there’s a moral argument to be made for wielding monopoly power like Google is doing. No doubt the AMP team feel it would be morally wrong for Google not to use its influence in search to give preferential treatment to AMP pages.

Going back to the opening examples of online blackouts, was it morally wrong for companies to use their power to influence politics? Or would it have been morally wrong for them not to have used their influence?

When do the ends justify the means?

Here’s a more subtle example than Google AMP, but one which has me just as worried for the future of the web. Mozilla announced that any new web features they add to their browser will require HTTPS.

The end-goal here is one I agree with: HTTPS everywhere. On the face of it, the means of reaching that goal seem reasonable. After all, we already require HTTPS for sensitive JavaScript APIs like geolocation or service workers. But the devil is in the details:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

Emphasis mine.

This is a step too far. Again, I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement. In some ways, I think it might actually be worse.

There’s an assumption in this decision that websites are being made by professionals who will know how to switch to HTTPS. But the web is for everyone. Not just for everyone to use. It’s for everyone to build.

One of my greatest fears for the web is that building it becomes the domain of a professional priesthood. Anything that raises the bar to writing some HTML or CSS makes me very worried. Usually it’s toolchains that make things more complex, but in this case the barrier to entry is being brought right into the browser itself.

I’m trying to imagine future Codebar evenings, helping people to make their first websites, but now having to tell them that some CSS will be off-limits until they meet the entry requirements of HTTPS …even though CSS and HTTPS have literally nothing to do with one another. (And yes, there will be an exception for localhost and I really hope there’ll be an exception for file: as well, but that’s simply postponing the disappointment.)

No doubt Mozilla (and the W3C Technical Architecture Group) believe that they are doing the right thing. Perhaps they think it would be morally wrong if browsers didn’t enforce HTTPS even for unrelated features like new CSS properties. They believe that, in this particular case, the ends justify the means.

I strongly disagree. If you also disagree, I encourage you to make your voice heard. Remember, this isn’t about whether you think that we should all switch to HTTPS—we’re all in agreement on that. This is about whether it’s okay to create collateral damage by deliberately denying people access to web features in order to further a completely separate agenda.

This isn’t about you or me. This is about all those people who could potentially become makers of the web. We should be welcoming them, not creating barriers for them to overcome.

GDPR and Google Analytics

Enforcement of the European Union’s General Data Protection Regulation is coming very, very soon. Look busy. This regulation is not limited to companies based in the EU—it applies to any service anywhere in the world that can be used by citizens of the EU.

It’s less about data protection and more like a user’s bill of rights. That’s good. Cennydd has written a techie’s rough guide to GDPR.

The Open Data Institute’s Jeni Tennison wrote down her thoughts on how it could change data portability in particular. While she welcomes GDPR, she has some misgivings.

Blaine—who really needs to get a blog—shared his concerns in the form of the online equivalent of interpretive dance …a Twitter thread (it’s called a thread because it inevitably gets all tangled, and it’s easy to break).

The interesting thing about the so-called “cookie law” is that it makes no mention of cookies whatsoever. It doesn’t list any specific technology. Instead it states that any means of tracking or identifying users across websites requires disclosure. So if you’re setting a cookie just to manage state—so that users can log in, or keep items in a shopping basket—the legislation doesn’t apply. But as soon as your site allows a third-party to set a cookie, it’s banner time.

Google Analytics is a classic example of a third-party service that uses cookies to track people across domains. That’s pretty much why it exists. We, as site owners, get to use this incredibly powerful tool, and all we have to do in return is add one little snippet of JavaScript to our pages. In doing so, we’re allowing a third party to read or write a cookie from their domain.

Before Google Analytics, Google—the search engine business—was able to identify and track what users were searching for, and which search results they clicked on. But as soon as the user left google.com, the trail went cold. By creating an enormously useful analytics product that only required site owners to add a single line of JavaScript, Google—the online advertising business—gained the ability to keep track of users across most of the web, whether they were on a site owned by Google or not.

Under the old “cookie law”, using a third-party cookie-setting service like that meant you had to inform any of your users who were citizens of the EU. With GDPR, that changes. Now you have to get consent. A dismissible little overlay isn’t going to cut it any more. Implied consent isn’t enough.

Now this situation raises an interesting question. Who’s responsible for getting consent? Is it the site owner or the third party whose script is the conduit for the tracking?

In the first scenario, you’d need to wait for an explicit agreement from a visitor to your site before triggering the Google Analytics functionality. Suddenly it’s not as simple as adding a single line of JavaScript to your site.
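
Sketching that first scenario—and everything here (the consent form, the selector, the property ID) is a placeholder of my own, not a real consent API:

    // Don't load Google Analytics until the visitor explicitly agrees.
    function loadAnalytics() {
      // Set up the standard ga command queue so that calls made before
      // the script arrives are replayed once it loads.
      window.ga = window.ga || function () {
        (window.ga.q = window.ga.q || []).push(arguments);
      };
      window.ga('create', 'UA-XXXXX-Y', 'auto'); // placeholder property ID
      window.ga('send', 'pageview');
      var script = document.createElement('script');
      script.async = true;
      script.src = 'https://www.google-analytics.com/analytics.js';
      document.head.appendChild(script);
    }

    // Only fire tracking after an explicit opt-in.
    document.querySelector('#consent-form').addEventListener('submit', function (event) {
      event.preventDefault();
      loadAnalytics();
    });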

In the second scenario, you don’t do anything differently than before—you just add that single line of JavaScript. But now that script would need to launch the interface for getting consent before doing any tracking. Google Analytics would go from being something invisible to something that directly impacts the user experience of your site.

I’m just using Google Analytics as an example here because it’s so widespread. This also applies to third-party sharing buttons—Twitter, Facebook, etc.—and of course, advertising.

In the case of advertising, it gets even thornier because quite often, the site owner has no idea which third party is about to do the tracking. Many, many sites use intermediary services (y’know, ‘cause bloated ad scripts aren’t slowing down sites enough so let’s throw some just-in-time bidding into the mix too). You could get consent for the intermediary service, but not for the final advert—neither you nor your site’s user would have any idea what they were consenting to.

Interesting times. One way or another, a massive amount of the web—every website using Google Analytics, embedded YouTube videos, Facebook comments, embedded tweets, or third-party advertisements—will be liable under GDPR.

It’s almost as if the ubiquitous surveillance of people’s every move on the web wasn’t a very good idea in the first place.

Design ops for design systems

Leading Design was one of the best events I attended last year. To be honest, that surprised me—I wasn’t sure how relevant it would be to me, but it turned out to be the most on-the-nose conference I could’ve wished for.

Seeing as the event was all about design leadership, there was inevitably some talk of design ops. But I noticed that the term was being used in two different ways.

Sometimes a speaker would talk about design ops and mean “operations, specifically for designers.” That means all the usual office practicalities—equipment, furniture, software—that designers might need to do their jobs. For example, one of the speakers recommended having a dedicated design ops person rather than trying to juggle that yourself. That’s good advice, as long as you understand what’s meant by design ops in that context.

There’s another context of use for the phrase “design ops”, and it’s one that we use far more often at Clearleft. It’s related to design systems.

Now, “design system” is itself a term that can be ambiguous. See also “pattern library” and “style guide”. Quite a few people have had a stab at disambiguating those terms, and I think there’s general agreement—a design system is the overall big-picture “thing” that can contain a pattern library, and/or a style guide, and/or much more besides.

None of those great posts attempt to define design ops, and that’s totally fair, because they’re all attempting to define things—style guides, pattern libraries, and design systems—whereas design ops isn’t a thing, it’s a practice. But I do think that design ops follows on nicely from design systems. I think that design ops is the practice of adopting and using a design system.

There are plenty of posts out there about the challenges of getting people to use a design system, and while very few of them use the term design ops, I think that’s what all of them are about.

Clearly design systems and design ops are very closely related: you really can’t have one without the other. What I find interesting is that a lot of the challenges relating to design systems (and pattern libraries, and style guides) might be technical, whereas the challenges of design ops are almost entirely cultural.

I realise that tying design ops directly to design systems is somewhat limiting, and the truth is that design ops can encompass much more. I like Andy’s description:

Design Ops is essentially the practice of reducing operational inefficiencies in the design workflow through process and technological advancements.

Now, in theory, that can encompass any operational stuff—equipment, furniture, software—but in practice, when we’re dealing with design ops, 90% of the time it’s related to a design system. I guess I could use a whole new term (design systems ops?) but I think the term design ops works well …as long as everyone involved is clear on the kind of design ops we’re all talking about.

Needs must

I got a follow-up comment to my follow-up post about the follow-up comment I got on my original post about Google Analytics. Keep up.

I made the point that, from a front-end performance perspective, server logs have no impact whereas a JavaScript-based analytics solution must have some impact on the end user. Paul Anthony says:

Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.

Good point. Dropping one snippet of JavaScript into your front-end codebase is certainly an easier solution …easier for you, that is. The cost is passed on to your users. This is a classic example of where user needs and developer needs are in opposition. I’ve said it before and I’ll say it again:

Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.

The example of Google Analytics is pretty extreme, I’ll grant you. The cost to the user of adding that snippet of JavaScript—if you’ve configured things reasonably well—is pretty small (again, just from a performance perspective; there’s still the cost of allowing Google to track them across domains), and the cost to you of setting up a comparable analytics system based on server logs can indeed be disproportionately high. But this tension between user needs and developer needs is something I see play out again and again.

I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.

In Resilient Web Design, I documented the three-step approach I take when I’m building anything on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”

For example, suppose I were going to build a news website. The core functionality is fairly easy to identify: providing the news. Next comes the step where I choose the simplest possible technology. Now, if I were a developer who had plenty of experience building JavaScript-driven single page apps, I might conclude that the simplest route for me would be to render the news via JavaScript. But that would be a fragile starting point if I’m trying to reach as many people as possible (I might well end up building a swishy JavaScript-driven single page app in step three, but step two should almost certainly be good ol’ HTML).

Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.

I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.

There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.

But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.

That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.

Words I wrote in 2017

I wrote 78 blog posts in 2017. That works out at an average of six and a half blog posts per month. I’ll take it.

Here are some pieces of writing from 2017 that I’m relatively happy with:

Going Rogue. A look at the ethical questions raised by Rogue One.

In AMP we trust. My unease with Google’s AMP format was growing by the day.

A minority report on artificial intelligence. Revisiting two of Spielberg’s films after a decade and a half.

Progressing the web. I really don’t want progressive web apps to just try to imitate native apps. They can be so much more.

CSS. Simple, yes, but not easy.

Intolerable. A screed. I still get very, very angry when I think about how that manifestbro duped people.

Акула. Recounting a story told by a taxi driver.

Hooked and booked. Does A/B testing lead to dark patterns?

Ubiquity and consistency. Different approaches to building on the web.

I hope there’s something in there that you like. It’s always a nice bonus when other people like something I’ve written, but I write for myself first and foremost. Writing is how I figure out what I think. I will, of course, continue to write and publish on my website in 2018. I’d really like it if you did the same.