Unworn Pleasures

I’ve made no secret of my admiration for Jocelyn Bell Burnell, and how Peter Saville’s iconic cover design for Joy Division’s Unknown Pleasures always reminds me of her.

There are many, many memetic variations of that design.

Spaghetti, All Lined Up Quite Nicely. Furr Division. Depeche Mode, Boys Don't Cry. What is this? I’ve seen it on Tumblr.

I assumed that somebody somewhere at some time must have made a suitable tribute to the discoverer of those pulses, but I’ve never come across any Jocelyn-themed variation of the Joy Division album art.

So I made my own.

Jocelyn T-shirt.

The test order I did just showed up, and it’s looking pretty nice (although be warned that the sizes run small—I ordered a large, and I probably should’ve gone for extra large). If your music/radio-astronomy Venn diagram overlaps like mine, then you too might enjoy being the proud bearer of this wearable tribute to Dame Jocelyn Bell Burnell.

Design systems

Talking about scaling design can get very confusing very quickly. There are a bunch of terms that get thrown around: design systems, pattern libraries, style guides, and components.

The generally-accepted definition of a design system is that it’s the outer circle—it encompasses pattern libraries, style guides, and any other artefacts. But there’s something more. Just because you have a collection of design patterns doesn’t mean you have a design system. A system is a framework. It’s a rulebook. It’s what tells you how those patterns work together.

This is something that Cennydd mentioned recently:

Here’s my thing with the modularisation trend in design: where’s the gestalt?

In my mind, the design system is the gestalt. But Cennydd is absolutely right to express concern—I think a lot of people are collecting patterns and calling the resulting collection a design system. No. That’s a pattern library. You still need to have a framework for how to use those patterns.

I understand the urge to fixate on patterns. They’re small enough to be manageable, and they’re tangible—here’s a carousel; here’s a date-picker. But a design system is big and intangible.

Games are great examples of design systems. They’re frameworks. A game is a collection of rules and constraints. You can document those rules and constraints, but you can’t point to something and say, “That is football” or “That is chess” or “That is poker.”

Even though they consist entirely of rules and constraints, football, chess, and poker still produce an almost infinite possibility space. That’s quite overwhelming. So it’s easier for us to grasp instances of football, chess, and poker. We can point to a particular occurrence and say, “That is a game of football”, or “That is a chess match.”

But if you tried to figure out the rules of football, chess, or poker just from watching one particular instance of the game, you’d have your work cut out for you. It’s not impossible, but it is challenging.

Likewise, it’s not very useful to create a library of patterns without providing any framework for using those patterns.

I would go so far as to say that the actual code for the patterns is the least important part of a design system (or, certainly, it’s the part that should be most malleable and open to change). It’s more important that the patterns have been identified, named, described, and crucially, accompanied by some kind of guidance on usage.

I could easily imagine using a tool like Fractal to create a library of text snippets with no actual code. Those pieces of text—which provide information on where and when to use a pattern—could be more valuable than providing a snippet of code without any context.

One of the very first large-scale pattern libraries I can remember seeing on the web was Yahoo’s Design Pattern Library. Each pattern outlined

  1. the problem being solved;
  2. when to use this pattern;
  3. when not to use this pattern.

Only then, almost incidentally, did they link off to the code for that pattern. But it was entirely possible to use the system of patterns without ever using that code. The code was just one instance of the pattern. The important part was the framework that helped you understand when and where it was appropriate to use that pattern.

I think we lose sight of the real value of a design system when we focus too much on the components. The components are the trees. The design system is the forest. As Paul asked:

What methodologies might we uncover if we were to focus more on the relationships between components, rather than the components themselves?

Thanos

I’m going to discuss Avengers: Infinity War without spoilers, unless you count the motivations of the main villain as a spoiler, in which case you should stop reading now.

The most recent book by Charles C. Mann—author of 1491 and 1493—is called The Wizard And The Prophet. It profiles two twentieth century figures with divergent belief systems: Norman Borlaug and William Vogt. (Trust me, this will become relevant to the new Avengers film.)

I’ve long been fascinated by Norman Borlaug, father of the Green Revolution. It is quite possible that he is responsible for saving more lives than any other single human being in history (with the possible exception of Stanislav Petrov who may have saved the entire human race through inaction). In his book, Mann dubs Borlaug “The Wizard”—the epitome of a can-do attitude and a willingness to use technology to solve global problems.

William Vogt, by contrast, is “The Prophet.” His groundbreaking research crystallised many central tenets of the environmental movement, including the term he coined, carrying capacity—the upper limit to a population that an environment can sustain. Vogt’s stance is that there is no getting around the carrying capacity of our planet, so we need to make do with less: fewer people consuming fewer resources.

Those are the opposing belief systems. Prophets believe that carrying capacity is fixed and that if our species exceeds this limit, we are doomed. Wizards believe that technology can treat carrying capacity as damage and route around it.

Vogt’s philosophy came to dominate the environmental movement for the latter half of the twentieth century. It’s something I’ve personally found very frustrating. Groups and organisations that I nominally agree with—the Green Party, Greenpeace, etc.—have anti-technology baggage that doesn’t do them any favours. The uninformed opposition to GM foods is a perfect example. The unrealistic lauding of country life over the species-saving power of cities is another.

And yet history so far has favoured the wizards. The Malthusian population bomb never exploded, partly thanks to Borlaug’s work, but also thanks to better education for women in the developing world, which had enormously positive repercussions.

Anyway, I find this framing of fundamental differences in attitude to be fascinating. Ultimately it’s a stand-off between optimism (the wizards) and pessimism (the prophets). John Faithful Hamer uses this same lens to contrast recent works by Steven Pinker and Yuval Noah Harari. Pinker is a wizard. Harari is a prophet.

I was not expecting to be confronted with the wizards vs. prophets debate while watching Avengers: Infinity War, but there’s no getting around it—Thanos is a prophet.

Very early on, we learn that Thanos doesn’t want to destroy all life in the universe. Instead, he wants to destroy half of all life in the universe. Why? Carrying capacity. He believes the only way to save life is to reduce its number (and therefore its footprint).

Many reviews of the film have noted how the character of Thanos is strangely sympathetic. It’s no wonder! He is effectively toeing the traditional party line of the mainstream environmental movement.

There’s even a moment in the film where Thanos explains how he came to form his opinions through a tragedy in the past that he correctly predicted. “Congratulations”, says one of his heroic foes sarcastically, “You’re a prophet.”

Earlier in the film, as some of the heroes are meeting for the first time, there are gags and jokes referring to Dr. Strange’s group as “the wizards.”

I’m sure those are just coincidences.

2001 + 50

The first ten minutes of my talk at An Event Apart Seattle consisted of me geeking out about science fiction. There was a point to it …I think. But I must admit it felt quite self-indulgent to ramble to a captive audience about some of my favourite works of speculative fiction.

The meta-narrative I was driving at was around the perils of prediction (and how that’s not really what science fiction is about). This is something that Arthur C. Clarke pointed out repeatedly, most famously in Hazards of Prophecy. Ironically, I used Clarke’s Meisterwerk of a collaboration with Stanley Kubrick as a rare example of a predictive piece of sci-fi with a good hit rate.

When I introduced 2001: A Space Odyssey in my talk, I mentioned that it was fifty years old (making it even more of a staggering achievement, considering that humans hadn’t even reached the moon at that point). What I didn’t realise at the time was that it was fifty years old to the day. The film was released in American cinemas on April 2nd, 1968; I was giving my talk on April 2nd, 2018.

Over on Wired.com, Stephen Wolfram has written about his own personal relationship with the film. It’s a wide-ranging piece, covering everything from the typography of 2001 (see also: Typeset In The Future) right through to the nature of intelligence and our place in the universe.

When it comes to the technology depicted on-screen, he makes the same point that I was driving at in my talk—that, despite some successful extrapolations, certain real-world advances were not only unpredicted, but perhaps unpredictable. The mobile phone; the collapse of the Soviet Union …these are real-world events that are conspicuous by their absence in other great works of sci-fi like William Gibson’s brilliant Neuromancer.

But in his Wired piece, Wolfram also points out some acts of prediction that were so accurate that we don’t even notice them.

Also interesting in 2001 is that the Picturephone is a push-button phone, with exactly the same numeric button layout as today (though without the * and # [“octothorp”]). Push-button phones actually already existed in 1968, although they were not yet widely deployed.

To use the Picturephone in 2001, one inserts a credit card. Credit cards had existed for a while even in 1968, though they were not terribly widely used. The idea of automatically reading credit cards (say, using a magnetic stripe) had actually been developed in 1960, but it didn’t become common until the 1980s.

I’ve watched 2001 many, many, many times and I’m always looking out for details of the world-building …but it never occurred to me that push-button numeric keypads or credit cards were examples of predictive extrapolation. As time goes on, more and more of these little touches will become unnoticeable and unremarkable.

On the space shuttle (or, perhaps better, space plane) the cabin looks very much like a modern airplane—which probably isn’t surprising, because things like Boeing 737s already existed in 1968. But in a correct (at least for now) modern touch, the seat backs have TVs—controlled, of course, by a row of buttons.

Now I want to watch 2001: A Space Odyssey again. If I’m really lucky, I might get to see a 70mm print in a cinema near me this year.

Fit For Purpose: Making Sense of the New CSS by Eric Meyer

Time for even more CSS goodness at An Event Apart Seattle (Special Edition). Eric’s talk is called Fit For Purpose: Making Sense of the New CSS. Here are my notes…

Eric isn’t going to dive quite as deeply as Rachel, but he is going to share some patterns he has used.

Feature queries

First up: feature queries! Or @supports, if you prefer. You can ask a browser “do you support this feature?” If you haven’t used feature queries, you might be wondering why you have to say the property and the value. Well, think about it. If you asked a browser “do you support display?”, it’s not very useful. So you have to say “do you support display: grid?”

Here’s a nice pattern from Lea Verou for detecting support for custom properties:

@supports (--css: variables)

Here’s a gotcha:

@supports (clip-path: polygon())

That won’t work because polygon() is invalid. This will work:

@supports (clip-path: polygon(0 0))

So to use feature queries, you need to understand valid values for properties.

You can chain feature queries together, or just pick the least-supported thing you’re testing for and test just for that.

Here’s a pattern Eric used when he wanted to make text sideways, but only if grid is supported:

@supports (display: grid) {
    ...
    @supports (writing-mode: sideways-lr) {
        ...
    }
}

That’s functionally equivalent to:

@supports (display: grid) {
    ...
}
@supports (display: grid) and (writing-mode: sideways-lr) {
    ...
}

Choose whichever pattern makes sense to you. More to the point, choose the pattern that makes sense to your future self when you revisit your code.

Feature queries need to work together with media queries. Sometimes there are effects that you only want to apply on larger viewports. Do you put your feature queries inside your media queries? Or do you put your media queries inside your feature queries?

  • MOSS: Media Outside Support Statements
  • MISO: Media Inside Supports Object

Use MOSS when you have more media switches than support blocks. Use MISO when you only have a few breakpoints but lots of feature queries.
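As a rough sketch (my own invention, not from the talk; the breakpoint and selector are made up), the two orderings look like this—both are valid CSS, since conditional rules can nest either way:

/* MOSS: the media query wraps the supports statement */
@media (min-width: 40em) {
    @supports (display: grid) {
        .layout {
            display: grid;
        }
    }
}

/* MISO: the supports block wraps the media query */
@supports (display: grid) {
    @media (min-width: 40em) {
        .layout {
            display: grid;
        }
    }
}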

That’s one idea that Eric has. It’ll be interesting to see how this develops.

And remember, CSS is still CSS. Sometimes you don’t need a feature query at all. You could just use hanging-punctuation without testing for it. Browsers that don’t understand it will just ignore it. CSS has implicit feature queries built in. You don’t have to put your grid layout in a feature query, but you might want to put grid-specific margins and widths inside a feature query for display: grid.
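A minimal sketch of that idea (again my own, with invented names and values)—the grid declarations can sit in the open, because non-supporting browsers simply ignore them, while the grid-specific overrides are guarded explicitly:

.gallery {
    margin: 1em; /* fallback spacing for every browser */
    display: grid; /* safely ignored where grid is unsupported */
    grid-template-columns: repeat(3, 1fr);
}

@supports (display: grid) {
    .gallery {
        margin: 0; /* only remove the fallback spacing when grid applies */
    }
}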

Feature queries really help us get from now to the future.

Flexbox

Let’s move on to flexbox. Flexbox is great for things in a line.

On the An Event Apart site, the profile pictures have social media icons lined up at the bottom. Sometimes there are just a few. Sometimes there are a lot more. This is using flexbox. Why? Because it’s cool. Also, because it’s flexbox, you can create rules about how the icons should behave if one of the icons is taller than the others. (It’s gotten to the point that Eric has forgotten that vertically-centring things in CSS is supposed to be hard. The jokes aren’t funny any more.) Also, what if there’s no photo? Using flexbox, you can say “if there’s no photo, change the direction of the icons to be vertical.” Once again, it’s all about writing less CSS.

Also, note that the profile picture is being floated. That’s the right tool for the job. It feels almost transgressive to use float for exactly the purpose for which it was intended.

On the An Event Apart site, the header is currently using absolute positioning to pull the navigation from the bottom of the page source to the top of the viewport. But now you get overlap at some screen sizes. Flexbox would make it much more robust. (Eric uses the flexbox inspector in Firefox Nightly to demonstrate.)

With flexbox, what works horizontally works vertically. Flexbox allows you to align things, as long as you’re aligning in one direction. Flexbox makes things springy. Everything’s related and pushing against each other in a way that makes sense for this medium. It’s intuitive, even though it takes a bit of getting used to …because we’ve picked up some bad habits. To quote Yoda, “You must unlearn what you have learned.” A lot of the barrier is getting over what we’ve internalised. Eric envies the people starting out now. They get to start fresh. It’s like when people who never had to do table layouts see code from that time period: it (quite rightly) doesn’t make any sense. That’s what it’s going to be like when people starting out today see the float-based layouts from Bootstrap and the like.

Grid

That’s going to happen with grid too. We must unlearn what we have learned from twenty years of floats and positioning. What makes it worth it is:

  1. Flexbox and grid are pretty easy to get used to, and
  2. It’s amazing what you can do!

Eric quotes from an article called How We adopted CSS Grid at Scale:

…we agreed to use CSS Grid at the layout level and Flexbox at the component level (arranging child items of components). Although there’s some overlap and in some cases both could be used interchangeably, abiding by this rule helped us avoid any confusion in gray areas.

Don’t be afraid to set these kinds of arbitrary limits that aren’t technological, but are necessary for the team to work well together.

Eric hacked his WordPress admin interface to use grid instead of floats for an activity component (a list of dates and titles). He initially turned each list item into a separate grid. The overall list didn’t look right. What Eric really needed was a subgrid capability, so that the mini grids (the list items) would relate to one another within the larger grid (the list). But subgrid doesn’t exist yet.

In this case, there’s a way to fake it using display: contents. Eric made the list a grid and used display: contents on the list items. It’s as though you’re saying that the contents of the li are really the contents of the ul. That works in this particular case.
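A minimal sketch of the technique might look like this (the selectors and columns are my assumptions, not Eric’s actual code):

ul.activity {
    display: grid;
    grid-template-columns: max-content 1fr; /* date | title */
}

ul.activity li {
    display: contents; /* the li’s children become grid items of the ul */
}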

The feature queries for that looked like:

@supports (display: grid) {
    ...
    @supports (display: contents) {
        ...
    }
}

Eric is also using the grid “ASCII art” (named areas) technique on his personal site. This works independently of source order. For that reason, make sure your source order makes sense.

Using media queries, Eric defines entirely different layouts simply by using different ASCII art. He’s switching templates.
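Something along these lines (a hedged sketch with invented area names, not taken from Eric’s site):

body {
    display: grid;
    grid-template-areas:
        "header"
        "main"
        "sidebar"
        "footer";
}

@media (min-width: 60em) {
    body {
        grid-template-columns: 2fr 1fr;
        grid-template-areas:
            "header  header"
            "main    sidebar"
            "footer  footer";
    }
}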

For a proposed redesign of the An Event Apart site, Eric used CSS grid as a prototyping tool. He took a PDF, sliced it up, exported JPGs, and then used grid to lay out those images in a flexible grid. Rapid prototyping! The Firefox grid inspector really helps here. In less than an hour, he had a working layout. He could test whether the layout was sensible and robust. Then he swapped out the sliced images for real content. That took maybe another hour (mostly because it was faster to re-type the text than try to copy and paste from a PDF). CSS makes it that damn easy now!

So even if you’re not going to put things like grid into production, they can still be enormously useful as design tools (and you’re getting to grips with this new stuff).

Beyond Engagement: the Content Performance Quotient by Jeffrey Zeldman

I’m at An Event Apart Seattle (Special Edition). Jeffrey is kicking off the show with a presentation called Beyond Engagement: the Content Performance Quotient. I’m going to jot down some notes during this talk…

First, a story. Jeffrey went to college in Bloomington, Indiana. David Frost—the British journalist—came to talk to them. Frost had a busy schedule, and when he showed up, he seemed a little tipsy. He came up to the podium and said, “Good evening, Wilmington.”

Jeffrey remembers this. He also knows that Seattle and Portland have a bit of a rivalry, so he thought it would be funny to say “Good morning, Seattle!” the first time he spoke in Portland …and that was the last time he spoke in Portland.

Anyway …”Good morning, Portland!”

Jeffrey wants to talk about content. He spends a lot of time in meetings with stakeholders. Those stakeholders always want things to be better, and they always talk about “engagement.” It’s the number one stakeholder request. It’s a metric that makes stakeholders feel comfortable. It’s measurable—the more seconds people give us, the better.

But is that really the right metric?

There are some kinds of sites where engagement is definitely the right metric. Instagram, for example. That’s how they make money. You want to distract yourself. Also, if you have a big content site—beautifully art-directed and photographed—then engagement is what you want. You want people to spend a lot of time there. Or if you have a kids site, or a games site, or a reading site for kids, you want them to be engaged and spend time. A List Apart, too. It’s like the opposite of Stack Overflow, where you Google something and grab the piece of code you need and then get out. But for A List Apart or Smashing Magazine, you want people to read and think and spend their time. Engagement is what you want.

But for most sites—insurance, universities—engagement is not what you want. These sites are more like a customer service desk. You want to help the customer as quickly as possible. If a customer spends 30 minutes on our site, was she engaged …or frustrated? Was it the beautiful typography and copy …or because she couldn’t find what she wanted? If someone spends a long time on an ecommerce site, is it because the products are so good …or because search isn’t working well?

What we need is a metric called speed of usefulness. Jeffrey calls this Content Performance Quotient (CPQ) …because business people love three-letter initialisms. It’s a loose measurement: How quickly can you solve the customer’s problem? It’s the shortest distance between the problem and the solution. Put another way, it’s a measurement of your value to the customer. It’s a new way to evaluate success.

From the customer’s point of view, CPQ is the time it takes the customer to get the information she came for. From the organisation’s point of view, it’s the time it takes for a specific customer to find, receive, and absorb your most important content.

We’re all guilty of neglecting the basics on our sites—just what is it that we do? We need to remember that we’re all making stuff to make people’s lives easier. Otherwise we end up with what Jeffrey calls “pretty garbage.” It’s aesthetically coherent and visually well-designed …but if the content is wrong and doesn’t help anyone, it’s garbage. Garbage in a delightfully responsive grid is still garbage.

Let’s think of an example of where people really learned to cut back and really pare down their message. Advertising. In the 1950s, when the Leo Burnett agency started the Marlboro campaign, TV spots were 60 seconds long. An off-camera white man in a suit with a soothing voice would tell you all about the product while the visuals showed you what he was talking about. No irony. Marlboro did a commercial where there was no copy at all until the very end. For 60 seconds they showed you cowboys doing their rugged cowboy things. Men in the 1950s wanted to feel rugged, you see. Leo Burnett aimed the Marlboro cigarettes at those men. And at the end of the 60 second montage of rugged cowboys herding steers, they said “Come to where the flavour is. Come to Marlboro Country.” For the billboard, they cut it back even more. Just “Come to Marlboro Country.” In fact, they eventually went to just “Marlboro.” Jeffrey knows that this campaign worked well, because he started smoking Marlboros as a kid.

Leaving aside the ethical implications of selling cigarettes to eighth-graders, let’s think about the genius of those advertisers. Slash your architecture and shrink your content. Constantly ask yourself, “Why do we need this?”

As Jared Spool says, design is the rendering of intent. Every design is intentional. There is some intent—like engagement—driving our design. If there’s no intent behind the design, it will fail, even if what you’re doing is very good. If your design isn’t going somewhere, it’s going nowhere. You’ve got to stay ruthlessly focused on what the customer needs and “kill your darlings” as Hemingway said. Luke Wroblewski really brought this to light when he talked about Mobile First.

To paraphrase David Byrne, how did we get here?

Well, we prioritised meetings over meaning. Those meetings can be full of tension, with different stakeholders arguing over what should be on the homepage. And we tried to solve this by giving everyone what they want. But having a pleasant meeting doesn’t necessarily mean having a good meeting. We think of good meetings as conflict-free, where everyone emerges happy. But maybe there should be a conflict that gets resolved. Maybe there should be winners and losers.

Behold our mighty CMS! Anyone can add content to the website. Anyone can create the information architecture …because we want to make people happy in meetings. It’s easy to give everyone what they want. It’s harder to do the right thing. Harder for us, but better for the customer and the bottom line.

As Gerry McGovern says:

Great UX professionals are like whistleblowers. They are the voice of the user.

We need to stop designing 2001 sites for a 2018 web.

One example of cutting down content was highlighted in A List Apart where web design was compared to chess: The King vs. Pawn Game of UI Design. Don’t start by going through all the rules. Teach them in context. Teach chess by starting with a checkmate move, reduced down to just three pieces on the board. From there, begin building out. Start with the most important information, and build out from there.

When you strip down the game to its core, everything you learn is a universal principle.

Another example is atomic design: focus relentlessly on the individual interaction. We do it for shopping carts. We can do it for content.

Another example on A List Apart is No More FAQs: Create Purposeful Information for a More Effective User Experience. FAQ problems include:

  • duplicate and contradictory information,
  • lack of discernible content order,
  • repetitive grammatical structure,
  • increased cognitive load, and
  • more content than users need.

Users come to any type of content with a particular purpose in mind, ranging from highly specific (task completion) to general learning (increased knowledge).

The important word there is purpose. We need to eliminate distraction. How do we do that?

One way is the waterfall method. Do a massive content inventory. It’s not recommended (unless maybe you’re doing a massive redesign).

Agile and scrum is another way. Constantly iterate on content. Little by little over time, we make the product better. It’s the best bet if you work in-house.

If you work in an agency, a redesign is an opportunity to start fresh. Take everything off the table and start from scratch. Jeffrey’s friend Fred Gates got an assignment to redesign an online gaming platform for kids to teach them reading and management skills. The organisation didn’t have much money so they said, let’s just do the homepage. Fred challenged himself to put the whole thing on the homepage. The homepage tells the whole story. Jeffrey is using this same method on a site for an insurance company, even though the client has a bigger budget and can afford more than just the homepage. The point is, what Fred did was effective.

So this is what Jeffrey is going to be testing and working on: speed of usefulness.

And for those of you who do need to use engagement as the right metric, Jeffrey covered the two kinds of metrics in an article called We need design that is faster and design that is slower.

For example, “scannability” is good for transactions (CPQ), but bad for thoughtful content (engagement). Our news designs need to slow down the user. Bigger type, typographic hierarchy, and more whitespace. Art direction. Shout out to Derek Powazek, who designed Fray.com—each piece was designed based on the content. These days, look at what David Sleight and his crew are doing over at ProPublica.

Who’s doing it right?

The Washington Post, The New York Times, ProPublica, Slate, Smashing Magazine, and Vox are all doing this well in different ways. They’re bringing content to the fore.

Readability, Medium, and A List Apart are all using big type to encourage thoughtful reading and engagement.

But for other sites …apply the Content Performance Quotient.

A workshop on building for resilience

In February, I tried out a new workshop twice—once at Webstock in New Zealand, and once in Hong Kong.

The workshop is called The Progressive Web: Building for Resilience. Here’s an excerpt from the blurb:

This workshop will show you how to think in a progressive way that works with the grain of the web. Together we’ll peel back the layers of the web and build upwards, creating experiences that work for everyone while making the best of cutting-edge browser technologies. From URL design to Progressive Web Apps, this journey will cover each stage of technological advancement.

Basically, it’s the workshop version of Resilient Web Design. If that book is the theory, this workshop is the practice.

Tim recently posted his tips for running workshops and there’s a lot in there that resonates with me. Like Tim, I’ve become less and less reliant on slides. In fact, this workshop—like my workshop on evaluating technology—has no slides. Instead it’s all about the exercises and going with the flow.

After starting with a warm-up, I canvass the room to see if there are any specific topics, tools or technologies that people are particularly interested in covering. I’ll note those (on post-its slapped on the wall) for reference throughout the day, to try to make sure that those particular things are touched on at some point. Then I start with a thought experiment…

First of all, I get everyone to call out websites, services and apps that they use almost every day: Twitter, Facebook, Gmail, Slack, Google Docs, and so on. Those all get documented on the wall. Then it’s time to ask of each product, “What is the core functionality?” The idea here is to get beneath the surface-level verbs like swiping, tapping and dragging to get to the real purpose of a service: buying, selling, sharing, reading, writing, collaborating, and so on.

At this point I inform the attendees that the year is 1995. And now we’re going to build these services using the technology of this time. This is a playful way of getting answers to the question “What’s the simplest technology to enable the core functionality?” It’s mostly forms, links, and lots of heavy lifting on the server.

Then the real fun begins. “Enhance!” Moving forward in time, we get to add styles, we add interactivity with JavaScript, then Ajax, and then we get to really have fun with technologies like web sockets, geolocation, local storage, right the way up to service workers, notifications, and background sync. And the beauty of it all is that, if any of those technologies aren’t supported in a particular browser or device, the core functionality is still available.

Next, we apply this layered mindset to a new service. I split the attendees into groups, and each of them gets a procedurally-generated startup idea …generated by shuffling some cards. This is an exercise I first tried when I was teaching in Porto:

I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

The first few exercises are good creative fun: come up with a name, then a logo, then a business model. Then it’s time to build. It starts with URL design. Then it’s content prioritisation (for a representative URL). Then it’s layout (sketching!). The enhancements have begun. “How might this URL benefit from Ajax?” “How might this URL benefit from geolocation?” “How might this URL benefit from offline storage?” “How might this URL benefit from a service worker?”

Photos: workshop teams 1–4.

At this point, we’ve applied the layered, progressive approach at the scale of an entire service, and at the scale of an individual URL. Finally, we apply the same approach at the level of a component. It might be a navigation, or a carousel, or an interactive widget. In each case, the same process applies: “What’s the core functionality? What’s the simplest technology to enable that functionality? Enhance!”

Along the way, there are plenty of rabbit holes we can go down. Whether it’s accessibility, or progressive web apps, or pattern libraries, I go along with whatever people are curious about. But all of it ties back to the progressive, layered mindset I’m hoping to foster.

By the end of the day, I’m hoping that an attendee has one of two reactions:

  1. “What a waste of time! Everything in that workshop was blindingly obvious!” (in which case, excellent!—they’re already thinking in a progressive way), or
  2. “That workshop has completely changed the way I think about building on the web!” (I’m being hyperbolic here, but at the very least I’m hoping to impart a new perspective).

Having given the workshop a few times, I’m really pleased with how it went (and more important, I’m pleased that people enjoyed it). If this sounds like something that your company or team would enjoy, get in touch and we can take it from there.

Offline itineraries with service workers

The Trivago website is a progressive web app. That means it

  1. is served over HTTPS,
  2. has a web app manifest JSON file, and it
  3. has a service worker script.

The service worker provides an opportunity for a nice bit of fun branding—if you lose your internet connection, the site provides a neat little maze game you can play. Cute!

That’s a fairly simple example of how service workers can enhance the user experience when the dreaded offline situation arises. But it strikes me that the travel industry is the perfect place to imagine other opportunities for offline enhancements.

Travel sites often provide itineraries—think airlines, trains, or hotels. The itineraries consist of places, times, and contact information. This is exactly the kind of information that you might find yourself trying to retrieve in an emergency situation, like maybe in a cab on the way to the airport or train station. Perhaps you’re stuck in traffic, in a tunnel. Or maybe you don’t have a data plan for the country you’re currently in. Either way, wouldn’t it be great if you could hit the website for your airline or hotel and get your itinerary, even if you’re offline?

Alright, let’s think this through…

Let’s assume that an individual itinerary has its own URL. That URL is a web page of information, mostly text, with perhaps an image or two (like a map). Now when you make your booking, let’s have the service worker cache that URL (and its assets) for offline access.

Hmm …but there’s a good chance that the device you make the booking on is not the same device that you’d have with you out and about. Because caches are local to the browser, that’s a problem.

Okay, but most of these kinds of sites have some kind of log-in mechanism. So we could update the log-in flow a bit: when a user logs in, check to see if they have any itineraries assigned to them, and if they do, fire off an event to the service worker (using postMessage) to cache the URLs of the itineraries.
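Here’s a speculative sketch of that flow; the URLs, cache name, and message format are all invented for illustration:

// In the page, after a successful log-in:
if (navigator.serviceWorker && navigator.serviceWorker.controller) {
    navigator.serviceWorker.controller.postMessage({
        action: 'cacheItineraries',
        urls: ['/itineraries/12345/', '/itineraries/67890/']
    });
}

// In the service worker script:
self.addEventListener('message', function (event) {
    if (event.data.action === 'cacheItineraries') {
        event.waitUntil(
            caches.open('itineraries')
            .then(function (cache) {
                return cache.addAll(event.data.urls);
            })
        );
    }
});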

Now that the itineraries are cached, the final step is to create a custom offline page. As well as the usual “Sorry, the internet’s down” message, we can say “Sorry, the internet’s down …but here are your itineraries”. (This is kind of like the pattern you see on blogs like mine, Ethan’s, or Mike’s—a custom offline page that lists cached URLs of articles you’ve previously visited).
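In the service worker, the fallback for failed navigations might be sketched like this (assuming the offline page lives at an imagined URL like /offline/ and was cached when the service worker was installed):

self.addEventListener('fetch', function (event) {
    if (event.request.mode === 'navigate') {
        event.respondWith(
            fetch(event.request)
            .catch(function () {
                // Offline: serve the cached page if we have it,
                // otherwise fall back to the custom offline page
                return caches.match(event.request)
                .then(function (response) {
                    return response || caches.match('/offline/');
                });
            })
        );
    }
});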

That’s just one pattern off the top of my head. It’s fun to imagine the different ways that service workers could be used to enhance the experience of just about any site, but they seem particularly relevant to travel sites—dodgy internet connections and travelling go hand-in-hand. At Clearleft, we’ve been working with quite a few travel-related clients lately so that’s why these scenarios are on my mind: booking holidays, flights, and so on. But, as I’ve said before and I’ll say again, every website can benefit from becoming a progressive web app.

Ends and means

The latest edition of the excellent History Of The Web newsletter is called The Day(s) The Web Fought Back. It recounts the first time that websites stood up against bad legislation in the form of the Communications Decency Act (CDA), and goes on to describe the even more effective use of blackout protests against SOPA and PIPA.

I remember feeling very heartened to see Wikipedia, Google and others take a stand on January 18th, 2012. But I also remember feeling uneasy. In this particular case, companies were lobbying for a cause I agreed with. But what if they were lobbying for a cause I didn’t agree with? Large corporations using their power to influence politics seems like a very bad idea. Isn’t it still a bad idea, even if I happen to agree with the cause?

Cloudflare quite rightly kicked The Daily Stormer off their roster of customers. Then the CEO of Cloudflare quite rightly wrote this in a company-wide memo:

Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

There’s an uncomfortable tension here. When do the ends justify the means? Isn’t the whole point of having principles that they hold true even in the direst circumstances? Why even claim that corporations shouldn’t influence politics if you’re going to make an exception for net neutrality? Why even claim that free speech is sacrosanct if you make an exception for nazi scum?

Those two examples are pretty extreme and I can easily justify the exceptions to myself. Net neutrality is too important. Stopping fascism is too important. But where do I draw the line? At what point does something become “too important?”

There are more subtle examples of corporations wielding their power. Google are constantly using their monopoly position in search and browser marketshare to exert influence over website-builders. In theory, that’s bad. But in practice, I find myself agreeing with specific instances. Prioritising mobile-friendly sites? Sounds good to me. Penalising intrusive ads? Again, that seems okey-dokey to me. But surely that’s not the point. So what if I happen to agree with the ends being pursued? The fact that a company the size and power of Google is using their monopoly for any influence is worrying, regardless of whether I agree with the specific instances. But I kept my mouth shut.

Now I see Google abusing their monopoly again, this time with AMP. They may call the preferential treatment of Google-hosted AMP-formatted pages a “carrot”, but let’s be honest, it’s an abuse of power, plain and simple.

By the way, I have no doubt that the engineers working on AMP have the best of intentions. We are all pursuing the same ends. We all want a faster web. But we disagree on the means. If Google search results gave preferential treatment to any fast web pages, that would be fine. But by only giving preferential treatment to pages written in a format that they created, and hosted on their own servers, they are effectively forcing everyone to use AMP. I know for a fact that there are plenty of publications who are producing AMP content, not because they are sold on the benefits of the technology, but because they feel strong-armed into doing it in order to compete.

If the ends justify the means, then it’s easy to write off Google’s abuse of power. Those well-intentioned AMP engineers honestly think that they have the best interests of the web at heart:

We were worried about the web not existing anymore due to native apps and walled gardens killing it off. We wanted to make the web competitive. We saw a sense of urgency and thus we decided to build on the extensible web to build AMP instead of waiting for standard and browsers and websites to catch up. I stand behind this process. I’m a practical guy.

There’s real hubris and audacity in thinking that one company should be able to tackle fixing the whole web. I think the AMP team are genuinely upset and hurt that people aren’t cheering them on. Perhaps they will dismiss the criticisms as outpourings of “Why wasn’t I consulted?” But that would be a mistake. The many thoughtful people who are extremely critical of AMP are on the same side as the AMP team when it comes to the end-goal of better, faster websites. But burning the web to save it? No thanks.

Ben Thompson goes into more detail on the tension between the ends and the means in The Aggregator Paradox:

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sounds pretty appealing!

From that perspective, there’s a moral argument to be made for wielding monopoly power like Google is doing. No doubt the AMP team feel it would be morally wrong for Google not to use its influence in search to give preferential treatment to AMP pages.

Going back to the opening examples of online blackouts, was it morally wrong for companies to use their power to influence politics? Or would it have been morally wrong for them not to have used their influence?

When do the ends justify the means?

Here’s a more subtle example than Google AMP, but one which has me just as worried for the future of the web. Mozilla announced that any new web features they add to their browser will require HTTPS.

The end-goal here is one I agree with: HTTPS everywhere. On the face of it, the means of reaching that goal seem reasonable. After all, we already require HTTPS for sensitive JavaScript APIs like geolocation or service workers. But the devil is in the details:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

Emphasis mine.

This is a step too far. Again, I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement. In some ways, I think it might actually be worse.

There’s an assumption in this decision that websites are being made by professionals who will know how to switch to HTTPS. But the web is for everyone. Not just for everyone to use. It’s for everyone to build.

One of my greatest fears for the web is that building it becomes the domain of a professional priesthood. Anything that raises the bar to writing some HTML or CSS makes me very worried. Usually it’s toolchains that make things more complex, but in this case the barrier to entry is being brought right into the browser itself.

I’m trying to imagine future Codebar evenings, helping people to make their first websites, but now having to tell them that some CSS will be off-limits until they meet the entry requirements of HTTPS …even though CSS and HTTPS have literally nothing to do with one another. (And yes, there will be an exception for localhost and I really hope there’ll be an exception for file: as well, but that’s simply postponing the disappointment.)

No doubt Mozilla (and the W3C Technical Architecture Group) believe that they are doing the right thing. Perhaps they think it would be morally wrong if browsers didn’t enforce HTTPS even for unrelated features like new CSS properties. They believe that, in this particular case, the ends justify the means.

I strongly disagree. If you also disagree, I encourage you to make your voice heard. Remember, this isn’t about whether you think that we should all switch to HTTPS—we’re all in agreement on that. This is about whether it’s okay to create collateral damage by deliberately denying people access to web features in order to further a completely separate agenda.

This isn’t about you or me. This is about all those people who could potentially become makers of the web. We should be welcoming them, not creating barriers for them to overcome.

GDPR and Google Analytics

Enforcement of the European Union’s General Data Protection Regulation is coming very, very soon. Look busy. This regulation is not limited to companies based in the EU—it applies to any service anywhere in the world that can be used by citizens of the EU.

It’s less about data protection and more like a user’s bill of rights. That’s good. Cennydd has written a techie’s rough guide to GDPR.

The Open Data Institute’s Jeni Tennison wrote down her thoughts on how it could change data portability in particular. While she welcomes GDPR, she has some misgivings.

Blaine—who really needs to get a blog—shared his concerns in the form of the online equivalent of interpretive dance …a Twitter thread (it’s called a thread because it inevitably gets all tangled, and it’s easy to break).

The interesting thing about the so-called “cookie law” is that it makes no mention of cookies whatsoever. It doesn’t list any specific technology. Instead it states that any means of tracking or identifying users across websites requires disclosure. So if you’re setting a cookie just to manage state—so that users can log in, or keep items in a shopping basket—the legislation doesn’t apply. But as soon as your site allows a third-party to set a cookie, it’s banner time.

Google Analytics is a classic example of a third-party service that uses cookies to track people across domains. That’s pretty much why it exists. We, as site owners, get to use this incredibly powerful tool, and all we have to do in return is add one little snippet of JavaScript to our pages. In doing so, we’re allowing a third party to read or write a cookie from their domain.

Before Google Analytics, Google—the search engine business—was able to identify and track what users were searching for, and which search results they clicked on. But as soon as the user left google.com, the trail went cold. By creating an enormously useful analytics product that only required site owners to add a single line of JavaScript, Google—the online advertising business—gained the ability to keep track of users across most of the web, whether they were on a site owned by Google or not.

Under the old “cookie law”, using a third-party cookie-setting service like that meant you had to inform any of your users who were citizens of the EU. With GDPR, that changes. Now you have to get consent. A dismissible little overlay isn’t going to cut it any more. Implied consent isn’t enough.

Now this situation raises an interesting question. Who’s responsible for getting consent? Is it the site owner or the third party whose script is the conduit for the tracking?

In the first scenario, you’d need to wait for an explicit agreement from a visitor to your site before triggering the Google Analytics functionality. Suddenly it’s not as simple as adding a single line of JavaScript to your site.
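As a hypothetical sketch, the tracking code would have to be wrapped in some kind of consent check; getConsent() here is an imagined function that resolves once the visitor has made a choice:

getConsent().then(function (consented) {
    if (!consented) return; // no consent, no tracking
    // The standard analytics.js stub, only run after consent:
    window.ga = window.ga || function () {
        (ga.q = ga.q || []).push(arguments);
    };
    ga.l = +new Date();
    ga('create', 'UA-XXXXX-Y', 'auto'); // placeholder property ID
    ga('send', 'pageview');
    var script = document.createElement('script');
    script.async = true;
    script.src = 'https://www.google-analytics.com/analytics.js';
    document.head.appendChild(script);
});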

In the second scenario, you don’t do anything differently than before—you just add that single line of JavaScript. But now that script would need to launch the interface for getting consent before doing any tracking. Google Analytics would go from being something invisible to something that directly impacts the user experience of your site.

I’m just using Google Analytics as an example here because it’s so widespread. This also applies to third-party sharing buttons—Twitter, Facebook, etc.—and of course, advertising.

In the case of advertising, it gets even thornier because quite often, the site owner has no idea which third party is about to do the tracking. Many, many sites use intermediary services (y’know, ‘cause bloated ad scripts aren’t slowing down sites enough so let’s throw some just-in-time bidding into the mix too). You could get consent for the intermediary service, but not for the final advert—neither you nor your site’s user would have any idea what they were consenting to.

Interesting times. One way or another, a massive amount of the web—every website using Google Analytics, embedded YouTube videos, Facebook comments, embedded tweets, or third-party advertisements—will be liable under GDPR.

It’s almost as if the ubiquitous surveillance of people’s every move on the web wasn’t a very good idea in the first place.

Design ops for design systems

Leading Design was one of the best events I attended last year. To be honest, that surprised me—I wasn’t sure how relevant it would be to me, but it turned out to be the most on-the-nose conference I could’ve wished for.

Seeing as the event was all about design leadership, there was inevitably some talk of design ops. But I noticed that the term was being used in two different ways.

Sometimes a speaker would talk about design ops and mean “operations, specifically for designers.” That means all the usual office practicalities—equipment, furniture, software—that designers might need to do their jobs. For example, one of the speakers recommended having a dedicated design ops person rather than trying to juggle that yourself. That’s good advice, as long as you understand what’s meant by design ops in that context.

There’s another context of use for the phrase “design ops”, and it’s one that we use far more often at Clearleft. It’s related to design systems.

Now, “design system” is itself a term that can be ambiguous. See also “pattern library” and “style guide”. Quite a few people have had a stab at disambiguating those terms, and I think there’s general agreement—a design system is the overall big-picture “thing” that can contain a pattern library, and/or a style guide, and/or much more besides.

None of those great posts attempt to define design ops, and that’s totally fair, because they’re all attempting to define things—style guides, pattern libraries, and design systems—whereas design ops isn’t a thing, it’s a practice. But I do think that design ops follows on nicely from design systems. I think that design ops is the practice of adopting and using a design system.

There are plenty of posts out there about the challenges of getting people to use a design system, and while very few of them use the term design ops, I think that’s what all of them are about.

Clearly design systems and design ops are very closely related: you really can’t have one without the other. What I find interesting is that a lot of the challenges relating to design systems (and pattern libraries, and style guides) might be technical, whereas the challenges of design ops are almost entirely cultural.

I realise that tying design ops directly to design systems is somewhat limiting, and the truth is that design ops can encompass much more. I like Andy’s description:

Design Ops is essentially the practice of reducing operational inefficiencies in the design workflow through process and technological advancements.

Now, in theory, that can encompass any operational stuff—equipment, furniture, software—but in practice, when we’re dealing with design ops, 90% of the time it’s related to a design system. I guess I could use a whole new term (design systems ops?) but I think the term design ops works well …as long as everyone involved is clear on the kind of design ops we’re all talking about.

Needs must

I got a follow-up comment to my follow-up post about the follow-up comment I got on my original post about Google Analytics. Keep up.

I made the point that, from a front-end performance perspective, server logs have no impact whereas a JavaScript-based analytics solution must have some impact on the end user. Paul Anthony says:

Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.

Good point. Dropping one snippet of JavaScript into your front-end codebase is certainly an easier solution …easier for you, that is. The cost is passed on to your users. This is a classic example of where user needs and developer needs are in opposition. I’ve said it before and I’ll say it again:

Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.

The example of Google Analytics is pretty extreme, I’ll grant you. The cost to the user of adding that snippet of JavaScript—if you’ve configured things reasonably well—is pretty small (again, just from a performance perspective; there’s still the cost of allowing Google to track them across domains), and the cost to you of setting up a comparable analytics system based on server logs can indeed be disproportionately high. But this tension between user needs and developer needs is something I see play out again and again.

I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.

In Resilient Web Design, I documented the three-step approach I take when I’m building anything on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”

For example, suppose I were going to build a news website. The core functionality is fairly easy to identify: providing the news. Next comes the step where I choose the simplest possible technology. Now, if I were a developer who had plenty of experience building JavaScript-driven single page apps, I might conclude that the simplest route for me would be to render the news via JavaScript. But that would be a fragile starting point if I’m trying to reach as many people as possible (I might well end up building a swishy JavaScript-driven single page app in step three, but step two should almost certainly be good ol’ HTML).

Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.

I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.

There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.

But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.

That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.

Words I wrote in 2017

I wrote 78 blog posts in 2017. That works out at an average of six and a half blog posts per month. I’ll take it.

Here are some pieces of writing from 2017 that I’m relatively happy with:

Going Rogue. A look at the ethical questions raised by Rogue One.

In AMP we trust. My unease with Google’s AMP format was growing by the day.

A minority report on artificial intelligence. Revisiting two of Spielberg’s films after a decade and a half.

Progressing the web. I really don’t want progressive web apps to just try to imitate native apps. They can be so much more.

CSS. Simple, yes, but not easy.

Intolerable. A screed. I still get very, very angry when I think about how that manifestbro duped people.

Акула. Recounting a story told by a taxi driver.

Hooked and booked. Does A/B testing lead to dark patterns?

Ubiquity and consistency. Different approaches to building on the web.

I hope there’s something in there that you like. It’s always a nice bonus when other people like something I’ve written, but I write for myself first and foremost. Writing is how I figure out what I think. I will, of course, continue to write and publish on my website in 2018. I’d really like it if you did the same.

Audio I listened to in 2017

I huffduffed 290 pieces of audio in 2017. I’ve still got a bit of a backlog of items I haven’t listened to yet, but I thought I’d share some of my favourite items from the past year. Here are twelve pieces of audio, one for each month of 2017…

Donald Hoffman’s TED talk, Do we see reality as it really is?. TED talks are supposed to blow your mind, right? (22:15)

How to Become Batman on Invisibilia. Alix Spiegel and Lulu Miller challenge you to think of blindness as a social construct. Hear ‘em out. (58:02)

Where to find what’s disappeared online, and a whole lot more: the Internet Archive on Public Radio International. I just love hearing Brewster Kahle’s enthusiasm and excitement. (42:43)

Every Tuesday At Nine on Irish Music Stories. I’ve been really enjoying Shannon Heaton’s podcast this year. This one digs into that certain something that happens at an Irish music session. (40:50)

Adam Buxton talks to Brian Eno (part two is here). A fun and interesting chat about Brian Eno’s life and work. (53:10 and 46:35)

Nick Cave and Warren Ellis on Kreative Kontrol. This was far more revealing than I expected: genuine and unpretentious. (57:07)

Paul Lloyd at Patterns Day. All the talks at Patterns Day were brilliant. Paul’s talk really stuck with me. (28:21)

James Gleick on Time Travel at The Long Now. There were so many great talks from The Long Now’s seminars on long-term thinking. Nicky Case and Jennifer Pahlka were standouts too. (1:20:31)

Long Distance on Reply All. It all starts with a simple phone call. (47:27)

The King of Tears on Revisionist History. Malcolm Gladwell’s style suits podcasting very well. I liked this episode about country songwriter Bobby Braddock. Related: Jon’s Troika episode on tearjerkers. (42:14)

Feet on the Ground, Eyes on the Stars: The True Story of a Real Rocket Man with G.A. “Jim” Ogle. This was easily my favourite podcast episode of 2017. It’s on the User Defenders podcast but it’s not about UX. Instead, host Jason Ogle interviews his father, a rocket scientist who worked on everything from Apollo to every space shuttle mission. His story is fascinating. (2:38:21)

R.E.M. on Song Exploder. Breaking down the song Try Not To Breathe from Automatic For The People. (16:15)

I’ve gone back and added the tag “2017roundup” to each of these items. So if you’d like to subscribe to a podcast of just these episodes, here are the links:

The Last Jedi

If you haven’t seen The Last Jedi (yet), please stop reading. Spoilers ahoy.

I’ve been listening to many, many podcast episodes about the latest Star Wars film. They’re all here on Huffduffer. You can subscribe to a feed of just those episodes if you want.

I am well aware that the last thing anybody wants or needs is one more hot take on this film, but what the heck? I figured I’d jot down my somewhat simplistic thoughts.

I loved it.

But I wasn’t sure at first. I’ve talked to other people who felt similarly on first viewing—they weren’t sure if they liked it or not. I know some people who, on reflection, decided they definitely didn’t like it. I completely understand that.

A second viewing helped to cement my positive feelings towards this film. This is starting to become a trend: I didn’t think much of Rogue One on first viewing, but a second watch reversed my opinion completely. Maybe I just find it hard to really get into the flow when I’m seeing a new Star Wars film for the very first time—an event that I once thought would never occur again.

My first viewing of The Last Jedi wasn’t helped by having the worst seats in the house. My original plan was to see it with Jessica at a minute past midnight in The Duke Of York’s in Brighton. I bought front-row tickets as soon as they were available. But then it turned out that we were going to be in Seattle at that time instead. We quickly grabbed whatever tickets were left. Those seats were right at the front and far edge of the cinema, so the screen was more trapezoidal than rectangular. The lights went down, the fanfare blared, and the opening crawl began its march up …and to the left. My brain tried to compensate for the perspective effects but it was hard. Is Snoke’s face supposed to look like that? Does that person really have such a tiny head?

But while the spectacle was somewhat marred, the story unfolded in all its surprising delight. I thoroughly enjoyed the feeling of having the narrative rug repeatedly pulled out from under me.

I loved the unexpected end of Snoke in his vampiric boudoir. Let’s face it, he was the least interesting part of The Force Awakens—a two-dimensional evil mastermind. To despatch him in the middle of the middle chapter was the biggest signal that The Last Jedi was not simply going to retread the beats of the original trilogy.

I loved the reveal of Rey’s parentage. This was what I had been hoping for—that Rey came from nowhere in particular. After The Force Awakens, I wrote:

Personally, I’d like it if her parentage were unremarkable. Maybe it’s the socialist in me, but I’ve never liked the idea that the Force is based on eugenics; a genetic form of inherited wealth for the lucky 1%. I prefer to think of the Force as something that could potentially be unlocked by anyone who tries hard enough.

But I had resigned myself to the inevitable reveal that would tie her heritage into an existing lineage. What an absolute joy, then, that The Force is finally returned into everyone’s hands! Anil Dash describes this wonderfully in his post Every Last Jedi:

Though it’s well-grounded in the first definitions of The Force that we were introduced to in the original trilogy, The Last Jedi presents a radically inclusive new view of the Force that is bigger and broader than the Jedi religion which has thus-far colored our view of the entire Star Wars universe.

I was less keen on the sudden Force usage by Leia. I think it was the execution more than the idea that bothered me. Still, I realise that the problem lies just as much with me. See, lots of the criticism of this film comes from people (justifiably) saying “That’s not how The Force works!” in relation to Rey, Kylo Ren, or Luke Skywalker. I don’t share that reaction and I want to say, “Hey, who are we to decide how The Force works?”, but then during the Leia near-death scene, I found myself more or less thinking “That’s not how The Force works!”

This would be a good time to remind ourselves that, in the Star Wars universe, you can substitute the words “The Plot” for “The Force”—an invisible agency guiding actions and changing the course of events.

The first time I saw The Last Jedi, I began to really worry during the film’s climactic showdown. I wasn’t so much worried for the fate of the characters in peril; I was worried for the fate of the overarching narrative. When Luke showed up, my heart sank a little. A deus ex machina …and how did he get here exactly? And then when he emerges unscathed from a barrage of walker cannon fire, I thought “Aw no, they’ve changed the Jedi to be like superheroes …but that’s not the way The Force/Plot works!”

And then I had the rug pulled out from under me again. Yes! What a joyous bit of trickery! My faith in The Force/Plot was restored.

I know a lot of people didn’t like the Canto Bight diversion. Jessica described it as being quite prequel-y, and I can see that. And while I agree that any shot involving our heroes riding across the screen (on a Fathier, on a scout walker) just didn’t work, I liked the world-expanding scope of the caper subplot.

Still, I preferred the Galactica-like war of attrition, with the Resistance steadily reduced in size as they try to escape the relentless pursuit of the First Order. It felt like proper space opera. In some ways, it reminded me of Alastair Reynolds but without the realism of the laws of physics (there’s nothing quite as egregious here as J.J. Abrams’ cosy galaxy where the destruction of a system can be seen in real time from the surface of another planet, but The Last Jedi showed again that Star Wars remains firmly in the space fantasy genre rather than hard sci-fi).

Oh, and of course I loved the porgs. But then, I never had a problem with ewoks, so treat my appraisal with a pinch of salt.

I loved seeing the west coast of Ireland get so much screen time. Beehive huts in a Star Wars film! Mind you, that made it harder for me to get immersed in the story. I kept thinking, “Now, is that Skellig Michael? Or is it on the Dingle peninsula? Or Donegal? Or west Clare?”

For all its global success, Star Wars has always had a very personal relationship with everyone it touches. The films themselves are only part of the reason why people respond to them. The other part is what people bring with them; where they are in life at the moment they’re introduced to this world. And frankly, the films are only part of this symbiosis. As much as people like to sneer at the toys and merchandising as a cheap consumerist ploy, they played a significant part in unlocking my imagination. Growing up in a small town on the coast of Ireland, the Star Wars universe—the world, the characters—was a playground for me to make up stories …just as it was for any young child anywhere in the world.

One of my favourite shots in The Last Jedi looks like it could’ve come from the mind of that young child: an X-wing submerged in the waters of the rocky coast of Ireland. It was as though Rian Johnson had a direct line to my childhood self.

And yet, I think the reason why The Last Jedi works so well is that Rian Johnson makes no concessions to my childhood, or anyone else’s. This is his film. Of all the millions of us who were transported by this universe as children, only he gets to put his story onto the screen and into the saga. There are two ways to react to this. You can quite correctly exclaim “That’s not how I would do it!”, or you can go with it …even if that means letting go of some deeply-held feelings about what could’ve, should’ve, would’ve happened if it were our story.

That said, I completely understand why people might take against this film. Like I said, Rian Johnson makes no concessions. That’s in stark contrast to The Force Awakens. I wrote at the time:

Han Solo picked up the audience like it was a child that had fallen asleep in the car, and he gently tucked us into our familiar childhood room where we can continue to dream. And then, with a tender brush of his hand across the cheek, he left us.

The Last Jedi, on the other hand, thrusts us into this new narrative in the same way you might teach someone to swim by throwing them into the ocean from the peak of Skellig Michael. The polarised reactions to the film are from people sinking or swimming.

I choose to swim. To go with it. To let go. To let the past die.

And yet, one of my favourite takeaways from The Last Jedi is how it offers a healthy approach to dealing with events from the past. Y’see, there was always something that bothered me in the original trilogy. It was one of Yoda’s gnomic pronouncements in The Empire Strikes Back:

Try not. Do. Or do not. There is no try.

That always struck me as a very bro-ish “crushing it” approach to life. That’s why I was delighted that Rian Johnson had Yoda himself refute that attitude completely:

The greatest teacher, failure is.

That’s exactly what Luke needed to hear. It was also what I—many decades removed from my childhood—needed to hear.

Ubiquity and consistency

I keep thinking about this post from Baldur Bjarnason, Over-engineering is under-engineering. It took me a while to get my head around what he was saying, but now that (I think) I understand it, I find it to be very astute.

Let’s take a single interface element, say, a dropdown menu. This is the example Laura uses in her article for 24 Ways called Accessibility Through Semantic HTML. You’ve got two choices, broadly speaking:

  1. Use the HTML select element.
  2. Create your own dropdown widget using JavaScript (working with divs and spans).

The advantage of the first choice is that it’s lightweight, it works everywhere, and the browser does all the hard work for you.

But…

You don’t get complete control. Because the browser is doing the heavy lifting, you can’t craft the details of the dropdown to look identical on different browser/OS combinations.

That’s where the second option comes in. By scripting your own dropdown, you get complete control over the appearance and behaviour of the widget. The disadvantage is that the browser is no longer doing the heavy lifting for you—now it’s up to you to do all the work: lots of JavaScript, thinking about edge cases, and making the whole thing accessible.
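
To get a feel for the size of that gap, compare the two choices side by side. This is only a sketch (the label and options are made up), but notice how much of the second version remains to be done:

  <!-- Choice one: the native element. The browser provides keyboard
       support, focus handling, accessibility, and platform-appropriate
       rendering for free. -->
  <label for="flavour">Flavour</label>
  <select id="flavour" name="flavour">
    <option>Vanilla</option>
    <option>Chocolate</option>
  </select>

  <!-- Choice two: roll your own. The markup is only the beginning;
       everything the browser was doing is now your responsibility. -->
  <div class="dropdown" role="listbox" aria-label="Flavour" tabindex="0">
    <div role="option">Vanilla</div>
    <div role="option">Chocolate</div>
  </div>
  <script>
    // Still to do: opening and closing, arrow-key navigation, type-ahead,
    // aria-selected state, focus management, touch support, form
    // submission, and all the edge cases the browser had already solved.
  </script>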

This is the point that Baldur makes: no matter how much you over-engineer your own custom solution, there’ll always be something that falls between the cracks. So, ironically, the over-engineered solution—when compared to the simple under-engineered native browser solution—ends up being under-engineered.

Is it worth it? Rian Rietveld asks:

It is impossible to style select option. But is that really necessary? Is it worth abandoning the native browser behavior for a complete rewrite in JavaScript of the functionality?

The answer, as ever, is it depends. It depends on your priorities. If your priority is having consistent control over the details, then foregoing native browser functionality in favour of scripting everything yourself aligns with your goals.

But I’m reminded of something that Eric often says:

The web does not value consistency. The web values ubiquity.

Ubiquity; universality; accessibility—however you want to label it, it’s what lies at the heart of the World Wide Web. It’s the idea that anyone should be able to access a resource, regardless of technical or personal constraints. It’s an admirable goal, and what’s even more admirable is that the web succeeds in this goal! But sometimes something’s gotta give, and that something is control. Rian again:

The days that a website must be pixel perfect and must look the same in every browser are over. There are so many devices these days, that an identical design for all is not doable. Or we must take a huge effort for custom form elements design.

So far I’ve only been looking at the micro scale of a single interface element, but this tension between ubiquity and consistency plays out at larger scales too. Take page navigations. That’s literally what browsers do. Click on a link, and the browser fetches that URL, displaying progress as it goes. The alternative, as exemplified by single page apps, is to do all of that for yourself using JavaScript: figure out the routing, show some kind of progress, load some JSON, parse it, convert it into HTML, and update the DOM.

Personally, I tend to go for the first option. Partly that’s because I like to apply the rule of least power, but mostly it’s because I’m very lazy (I also have qualms about sending a whole lotta JavaScript down the wire just so the end user gets to do something that their browser would do for them anyway). But I get it. I understand why others might wish for greater control, even if it comes with a price tag of fragility.

I think Jake’s navigation transitions proposal is fascinating. What if there were a browser-native way to get more control over how page navigations happen? I reckon that would cover the justification of 90% of single page apps.

That’s a great way of examining these kinds of decisions and questioning how this tension could be resolved. If people are frustrated by the lack of control in browser-native navigations, let’s figure out a way to give them more control. If people are frustrated by the lack of styling for select elements, maybe we should figure out a way of giving them more control over styling.

Hang on though. I feel like I’ve painted a divisive picture, like you have to make a choice between ubiquity or consistency. But the rather wonderful truth is that, on the web, you can have your cake and eat it. That’s what I was getting at with the three-step approach I describe in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Like, say…

  1. The user needs to select an item from a list of options.
  2. Use a select element.
  3. Use JavaScript to replace that native element with a widget of your own devising.

Or…

  1. The user needs to navigate to another page.
  2. Use an a element with an href attribute.
  3. Use JavaScript to intercept that click, add a nice transition, and pull in the content using Ajax.
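
Here’s roughly what that third step could look like. This is a sketch rather than production code: it assumes both pages share a main element, and if any part of the enhancement fails, the link still works as a plain old link:

  <a href="/about">About</a>

  <script>
    document.addEventListener('click', function (event) {
      var link = event.target.closest('a');
      if (!link || link.origin !== location.origin) return;
      event.preventDefault();
      fetch(link.href)
        .then(function (response) { return response.text(); })
        .then(function (html) {
          var doc = new DOMParser().parseFromString(html, 'text/html');
          // Swap out the main element (assumed to exist on both pages).
          document.querySelector('main').replaceWith(doc.querySelector('main'));
          history.pushState({}, '', link.href);
        })
        .catch(function () {
          // If anything goes wrong, fall back to a full page load.
          window.location.href = link.href;
        });
    });
  </script>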

The pushback I get from people in the control/consistency camp is that this sounds like more work. It kinda is. But honestly, in my experience, it’s not that much more work. Also, I realise I’m contradicting the part where I said I’m lazy, but that’s why it’s called work. This is our job. It’s not about what we prefer; it’s about serving the needs of the people who use what we build.

Anyway, if I were to rephrase my three-step process in terms of under-engineering and over-engineering, it might look something like this:

  1. Start with user needs.
  2. Build an under-engineered solution—one that might not offer you much control, but that works for everyone.
  3. Layer on a more over-engineered solution—one that might not work for everyone, but that offers you more control.

Ubiquity, then consistency.

Nosediving

Nosedive is the first episode of season three of Black Mirror.

It’s fairly light-hearted by the standards of Black Mirror, but all the more chilling for that. It depicts a dystopia where people rate one another for points that unlock preferential treatment. It’s like a twisted version of the whuffie from Cory Doctorow’s Down And Out In The Magic Kingdom. Cory himself points out that reputation economies are a terrible idea.

Nosedive has become a handy shortcut for pointing to the dangers of social media (in the same way that Minority Report was a handy shortcut for gestural interfaces and Her is a handy shortcut for voice interfaces).

“Social media is bad, m’kay?” is an understandable but, I think, fairly shallow reading of Nosedive. The problem isn’t with the apps, it’s with the system. A world in which we desperately need to keep our score up if we want to have any hope of advancing? That’s a nightmare scenario.

The thing is …that system exists today. Credit scores are literally a means of applying a numeric value to human beings.

Nosedive depicts a world where your score determines which seats you get in a restaurant, or which model of car you can rent. Meanwhile, in our world, your score determines whether or not you can get a mortgage.

Nosedive depicts a world in which you know your own score. Meanwhile, in our world, good luck with that:

It is very difficult for a consumer to know in advance whether they have a high enough credit score to be accepted for credit with a given lender. This situation is due to the complexity and structure of credit scoring, which differs from one lender to another.

Lenders need not reveal their credit score head, nor need they reveal the minimum credit score required for the applicant to be accepted. Owing only to this lack of information to the consumer, it is impossible for him or her to know in advance if they will pass a lender’s credit scoring requirements.

Black Mirror has a good track record of exposing what’s unsavoury about our current time and place. On the surface, Nosedive seems to be an exposé on the dangers of going too far with the presentation of self in everyday life. Scratch a little deeper though, and it reveals an even more uncomfortable truth: that we’re living in a world driven by systems even worse than what’s depicted in this dystopia.

How about this for a nightmare scenario:

Two years ago Douglas Rushkoff had an unpleasant encounter outside his Brooklyn home. Taking out the rubbish on Christmas Eve, he was mugged — held at knife-point by an assailant who took his money, his phone and his bank cards. Shaken, he went back indoors and sent an email to his local residents’ group to warn them about what had happened.

“I got two emails back within the hour,” he says. “Not from people asking if I was OK, but complaining that I’d posted the exact spot where the mugging had taken place — because it might adversely affect their property values.”

Getaway

It had been a while since we had a movie night at Clearleft so I organised one for last night. We usually manage to get through two movies, and there’s always a unifying theme decided ahead of time.

For last night, I decided that the broad theme would be …transport. But then, through voting on Slack, people could decide what the specific mode of transport would be. The choices were:

  • taxi,
  • getaway car,
  • truck, or
  • submarine.

Nobody voted for submarines. That’s a shame, but in retrospect it’s easy to understand—submarine films aren’t about transport at all. Quite the opposite. Submarine films are about being trapped in a metal womb/tomb (and many’s the spaceship film that qualifies as a submarine movie).

There were some votes for taxis and trucks, but the getaway car was the winner. I then revealed which films had been pre-selected for each mode of transport.

Taxi

Getaway car

Shorts: Getaway Driver, The Getaway

Truck

Submarine

I thought Baby Driver would be a shoo-in for the first film, but enough people had already seen it quite recently to put it out of the running. We watched Wheelman instead, which was like Locke meets Drive.

So what would the second film be?

Well, some of those films in the full list could potentially fall into more than one category. The taxi in Collateral is (kinda) being used as a getaway car. And if you expand the criterion to getaway vehicle, then Furiosa’s war rig surely counts, right?

Okay, we were just looking for an excuse to watch Fury Road again. I mean, c’mon, it was the black and chrome edition! I had the great fortune of seeing that on the big screen a while back and I’ve been raving about it ever since. Besides, you really don’t need an excuse to rewatch Fury Road. I loved it the first time I saw it, and it just keeps getting better and better each time. The editing! The sound! The world-building!

With every viewing, it feels more and more like the film for our time. It may have been a bit of a stretch to watch it under the thematic umbrella of getaway vehicles, but it’s a getaway for our current political climate: instead of the typical plot involving a gang driving at full tilt from a bank heist, imagine one where the gang turns around, ousts the bankers, and replaces the whole banking system with a matriarchal community.

“Hope is a mistake”, Max mansplains (maxplains?) to Furiosa at one point. He’s wrong. Judicious hope is what drives us forward (or, in this case, back …to the citadel). Watching Fury Road again, I drew hope from the character of Nux. An alt-warboy in thrall to a demagogue and raised on a diet of fake news (Valhalla! V8!) can not only be turned by tenderness, he can become an ally to those working for a better world.

Witness!

The meaning of AMP

Ethan quite rightly points out some semantic sleight of hand by Google’s AMP team:

But when I hear AMP described as an open, community-led project, it strikes me as incredibly problematic, and more than a little troubling. AMP is, I think, best described as nominally open-source. It’s a corporate-led product initiative built with, and distributed on, open web technologies.

But so what, right? Tom-ay-to, tom-a-to. Well, here’s a pernicious example of where it matters: in a recent announcement of their intent to ship a new addition to HTML, the Google Chrome team cited the mood of the web development community thusly:

Web developers: Positive (AMP team indicated desire to start using the attribute)

If AMP were actually the product of working web developers, this justification would make sense. As it is, we’ve got one team at Google citing the preference of another team at Google but representing it as the will of the people.

This is just one example of AMP’s sneaky marketing where some finely-shaved semantics allows them to appear far more reasonable than they actually are.

At AMP Conf, the Google Search team were at pains to repeat over and over that AMP pages wouldn’t get any preferential treatment in search results …but they appear in a carousel above the search results. Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

The same semantic nit-picking can be found in their defence of caching. See, they’ve even got me calling it caching! It’s hosting. If I click on a search result, and I am taken to a page that has a URL beginning with https://www.google.com/amp/s/... then that page is being hosted on the domain google.com. That is literally what hosting means. Now, you might argue that the original version was hosted on a different domain, but the version that the user gets sent to is the Google copy. You can call it caching if you like, but you can’t tell me that Google aren’t hosting AMP pages.

That’s a particularly low blow, because it’s such a bait’n’switch. One of the reasons why AMP first appeared to be different to Facebook Instant Articles or Apple News was the promise that you could host your AMP pages yourself. That’s the very reason I first got interested in AMP. But if you actually want the benefits of AMP—appearing in the not-search-results carousel, pre-rendered performance, etc.—then your pages must be hosted by Google.

So, to summarise, here are three statements that Google’s AMP team are currently peddling as being true:

  1. AMP is a community project, not a Google project.
  2. AMP pages don’t receive preferential treatment in search results.
  3. AMP pages are hosted on your own domain.

I don’t think those statements are even truthy, much less true. In fact, if I were looking for the right term to semantically describe any one of those statements, the closest in meaning would be this:

A statement used intentionally for the purpose of deception.

That is the dictionary definition of a lie.

Update: That last part was a bit much. Sorry about that. I know it’s a bit much because The Register got all gloaty about it.

I don’t think the developers working on the AMP format are intentionally deceptive (although they are engaging in some impressive cognitive gymnastics). The AMP ecosystem, on the other hand, that’s another story—the preferential treatment of Google-hosted AMP pages in the carousel and in search results; that’s messed up.

Still, I would do well to remember that there are well-meaning people working on even the fishiest of projects.

Except for the people working at the shitrag that is The Register.

(The other strong signal that I overstepped the bounds of decency was that this post attracted the pond scum of Hacker News. That’s another place where the “well-meaning people work on even the fishiest of projects” rule definitely doesn’t apply.)

Pattern Libraries, Performance, and Progressive Web Apps

Ever since its founding in 2005, Clearleft has been laser-focused on user experience design.

But we’ve always maintained a strong front-end development arm. The front-end development work at Clearleft is always in service of design. Over the years we’ve built up a wealth of expertise on using HTML, CSS, and JavaScript to make better user experiences.

Recently we’ve been doing a lot of strategic design work—the really in-depth long-term engagements that begin with research and continue through to design consultancy and collaboration. That means we’ve got availability for front-end development work. Whether it’s consultancy or production work you’re looking for, this could be a good opportunity for us to work together.

There are three particular areas of front-end expertise we’re obsessed with…

Pattern Libraries

We caught the design systems bug years ago, way back when Natalie started pioneering pattern libraries as our primary deliverable (or pattern portfolios, as we called them then). This approach has proven effective time and time again. We’ve spent years now refining our workflow and thinking around modular design. Fractal is the natural expression of this obsession. Danielle and Mark have been working flat-out on version 2. They’re very eager to share everything they’ve learned along the way …and help others put together solid pattern libraries.


Performance

Thinking about it, it’s no surprise that we’re crazy about performance at Clearleft. Like I said, our focus is on user experience, and when it comes to user experience on the web, nothing but nothing is more important than performance. The good news is that the majority of performance fixes can be done on the front end—images, scripts, fonts …it’s remarkable how much difference a good front-end overhaul can make to the bottom line. That’s what Graham has been obsessing over.


Progressive Web Apps

Over the years I’ve found myself getting swept up in exciting new technologies on the web. When Clearleft first formed, my head was deep into DOM Scripting and Ajax. Half a decade later it was HTML5. Now it’s service workers. I honestly think it’s a technology that could be as revolutionary as Ajax or HTML5 (maybe I should write a book to that effect).

I’ve been talking about service workers at conferences this year, and I can’t hide my excitement:

There’s endless possibilities of what you can do with this technology. It’s very powerful.

Combine a service worker with a web app manifest and you’ve got yourself a Progressive Web App. It’s not just a great marketing term—it’s an opportunity for the web to truly excel at delivering the kind of user experiences previously only associated with native apps.
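
The starting point is surprisingly small. Here’s a minimal sketch (the file names are placeholders); all the interesting decisions then happen inside the service worker script itself:

  <link rel="manifest" href="/manifest.json">
  <script>
    // Register the service worker if the browser supports it;
    // browsers that don't will simply carry on as before.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/serviceworker.js');
    }
  </script>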


I’m very very keen to work with companies and organisations that want to harness the power of service workers and Progressive Web Apps. If that’s you, get in touch.

Whether it’s pattern libraries, performance, or Progressive Web Apps, we’ve got the skills and expertise to share with you.