GDPR and Google Analytics

Enforcement of the European Union’s General Data Protection Regulation is coming very, very soon. Look busy. This regulation is not limited to companies based in the EU—it applies to any service anywhere in the world that can be used by citizens of the EU.

It’s less about data protection and more like a user’s bill of rights. That’s good. Cennydd has written a techie’s rough guide to GDPR.

The Open Data Institute’s Jeni Tennison wrote down her thoughts on how it could change data portability in particular. While she welcomes GDPR, she has some misgivings.

Blaine—who really needs to get a blog—shared his concerns in the form of the online equivalent of interpretive dance …a Twitter thread (it’s called a thread because it inevitably gets all tangled, and it’s easy to break).

The interesting thing about the so-called “cookie law” is that it makes no mention of cookies whatsoever. It doesn’t list any specific technology. Instead it states that any means of tracking or identifying users across websites requires disclosure. So if you’re setting a cookie just to manage state—so that users can log in, or keep items in a shopping basket—the legislation doesn’t apply. But as soon as your site allows a third-party to set a cookie, it’s banner time.

Google Analytics is a classic example of a third-party service that uses cookies to track people across domains. That’s pretty much why it exists. We, as site owners, get to use this incredibly powerful tool, and all we have to do in return is add one little snippet of JavaScript to our pages. In doing so, we’re allowing a third party to read or write a cookie from their domain.
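For reference, the snippet in question really is tiny—something along these lines (the property ID is a placeholder; every site gets its own):

    // Google's alternative async tracking snippet, more or less:
    window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments); };
    ga.l = +new Date();
    ga('create', 'UA-XXXXX-Y', 'auto');   // placeholder property ID
    ga('send', 'pageview');
    // …followed by a script element loading https://www.google-analytics.com/analytics.js

That’s all it takes for every page view to be reported back to Google’s servers.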

Before Google Analytics, Google—the search engine business—was able to identify and track what users were searching for, and which search results they clicked on. But as soon as the user left google.com, the trail went cold. By creating an enormously useful analytics product that only required site owners to add a single line of JavaScript, Google—the online advertising business—gained the ability to keep track of users across most of the web, whether they were on a site owned by Google or not.

Under the old “cookie law”, using a third-party cookie-setting service like that meant you had to inform any of your users who were citizens of the EU. With GDPR, that changes. Now you have to get consent. A dismissible little overlay isn’t going to cut it any more. Implied consent isn’t enough.

Now this situation raises an interesting question. Who’s responsible for getting consent? Is it the site owner or the third party whose script is the conduit for the tracking?

In the first scenario, you’d need to wait for an explicit agreement from a visitor to your site before triggering the Google Analytics functionality. Suddenly it’s not as simple as adding a single line of JavaScript to your site.

In the second scenario, you don’t do anything differently than before—you just add that single line of JavaScript. But now that script would need to launch the interface for getting consent before doing any tracking. Google Analytics would go from being something invisible to something that directly impacts the user experience of your site.
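Either way, the principle is the same: no tracking until there’s consent. A rough sketch of the first scenario might look like this—note that the consent button and its markup are entirely my own invention for illustration:

    // A sketch, not a drop-in solution: hold off on loading analytics.js
    // until the visitor explicitly opts in.
    document.querySelector('#consent-button').addEventListener('click', function () {
      window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments); };
      ga.l = +new Date();
      ga('create', 'UA-XXXXX-Y', 'auto');   // placeholder property ID
      ga('send', 'pageview');
      var script = document.createElement('script');
      script.async = true;
      script.src = 'https://www.google-analytics.com/analytics.js';
      document.head.appendChild(script);
    });

In the second scenario, it would be Google’s own script doing something equivalent before initialising itself.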

I’m just using Google Analytics as an example here because it’s so widespread. This also applies to third-party sharing buttons—Twitter, Facebook, etc.—and of course, advertising.

In the case of advertising, it gets even thornier because quite often, the site owner has no idea which third party is about to do the tracking. Many, many sites use intermediary services (y’know, ‘cause bloated ad scripts aren’t slowing down sites enough so let’s throw some just-in-time bidding into the mix too). You could get consent for the intermediary service, but not for the final advert—neither you nor your site’s user would have any idea what they were consenting to.

Interesting times. One way or another, a massive amount of the web—every website using Google Analytics, embedded YouTube videos, Facebook comments, embedded tweets, or third-party advertisements—will be liable under GDPR.

It’s almost as if the ubiquitous surveillance of people’s every move on the web wasn’t a very good idea in the first place.

Design ops for design systems

Leading Design was one of the best events I attended last year. To be honest, that surprised me—I wasn’t sure how relevant it would be to me, but it turned out to be the most on-the-nose conference I could’ve wished for.

Seeing as the event was all about design leadership, there was inevitably some talk of design ops. But I noticed that the term was being used in two different ways.

Sometimes a speaker would talk about design ops and mean “operations, specifically for designers.” That means all the usual office practicalities—equipment, furniture, software—that designers might need to do their jobs. For example, one of the speakers recommended having a dedicated design ops person rather than trying to juggle that yourself. That’s good advice, as long as you understand what’s meant by design ops in that context.

There’s another context of use for the phrase “design ops”, and it’s one that we use far more often at Clearleft. It’s related to design systems.

Now, “design system” is itself a term that can be ambiguous. See also “pattern library” and “style guide”. Quite a few people have had a stab at disambiguating those terms, and I think there’s general agreement—a design system is the overall big-picture “thing” that can contain a pattern library, and/or a style guide, and/or much more besides.

None of those great posts attempt to define design ops, and that’s totally fair, because they’re all attempting to define things—style guides, pattern libraries, and design systems—whereas design ops isn’t a thing, it’s a practice. But I do think that design ops follows on nicely from design systems. I think that design ops is the practice of adopting and using a design system.

There are plenty of posts out there about the challenges of getting people to use a design system, and while very few of them use the term design ops, I think that’s what all of them are about.

Clearly design systems and design ops are very closely related: you really can’t have one without the other. What I find interesting is that a lot of the challenges relating to design systems (and pattern libraries, and style guides) might be technical, whereas the challenges of design ops are almost entirely cultural.

I realise that tying design ops directly to design systems is somewhat limiting, and the truth is that design ops can encompass much more. I like Andy’s description:

Design Ops is essentially the practice of reducing operational inefficiencies in the design workflow through process and technological advancements.

Now, in theory, that can encompass any operational stuff—equipment, furniture, software—but in practice, when we’re dealing with design ops, 90% of the time it’s related to a design system. I guess I could use a whole new term (design systems ops?) but I think the term design ops works well …as long as everyone involved is clear on the kind of design ops we’re all talking about.

Needs must

I got a follow-up comment to my follow-up post about the follow-up comment I got on my original post about Google Analytics. Keep up.

I made the point that, from a front-end performance perspective, server logs have no impact whereas a JavaScript-based analytics solution must have some impact on the end user. Paul Anthony says:

Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.

Good point. Dropping one snippet of JavaScript into your front-end codebase is certainly an easier solution …easier for you, that is. The cost is passed on to your users. This is a classic example of where user needs and developer needs are in opposition. I’ve said it before and I’ll say it again:

Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.

The example of Google Analytics is pretty extreme, I’ll grant you. The cost to the user of adding that snippet of JavaScript—if you’ve configured things reasonably well—is pretty small (again, just from a performance perspective; there’s still the cost of allowing Google to track them across domains), and the cost to you of setting up a comparable analytics system based on server logs can indeed be disproportionately high. But this tension between user needs and developer needs is something I see play out again and again.

I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.

In Resilient Web Design, I documented the three-step approach I take when I’m building anything on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”

For example, suppose I were going to build a news website. The core functionality is fairly easy to identify: providing the news. Next comes the step where I choose the simplest possible technology. Now, if I were a developer who had plenty of experience building JavaScript-driven single page apps, I might conclude that the simplest route for me would be to render the news via JavaScript. But that would be a fragile starting point if I’m trying to reach as many people as possible (I might well end up building a swishy JavaScript-driven single page app in step three, but step two should almost certainly be good ol’ HTML).

Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.

I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.

There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.

But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.

That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.

Words I wrote in 2017

I wrote 78 blog posts in 2017. That works out at an average of six and a half blog posts per month. I’ll take it.

Here are some pieces of writing from 2017 that I’m relatively happy with:

Going Rogue. A look at the ethical questions raised by Rogue One.

In AMP we trust. My unease with Google’s AMP format was growing by the day.

A minority report on artificial intelligence. Revisiting two of Spielberg’s films after a decade and a half.

Progressing the web. I really don’t want progressive web apps to just try to imitate native apps. They can be so much more.

CSS. Simple, yes, but not easy.

Intolerable. A screed. I still get very, very angry when I think about how that manifestbro duped people.

Акула. Recounting a story told by a taxi driver.

Hooked and booked. Does A/B testing lead to dark patterns?

Ubiquity and consistency. Different approaches to building on the web.

I hope there’s something in there that you like. It’s always a nice bonus when other people like something I’ve written, but I write for myself first and foremost. Writing is how I figure out what I think. I will, of course, continue to write and publish on my website in 2018. I’d really like it if you did the same.

Audio I listened to in 2017

I huffduffed 290 pieces of audio in 2017. I’ve still got a bit of a backlog of items I haven’t listened to yet, but I thought I’d share some of my favourite items from the past year. Here are twelve pieces of audio, one for each month of 2017…

Donald Hoffman’s TED talk, Do we see reality as it really is?. TED talks are supposed to blow your mind, right? (22:15)

How to Become Batman on Invisibilia. Alix Spiegel and Lulu Miller challenge you to think of blindness as a social construct. Hear ‘em out. (58:02)

Where to find what’s disappeared online, and a whole lot more: the Internet Archive on Public Radio International. I just love hearing Brewster Kahle’s enthusiasm and excitement. (42:43)

Every Tuesday At Nine on Irish Music Stories. I’ve been really enjoying Shannon Heaton’s podcast this year. This one digs into that certain something that happens at an Irish music session. (40:50)

Adam Buxton talks to Brian Eno (part two is here). A fun and interesting chat about Brian Eno’s life and work. (53:10 and 46:35)

Nick Cave and Warren Ellis on Kreative Kontrol. This was far more revealing than I expected: genuine and unpretentious. (57:07)

Paul Lloyd at Patterns Day. All the talks at Patterns Day were brilliant. Paul’s talk really stuck with me. (28:21)

James Gleick on Time Travel at The Long Now. There were so many great talks from The Long Now’s seminars on long-term thinking. Nicky Case and Jennifer Pahlka were standouts too. (1:20:31)

Long Distance on Reply All. It all starts with a simple phone call. (47:27)

The King of Tears on Revisionist History. Malcolm Gladwell’s style suits podcasting very well. I liked this episode about country songwriter Bobby Braddock. Related: Jon’s Troika episode on tearjerkers. (42:14)

Feet on the Ground, Eyes on the Stars: The True Story of a Real Rocket Man with G.A. “Jim” Ogle. This was easily my favourite podcast episode of 2017. It’s on the User Defenders podcast but it’s not about UX. Instead, host Jason Ogle interviews his father, a rocket scientist who worked on everything from Apollo to every space shuttle mission. His story is fascinating. (2:38:21)

R.E.M. on Song Exploder. Breaking down the song Try Not To Breathe from Automatic For The People. (16:15)

I’ve gone back and added the tag “2017roundup” to each of these items, so you can subscribe to a podcast feed of just these episodes.

The Last Jedi

If you haven’t seen The Last Jedi (yet), please stop reading. Spoilers ahoy.

I’ve been listening to many, many podcast episodes about the latest Star Wars film. They’re all here on Huffduffer. You can subscribe to a feed of just those episodes if you want.

I am well aware that the last thing anybody wants or needs is one more hot take on this film, but what the heck? I figured I’d jot down my somewhat simplistic thoughts.

I loved it.

But I wasn’t sure at first. I’ve talked to other people who felt similarly on first viewing—they weren’t sure if they liked it or not. I know some people who, on reflection, decided they definitely didn’t like it. I completely understand that.

A second viewing helped to cement my positive feelings towards this film. This is starting to become a trend: I didn’t think much of Rogue One on first viewing, but a second watch reversed my opinion completely. Maybe I just find it hard to really get into the flow when I’m seeing a new Star Wars film for the very first time—an event that I once thought would never occur again.

My first viewing of The Last Jedi wasn’t helped by having the worst seats in the house. My original plan was to see it with Jessica at a minute past midnight in The Duke Of York’s in Brighton. I bought front-row tickets as soon as they were available. But then it turned out that we were going to be in Seattle at that time instead. We quickly grabbed whatever tickets were left. Those seats were right at the front and far edge of the cinema, so the screen was more trapezoid than rectangular. The lights went down, the fanfare blared, and the opening crawl began its march up …and to the left. My brain tried to compensate for the perspective effects but it was hard. Is Snoke’s face supposed to look like that? Does that person really have such a tiny head?

But while the spectacle was somewhat marred, the story unfolded in all its surprising delight. I thoroughly enjoyed the feeling of having the narrative rug repeatedly pulled out from under me.

I loved the unexpected end of Snoke in his vampiric boudoir. Let’s face it, he was the least interesting part of The Force Awakens—a two-dimensional evil mastermind. To despatch him in the middle of the middle chapter was the biggest signal that The Last Jedi was not simply going to retread the beats of the original trilogy.

I loved the reveal of Rey’s parentage. This was what I had been hoping for—that Rey came from nowhere in particular. After The Force Awakens, I wrote:

Personally, I’d like it if her parentage were unremarkable. Maybe it’s the socialist in me, but I’ve never liked the idea that the Force is based on eugenics; a genetic form of inherited wealth for the lucky 1%. I prefer to think of the Force as something that could potentially be unlocked by anyone who tries hard enough.

But I had resigned myself to the inevitable reveal that would tie her heritage into an existing lineage. What an absolute joy, then, that The Force is finally returned into everyone’s hands! Anil Dash describes this wonderfully in his post Every Last Jedi:

Though it’s well-grounded in the first definitions of The Force that we were introduced to in the original trilogy, The Last Jedi presents a radically inclusive new view of the Force that is bigger and broader than the Jedi religion which has thus-far colored our view of the entire Star Wars universe.

I was less keen on the sudden Force usage by Leia. I think it was the execution more than the idea that bothered me. Still, I realise that the problem lies just as much with me. See, lots of the criticism of this film comes from people (justifiably) saying “That’s not how The Force works!” in relation to Rey, Kylo Ren, or Luke Skywalker. I don’t share that reaction and I want to say, “Hey, who are we to decide how The Force works?”, but then during the Leia near-death scene, I found myself more or less thinking “That’s not how The Force works!”

This would be a good time to remind ourselves that, in the Star Wars universe, you can swap out the words “The Force” for “The Plot”—an invisible agency guiding actions and changing the course of events.

The first time I saw The Last Jedi, I began to really worry during the film’s climactic showdown. I wasn’t so much worried for the fate of the characters in peril; I was worried for the fate of the overarching narrative. When Luke showed up, my heart sank a little. A deus ex machina …and how did he get here exactly? And then when he emerges unscathed from a barrage of walker cannon fire, I thought “Aw no, they’ve changed the Jedi to be like superheroes …but that’s not the way The Force/Plot works!”

And then I had the rug pulled out from under me again. Yes! What a joyous bit of trickery! My faith in The Force/Plot was restored.

I know a lot of people didn’t like the Canto Bight diversion. Jessica described it as being quite prequel-y, and I can see that. And while I agree that any shot involving our heroes riding across the screen (on a Fathier, on a scout walker) just didn’t work, I liked the world-expanding scope of the caper subplot.

Still, I preferred the Galactica-like war of attrition as the Resistance is steadily reduced in size as they try to escape the relentless pursuit of the First Order. It felt like proper space opera. In some ways, it reminded me of Alastair Reynolds but without the realism of the laws of physics (there’s nothing quite as egregious here as J.J. Abrams’ cosy galaxy where the destruction of a system can be seen in real time from the surface of another planet, but The Last Jedi showed again that Star Wars remains firmly in the space fantasy genre rather than hard sci-fi).

Oh, and of course I loved the porgs. But then, I never had a problem with ewoks, so treat my appraisal with a pinch of salt.

I loved seeing the west coast of Ireland get so much screen time. Beehive huts in a Star Wars film! Mind you, that made it harder for me to get immersed in the story. I kept thinking, “Now, is that Skellig Michael? Or is it on the Dingle peninsula? Or Donegal? Or west Clare?”

For all its global success, Star Wars has always had a very personal relationship with everyone it touches. The films themselves are only part of the reason why people respond to them. The other part is what people bring with them; where they are in life at the moment they’re introduced to this world. And frankly, the films are only part of this symbiosis. As much as people like to sneer at the toys and merchandising as a cheap consumerist ploy, they played a significant part in unlocking my imagination. Growing up in a small town on the coast of Ireland, the Star Wars universe—the world, the characters—was a playground for me to make up stories …just as it was for any young child anywhere in the world.

One of my favourite shots in The Last Jedi looks like it could’ve come from the mind of that young child: an X-wing submerged in the waters of the rocky coast of Ireland. It was as though Rian Johnson had a direct line to my childhood self.

And yet, I think the reason why The Last Jedi works so well is that Rian Johnson makes no concessions to my childhood, or anyone else’s. This is his film. Of all the millions of us who were transported by this universe as children, only he gets to put his story onto the screen and into the saga. There are two ways to react to this. You can quite correctly exclaim “That’s not how I would do it!”, or you can go with it …even if that means letting go of some deeply-held feelings about what could’ve, should’ve, would’ve happened if it were our story.

That said, I completely understand why people might take against this film. Like I said, Rian Johnson makes no concessions. That’s in stark contrast to The Force Awakens. I wrote at the time:

Han Solo picked up the audience like it was a child that had fallen asleep in the car, and he gently tucked us into our familiar childhood room where we can continue to dream. And then, with a tender brush of his hand across the cheek, he left us.

The Last Jedi, on the other hand, thrusts us into this new narrative in the same way you might teach someone to swim by throwing them into the ocean from the peak of Skellig Michael. The polarised reactions to the film are from people sinking or swimming.

I choose to swim. To go with it. To let go. To let the past die.

And yet, one of my favourite takeaways from The Last Jedi is how it offers a healthy approach to dealing with events from the past. Y’see, there was always something that bothered me in the original trilogy. It was one of Yoda’s gnomic pronouncements in The Empire Strikes Back:

Try not. Do. Or do not. There is no try.

That always struck me as a very bro-ish “crushing it” approach to life. That’s why I was delighted that Rian Johnson had Yoda himself refute that attitude completely:

The greatest teacher, failure is.

That’s exactly what Luke needed to hear. It was also what I—many decades removed from my childhood—needed to hear.

Ubiquity and consistency

I keep thinking about this post from Baldur Bjarnason, Over-engineering is under-engineering. It took me a while to get my head around what he was saying, but now that (I think) I understand it, I find it to be very astute.

Let’s take a single interface element, say, a dropdown menu. This is the example Laura uses in her article for 24 Ways called Accessibility Through Semantic HTML. You’ve got two choices, broadly speaking:

  1. Use the HTML select element.
  2. Create your own dropdown widget using JavaScript (working with divs and spans).

The advantage of the first choice is that it’s lightweight, it works everywhere, and the browser does all the hard work for you.

But…

You don’t get complete control. Because the browser is doing the heavy lifting, you can’t craft the details of the dropdown to look identical on different browser/OS combinations.

That’s where the second option comes in. By scripting your own dropdown, you get complete control over the appearance and behaviour of the widget. The disadvantage is that you now have to do all the work the browser would otherwise do for you—that means lots of JavaScript, thinking about edge cases, and making the whole thing accessible.
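To give a flavour of how much work, here’s the merest skeleton of a hand-rolled dropdown—a sketch only, and that’s before any of the scripting for toggling, focus management, arrow keys, or announcing the selected option, all of which the select element gives you for free:

    <!-- the kind of markup a DIY dropdown ends up needing -->
    <div class="dropdown">
      <button type="button" aria-haspopup="listbox" aria-expanded="false">
        Choose a size
      </button>
      <ul role="listbox" tabindex="-1" hidden>
        <li role="option" aria-selected="false">Small</li>
        <li role="option" aria-selected="true">Medium</li>
        <li role="option" aria-selected="false">Large</li>
      </ul>
    </div>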

This is the point that Baldur makes: no matter how much you over-engineer your own custom solution, there’ll always be something that falls between the cracks. So, ironically, the over-engineered solution—when compared to the simple under-engineered native browser solution—ends up being under-engineered.

Is it worth it? Rian Rietveld asks:

It is impossible to style select option. But is that really necessary? Is it worth abandoning the native browser behavior for a complete rewrite in JavaScript of the functionality?

The answer, as ever, is it depends. It depends on your priorities. If your priority is having consistent control over the details, then foregoing native browser functionality in favour of scripting everything yourself aligns with your goals.

But I’m reminded of something that Eric often says:

The web does not value consistency. The web values ubiquity.

Ubiquity; universality; accessibility—however you want to label it, it’s what lies at the heart of the World Wide Web. It’s the idea that anyone should be able to access a resource, regardless of technical or personal constraints. It’s an admirable goal, and what’s even more admirable is that the web succeeds in this goal! But sometimes something’s gotta give, and that something is control. Rian again:

The days that a website must be pixel perfect and must look the same in every browser are over. There are so many devices these days, that an identical design for all is not doable. Or we must take a huge effort for custom form elements design.

So far I’ve only been looking at the micro scale of a single interface element, but this tension between ubiquity and consistency plays out at larger scales too. Take page navigations. That’s literally what browsers do. Click on a link, and the browser fetches that URL, displaying progress as it goes. The alternative, as exemplified by single page apps, is to do all of that for yourself using JavaScript: figure out the routing, show some kind of progress, load some JSON, parse it, convert it into HTML, and update the DOM.

Personally, I tend to go for the first option. Partly that’s because I like to apply the rule of least power, but mostly it’s because I’m very lazy (I also have qualms about sending a whole lotta JavaScript down the wire just so the end user gets to do something that their browser would do for them anyway). But I get it. I understand why others might wish for greater control, even if it comes with a price tag of fragility.

I think Jake’s navigation transitions proposal is fascinating. What if there were a browser-native way to get more control over how page navigations happen? I reckon that would cover the justification of 90% of single page apps.

That’s a great way of examining these kinds of decisions and questioning how this tension could be resolved. If people are frustrated by the lack of control in browser-native navigations, let’s figure out a way to give them more control. If people are frustrated by the lack of styling for select elements, maybe we should figure out a way of giving them more control over styling.

Hang on though. I feel like I’ve painted a divisive picture, like you have to make a choice between ubiquity or consistency. But the rather wonderful truth is that, on the web, you can have your cake and eat it. That’s what I was getting at with the three-step approach I describe in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Like, say…

  1. The user needs to select an item from a list of options.
  2. Use a select element.
  3. Use JavaScript to replace that native element with a widget of your own devising.

Or…

  1. The user needs to navigate to another page.
  2. Use an a element with an href attribute.
  3. Use JavaScript to intercept that click, add a nice transition, and pull in the content using Ajax.
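As a rough sketch—assuming the page has a main element to swap content into, and glossing over the transition itself—that third step might look something like this:

    // Enhance link navigation: intercept clicks, fetch the new page,
    // swap in its content, and update the URL. If anything goes wrong,
    // fall back to a full page load.
    document.addEventListener('click', function (event) {
      var link = event.target.closest('a[href]');
      if (!link || link.origin !== location.origin) return;
      event.preventDefault();
      fetch(link.href)
        .then(function (response) { return response.text(); })
        .then(function (html) {
          var doc = new DOMParser().parseFromString(html, 'text/html');
          document.querySelector('main').innerHTML = doc.querySelector('main').innerHTML;
          history.pushState(null, '', link.href);
        })
        .catch(function () {
          window.location.href = link.href;   // the resilient fallback
        });
    });

The order is the crucial part: the a element works before any of that JavaScript arrives, so if the script fails to load or execute, nothing breaks.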

The pushback I get from people in the control/consistency camp is that this sounds like more work. It kinda is. But honestly, in my experience, it’s not that much more work. Also, and I realise I’m contradicting the part where I said I’m lazy, but that’s why it’s called work. This is our job. It’s not about what we prefer; it’s about serving the needs of the people who use what we build.

Anyway, if I were to rephrase my three-step process in terms of under-engineering and over-engineering, it might look something like this:

  1. Start with user needs.
  2. Build an under-engineered solution—one that might not offer you much control, but that works for everyone.
  3. Layer on a more over-engineered solution—one that might not work for everyone, but that offers you more control.

Ubiquity, then consistency.

Nosediving

Nosedive is the first episode of season three of Black Mirror.

It’s fairly light-hearted by the standards of Black Mirror, but all the more chilling for that. It depicts a dystopia where people rate one another for points that unlock preferential treatment. It’s like a twisted version of the whuffie from Cory Doctorow’s Down And Out In The Magic Kingdom. Cory himself points out that reputation economies are a terrible idea.

Nosedive has become a handy shortcut for pointing to the dangers of social media (in the same way that Minority Report was a handy shortcut for gestural interfaces and Her is a handy shortcut for voice interfaces).

“Social media is bad, m’kay?” is an understandable but, I think, fairly shallow reading of Nosedive. The problem isn’t with the apps, it’s with the system. A world in which we desperately need to keep our score up if we want to have any hope of advancing? That’s a nightmare scenario.

The thing is …that system exists today. Credit scores are literally a means of applying a numeric value to human beings.

Nosedive depicts a world where your score determines which seats you get in a restaurant, or which model of car you can rent. Meanwhile, in our world, your score determines whether or not you can get a mortgage.

Nosedive depicts a world in which you know your own score. Meanwhile, in our world, good luck with that:

It is very difficult for a consumer to know in advance whether they have a high enough credit score to be accepted for credit with a given lender. This situation is due to the complexity and structure of credit scoring, which differs from one lender to another.

Lenders need not reveal their credit score head, nor need they reveal the minimum credit score required for the applicant to be accepted. Owing only to this lack of information to the consumer, it is impossible for him or her to know in advance if they will pass a lender’s credit scoring requirements.

Black Mirror has a good track record of exposing what’s unsavoury about our current time and place. On the surface, Nosedive seems to be an exposé on the dangers of going too far with the presentation of self in everyday life. Scratch a little deeper though, and it reveals an even more uncomfortable truth: that we’re living in a world driven by systems even worse than what’s depicted in this dystopia.

How about this for a nightmare scenario:

Two years ago Douglas Rushkoff had an unpleasant encounter outside his Brooklyn home. Taking out the rubbish on Christmas Eve, he was mugged — held at knife-point by an assailant who took his money, his phone and his bank cards. Shaken, he went back indoors and sent an email to his local residents’ group to warn them about what had happened.

“I got two emails back within the hour,” he says. “Not from people asking if I was OK, but complaining that I’d posted the exact spot where the mugging had taken place — because it might adversely affect their property values.”

Getaway

It had been a while since we had a movie night at Clearleft so I organised one for last night. We usually manage to get through two movies, and there’s always a unifying theme decided ahead of time.

For last night, I decided that the broad theme would be …transport. But then, through voting on Slack, people could decide what the specific mode of transport would be. The choices were:

  • taxi,
  • getaway car,
  • truck, or
  • submarine.

Nobody voted for submarines. That’s a shame, but in retrospect it’s easy to understand—submarine films aren’t about transport at all. Quite the opposite. Submarine films are about being trapped in a metal womb/tomb (and many’s the spaceship film that qualifies as a submarine movie).

There were some votes for taxis and trucks, but the getaway car was the winner. I then revealed which films had been pre-selected for each mode of transport.

Taxi

Getaway car

Shorts: Getaway Driver, The Getaway

Truck

Submarine

I thought Baby Driver would be a shoo-in for the first film, but enough people had already seen it quite recently to put it out of the running. We watched Wheelman instead, which was like Locke meets Drive.

So what would the second film be?

Well, some of those films in the full list could potentially fall into more than one category. The taxi in Collateral is (kinda) being used as a getaway car. And if you expand the criterion to getaway vehicle, then Furiosa’s war rig surely counts, right?

Okay, we were just looking for an excuse to watch Fury Road again. I mean, c’mon, it was the black and chrome edition! I had the great fortune of seeing that on the big screen a while back and I’ve been raving about it ever since. Besides, you really don’t need an excuse to rewatch Fury Road. I loved it the first time I saw it, and it just keeps getting better and better each time. The editing! The sound! The world-building!

With every viewing, it feels more and more like the film for our time. It may have been a bit of a stretch to watch it under the thematic umbrella of getaway vehicles, but it’s a getaway for our current political climate: instead of the typical plot involving a gang driving at full tilt from a bank heist, imagine one where the gang turns around, ousts the bankers, and replaces the whole banking system with a matriarchal community.

“Hope is a mistake”, Max mansplains (maxplains?) to Furiosa at one point. He’s wrong. Judicious hope is what drives us forward (or, in this case, back …to the citadel). Watching Fury Road again, I drew hope from the character of Nux. An alt-warboy in thrall to a demagogue and raised on a diet of fake news (Valhalla! V8!) can not only be turned by tenderness, he can become an ally to those working for a better world.

Witness!

The meaning of AMP

Ethan quite rightly points out some semantic sleight of hand by Google’s AMP team:

But when I hear AMP described as an open, community-led project, it strikes me as incredibly problematic, and more than a little troubling. AMP is, I think, best described as nominally open-source. It’s a corporate-led product initiative built with, and distributed on, open web technologies.

But so what, right? Tom-ay-to, tom-a-to. Well, here’s a pernicious example of where it matters: in a recent announcement of their intent to ship a new addition to HTML, the Google Chrome team cited the mood of the web development community thusly:

Web developers: Positive (AMP team indicated desire to start using the attribute)

If AMP were actually the product of working web developers, this justification would make sense. As it is, we’ve got one team at Google citing the preference of another team at Google but representing it as the will of the people.

This is just one example of AMP’s sneaky marketing where some finely-shaved semantics allows them to appear far more reasonable than they actually are.

At AMP Conf, the Google Search team were at pains to repeat over and over that AMP pages wouldn’t get any preferential treatment in search results …but they appear in a carousel above the search results. Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

The same semantic nit-picking can be found in their defence of caching. See, they’ve even got me calling it caching! It’s hosting. If I click on a search result, and I am taken to page that has a URL beginning with https://www.google.com/amp/s/... then that page is being hosted on the domain google.com. That is literally what hosting means. Now, you might argue that the original version was hosted on a different domain, but the version that the user gets sent to is the Google copy. You can call it caching if you like, but you can’t tell me that Google aren’t hosting AMP pages.

That’s a particularly low blow, because it’s such a bait’n’switch. One of the reasons why AMP first appeared to be different to Facebook Instant Articles or Apple News was the promise that you could host your AMP pages yourself. That’s the very reason I first got interested in AMP. But if you actually want the benefits of AMP—appearing in the not-search-results carousel, pre-rendered performance, etc.—then your pages must be hosted by Google.

So, to summarise, here are three statements that Google’s AMP team are currently peddling as being true:

  1. AMP is a community project, not a Google project.
  2. AMP pages don’t receive preferential treatment in search results.
  3. AMP pages are hosted on your own domain.

I don’t think those statements are even truthy, much less true. In fact, if I were looking for the right term to semantically describe any one of those statements, the closest in meaning would be this:

A statement used intentionally for the purpose of deception.

That is the dictionary definition of a lie.

Update: That last part was a bit much. Sorry about that. I know it’s a bit much because The Register got all gloaty about it.

I don’t think the developers working on the AMP format are intentionally deceptive (although they are engaging in some impressive cognitive gymnastics). The AMP ecosystem, on the other hand, that’s another story—the preferential treatment of Google-hosted AMP pages in the carousel and in search results; that’s messed up.

Still, I would do well to remember that there are well-meaning people working on even the fishiest of projects.

Except for the people working at the shitrag that is The Register.

(The other strong signal that I overstepped the bounds of decency was that this post attracted the pond scum of Hacker News. That’s another place where the “well-meaning people work on even the fishiest of projects” rule definitely doesn’t apply.)

Pattern Libraries, Performance, and Progressive Web Apps

Ever since its founding in 2005, Clearleft has been laser-focused on user experience design.

But we’ve always maintained a strong front-end development arm. The front-end development work at Clearleft is always in service of design. Over the years we’ve built up a wealth of expertise on using HTML, CSS, and JavaScript to make better user experiences.

Recently we’ve been doing a lot of strategic design work—the really in-depth long-term engagements that begin with research and continue through to design consultancy and collaboration. That means we’ve got availability for front-end development work. Whether it’s consultancy or production work you’re looking for, this could be a good opportunity for us to work together.

There are three particular areas of front-end expertise we’re obsessed with…

Pattern Libraries

We caught the design systems bug years ago, way back when Natalie started pioneering pattern libraries as our primary deliverable (or pattern portfolios, as we called them then). This approach has proven effective time and time again. We’ve spent years now refining our workflow and thinking around modular design. Fractal is the natural expression of this obsession. Danielle and Mark have been working flat-out on version 2. They’re very eager to share everything they’ve learned along the way …and help others put together solid pattern libraries.

Danielle Huntrods Mark Perkins

Performance

Thinking about it, it’s no surprise that we’re crazy about performance at Clearleft. Like I said, our focus is on user experience, and when it comes to user experience on the web, nothing but nothing is more important than performance. The good news is that the majority of performance fixes can be done on the front end—images, scripts, fonts …it’s remarkable how much difference a good front-end overhaul can make to the bottom line. That’s what Graham has been obsessing over.

Graham Smith

Progressive Web Apps

Over the years I’ve found myself getting swept up in exciting new technologies on the web. When Clearleft first formed, my head was deep into DOM Scripting and Ajax. Half a decade later it was HTML5. Now it’s service workers. I honestly think it’s a technology that could be as revolutionary as Ajax or HTML5 (maybe I should write a book to that effect).

I’ve been talking about service workers at conferences this year, and I can’t hide my excitement:

There’s endless possibilities of what you can do with this technology. It’s very powerful.

Combine a service worker with a web app manifest and you’ve got yourself a Progressive Web App. It’s not just a great marketing term—it’s an opportunity for the web to truly excel at delivering the kind of user experiences previously only associated with native apps.

Jeremy Keith
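The machinery involved is surprisingly small. A first service worker can be as simple as this sketch (the file names and the offline page are placeholders):

    // In your pages: register the service worker (browsers without support just ignore it).
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/serviceworker.js');
    }

    // In serviceworker.js: cache an offline page at install time,
    // then try the network and fall back to that page when it fails.
    addEventListener('install', function (event) {
      event.waitUntil(
        caches.open('static').then(function (cache) {
          return cache.add('/offline.html');
        })
      );
    });
    addEventListener('fetch', function (event) {
      event.respondWith(
        fetch(event.request).catch(function () {
          return caches.match('/offline.html');
        })
      );
    });

Add a web app manifest—a small JSON file describing the site’s name, icons, and colours—and you’ve got the baseline in place.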

I’m very very keen to work with companies and organisations that want to harness the power of service workers and Progressive Web Apps. If that’s you, get in touch.

Whether it’s pattern libraries, performance, or Progressive Web Apps, we’ve got the skills and expertise to share with you.

Singapore

I was in Singapore last week. It was most relaxing. Sure, it’s Disneyland With The Death Penalty but the food is wonderful.

chicken rice, fishball noodles, laksa, grilled pork

But I wasn’t just there to sample the delights of the hawker centres. I had been invited by Mozilla to join them on the opening leg of their Developer Roadshow. We assembled in the PayPal offices one evening for a rapid-fire round of talks on emerging technologies.

We got an introduction to Quantum, the new rendering engine in Firefox. It’s looking good. And fast. Oh, and we finally get support for input type="date".

But this wasn’t a product pitch. Most of the talks were by non-Mozillians working on the cutting edge of technologies. I kicked things off with a slimmed-down version of my talk on evaluating technology. Then we heard from experts in everything from CSS to VR.

The highlight for me was meeting Hui Jing and watching her presentation on CSS layout. It was fantastic! Entertaining and informative, it was presented with gusto. I think it got everyone in the room very excited about CSS Grid.

The Singapore stop was the only one I was able to make, but Hui Jing has been chronicling the whole trip. Sounds like quite a whirlwind tour. I’m so glad I was able to join in even for a portion. Thanks to Sandra and Ali for inviting me along—much appreciated.

I’ll also be speaking at Mozilla’s View Source in London in a few weeks, where I’ll be talking about building blocks of the Indie Web:

In these times of centralised services like Facebook, Twitter, and Medium, having your own website is downright disruptive. If you care about the longevity of your online presence, independent publishing is the way to go. But how can you get all the benefits of those third-party services while still owning your own data? By using the building blocks of the Indie Web, that’s how!

‘Twould be lovely to see you there.

Brian Aldiss

After the eclipse I climbed down from the hilltop and reconnected with the world. That’s when I heard the news. Brian Aldiss had passed away.

He had a good innings. A very good innings. He lived to 92 and was writing right up to the end.

I’m trying to remember the first thing I read by Brian Aldiss. I think it might have been Billion Year Spree, his encyclopaedic history of science fiction. The library in my hometown had a copy when I was growing up, and I was devouring everything SF-related.

Decades later I had the great pleasure of meeting the man. It was 2012 and I was in charge of putting together the line-up for that year’s dConstruct. I had the brilliant Lauren Beukes on the line-up all the way from South Africa and I thought it would be fun to organise some kind of sci-fi author event the evening before. Well, one thing led to another: Rifa introduced me to Tim Aldiss, who passed along a request to his father, who kindly agreed to come to Brighton for the event. Then Brighton-based Jeff Noon came on board. The end result was an hour and a half in the company of three fantastic—and fantastically different—authors.

I had the huge honour of moderating the event. Here’s the transcript of that evening and here’s the audio.

That evening and the subsequent dConstruct talks—including the mighty James Burke—combined to create one of the greatest weekends of my life. Seriously. I thought it was just me, but Chris has also written about how special that author event was.

Brian Aldiss, Jeff Noon, and Lauren Beukes on the Brighton SF panel, chaired by Jeremy Keith

Brian Aldiss was simply wonderful that evening. He regaled us with the most marvellous stories, at times hilarious, at other times incredibly touching. He was a true gentleman.

I’m so grateful that I’ll always have the memory of that evening. I’m also very grateful that I have so many Brian Aldiss books still to read.

I’ve barely made a dent into the ludicrously prolific output of the man. I’ve read just some of his books:

  • Non-stop—I’m a sucker for generation starship stories,
  • Hothouse—ludicrously lush and trippy,
  • Greybeard—a grim vision of a childless world before Children Of Men,
  • The Hand-reared Boy—filthy, honest and beautifully written,
  • Helliconia Spring—a deep-time epic …and I haven’t even read the next two books in the series!

Then there are the short stories. Hundreds of ‘em! Most famously Super-Toys Last All Summer Long—inspiration for the Kubrick/Spielberg A.I. film. It’s one of the most incredibly sad stories I’ve ever read. I find it hard to read it without weeping.

Passed by a second-hand book stall on the way into work. My defences were down. Not a bad haul for a fiver.

Whenever a great artist dies, it has become a cliché to say that they will live on through their work. In the case of Brian Aldiss and his astounding output, it’s quite literally true. I’m looking forward to many, many years of reading his words.

My sincerest condolences to his son Tim, his partner Alison, and everyone who knew and loved Brian Aldiss.

60 seconds over Idaho

I lived in Germany for the latter half of the nineties. On August 11th, 1999, parts of Germany were in the path of a total eclipse of the sun. Freiburg—the town where I was living—wasn’t in the path, so Jessica and I travelled north with some friends to Karlsruhe.

The weather wasn’t great. There was quite a bit of cloud cover, but at the moment of totality, the clouds had thinned out enough for us to experience the incredible sight of a black sun.

(The experience was only slightly marred by the nearby idiot who took a picture with the flash on right before totality. Had my eyesight not adjusted in time, he would still be carrying that camera around with him in an anatomically uncomfortable place.)

Eighteen years and eleven days later, Jessica and I climbed up a hill to see our second total eclipse of the sun. The hill is in Sun Valley, Idaho.

Here comes the sun.

Travelling thousands of miles just to witness something that lasts for a minute might seem disproportionate, but if you’ve ever been in the path of totality, you’ll know what an awe-inspiring sight it is (if you’ve only seen a partial eclipse, trust me—there’s no comparison). There’s a primitive part of your brain screaming at you that something is horribly, horribly wrong with the world, while another part of your brain is simply stunned and amazed. Then there’s the logical part of your brain which is trying to grasp the incredible good fortune of this cosmic coincidence—that the sun is 400 times bigger than the moon and also happens to be 400 times the distance away.

This time viewing conditions were ideal. Not a cloud in the sky. It was beautiful. We even got a diamond ring.

I like to think I can be fairly articulate, but at the moment of totality all I could say was “Oh! Wow! Oh! Holy shit! Woah!”

Totality

Our two eclipses were separated by eighteen years, but they’re connected. The Saros 145 cycle has been repeating since 1639 and will continue until 3009, although the number of total eclipses only runs from 1927 to 2648.

Eighteen years and twelve days ago, we saw the eclipse in Germany. Yesterday we saw the eclipse in Idaho. In eighteen years and ten days’ time, we plan to be in Japan or China.

Posting to my site

I was idly thinking about the different ways I can post to adactio.com. I decided to count the ways.

Admin interface

This is the classic CMS approach. In my case the CMS is a crufty hand-rolled affair using PHP and MySQL that I wrote years ago. I log in to an admin interface and fill in a form, putting the text of my posts into a textarea. In truth, I usually write in a desktop text editor first, and then paste that into the textarea. That’s what I’m doing now—copying and pasting Markdown from the Typed app.

Directly from my site

If I’m logged in, I get a stripped down posting interface in the notes section of my site.

Notes posting interface

Bookmarklet

This is how I post links. When I’m at a URL I want to bookmark, I hit the “Bookmark it” bookmarklet in my browser’s bookmarks bar. That pops open a version of the admin interface tailored specifically for links. I really, really like bookmarklets. The one big downside is that they don’t work on mobile.
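In essence, a bookmarklet like that is just a javascript: URL saved as a bookmark—something along these lines (the admin URL here is made up for illustration):

    javascript:(function () {
      // Send the current page's address and title to a (hypothetical) link-posting form.
      window.location = 'https://example.com/admin/newlink' +
        '?url=' + encodeURIComponent(location.href) +
        '&title=' + encodeURIComponent(document.title);
    })();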

Text message

This is something I knocked together at Indie Web Camp Brighton 2015 using the Twilio API. It’s handy for posting notes if I’m travelling somewhere and data is at a premium. But I don’t use it that often.

Instagram

Thanks to Aaron’s OwnYourGram service—and the fact that my site has a micropub endpoint—I can post images from Instagram to my site. This used to happen instantaneously but Instagram changed their API rules for the worse. Between that and their shitty “algorithmic” timeline, I find myself using the service less and less. At this point I’m only on there for the doggos.

Swarm

Like OwnYourGram, Aaron’s OwnYourSwarm allows me to post check-ins and photos from the Swarm app to my site. Again, micropub makes it all possible.

OwnYourGram and OwnYourSwarm are very similar and could probably be abstracted into a generic service for posting from third-party apps to micropub endpoints. I’d quite like to post my check-ins on Untappd to my site.
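Micropub is what would keep that plumbing simple. A service posting on my behalf just sends an authenticated, form-encoded request to my endpoint—something like this sketch (the endpoint URL and token are placeholders):

    // A minimal micropub request: create a new note.
    fetch('https://example.com/micropub', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer ACCESS_TOKEN',   // issued via IndieAuth
        'Content-Type': 'application/x-www-form-urlencoded'
      },
      body: new URLSearchParams({
        h: 'entry',
        content: 'Hello, indie web!'
      })
    });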

Other people’s admin interfaces

Thanks to rel="me" and IndieAuth, I can log into other people’s posting interfaces using my own website as the log-in, and post to my micropub endpoint, like this. Quill is a good example of this. I don’t use it that much, but I really should—the editor interface is quite Medium-like in its design.
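The moving parts on my side amount to a handful of link elements in the head of my home page—something like this (the endpoint URLs will vary depending on which services you use):

    <!-- rel="me" links to profiles that link back, establishing identity -->
    <link rel="me" href="https://twitter.com/example">
    <!-- endpoint discovery for IndieAuth and micropub -->
    <link rel="authorization_endpoint" href="https://indieauth.example/auth">
    <link rel="token_endpoint" href="https://tokens.example/token">
    <link rel="micropub" href="https://example.com/micropub">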

Anyway, those are the different ways I can update my website that I can think of right now.

Syndication

In terms of output, I’ve got a few different ways of syndicating what I post here.

Just so you know, if you comment on one of my posts on Facebook, I probably won’t see it. But if you reply to a copy of one of my posts on Twitter or Instagram, it will show up over here on adactio.com thanks to the magic of Brid.gy and webmention.

Container queries

Every single browser maker has the same stance when it comes to features—they want to hear from developers at the coalface.

“Tell us what you want! We’re listening. We want to know which features to prioritise based on real-world feedback from developers like you.”

“How about container quer—”

“Not that.”

I don’t think it’s an exaggeration to say that literally every web developer I know would love to have container queries. If you’ve worked on any responsive project of any size, you’re bound to have bumped up against the problem of only being able to respond to viewport size, rather than the size of the containing element. Without container queries, our design systems can never be truly modular.

But there’s a divide growing between what our responsive designs need to do, and the tools CSS gives us to meet those needs. We’re making design decisions at smaller and smaller levels, but our code asks us to bind those decisions to a larger, often-irrelevant abstraction of a “page.”

But the message from browser makers has consistently been “it’s simply too hard.”

At the Frontend United conference in Athens a little while back, Jonathan gave a whole talk on the need for container queries. At the same event, Serg gave a talk on Houdini.

Now, as I understand it, Houdini is the CSS arm of the extensible web. Just as web components will allow us to create powerful new HTML without lobbying browser makers, Houdini will allow us to create powerful new CSS features without going cap-in-hand to standards bodies.

At this year’s CSS Day there were two Houdini talks. Tab gave a deep dive, and Philip talked specifically about Houdini as a breakthrough for polyfilling.

During the talks, you could send questions over Twitter that the speaker could be quizzed on afterwards. As Philip was talking, I began to tap out a question: “Could this be used to polyfill container queries?” My thumb was hovering over the tweet button at the very moment that Philip said in his talk, “This could be used to polyfill container queries.”

For that to happen, browsers need to implement the layout API for Houdini. But I’m betting that browser makers will be far more receptive to calls to implement the layout API than calls for container queries directly.

Once we have that, there are two possible outcomes:

  1. We try to polyfill container queries and find out that the browser makers were right—it’s simply too hard. This certainty is itself a useful outcome.
  2. We successfully polyfill container queries, and then instead of asking browser makers to figure out implementation, we can hand it to them for standardisation.

But, as Eric Portis points out in his talk on container queries, Houdini is still a ways off (by the way, browser makers, that’s two different conference talks I’ve mentioned about container queries, just in case you were keeping track of how much developers want this).

Still, there are some CSS features that are Houdini-like in their extensibility. Custom properties feel like they could be wrangled to help with the container query problem. While it’s easy to think of custom properties as being like Sass variables, they’re much more powerful than that—the fact they can be a real-time bridge between JavaScript and CSS makes them scriptable. Alas, custom properties can’t be used in media queries but maybe some clever person can figure out a way to get the effect of container queries without a query-like syntax.
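Here’s the kind of wrangling I mean—a sketch rather than a battle-tested pattern: use a dash of JavaScript to mirror each component’s width into a custom property, which the CSS can then read with var() and calc() (the .component class name is just for illustration):

    // Write each component's current width into a custom property
    // that CSS rules can respond to, e.g.
    // .component h2 { font-size: calc(1rem + var(--container-width) * 0.02px); }
    function reportWidths() {
      document.querySelectorAll('.component').forEach(function (element) {
        element.style.setProperty('--container-width', element.offsetWidth);
      });
    }
    window.addEventListener('resize', reportWidths);
    reportWidths();

It’s a workaround rather than a solution—there’s still no query-like syntax—but it shows how scriptable custom properties open a door that preprocessor variables never could.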

However it happens, I’d just love to see some movement on container queries. I’m not alone.

I know container queries would revolutionize my design practice, and better prepare responsive design for mobile, desktop, tablet—and whatever’s coming next.

Patterns Day

Patterns Day is over. It was all I hoped it would be and more.

I’ve got that weird post-conference feeling now, where that all-consuming thing that was ahead of you is now behind you, and you’re not quite sure what to do. Although, comparatively speaking, Patterns Day came together pretty quickly. I announced it less than three months ago. It sold out just over a month later. Now it’s over and done with, it feels like a whirlwind.

The day itself was also somewhat whirlwind-like. It was simultaneously packed to the brim with great talks, and yet over in the blink of an eye. Everyone who attended seemed to have a good time, which makes me very happy indeed. Although, as I said on the day, while it’s nice that everyone came along, I put the line-up together for purely selfish reasons—it was my dream line-up of people I wanted to see speak.

Boy, oh boy, did they deliver the goods! Every talk was great. And I must admit, I was pleased with how I had structured the event. The day started and finished with high-level, almost philosophical talks; the middle section was packed with hands-on, nitty-gritty practical examples.

Thanks to sponsorship from Amazon UK, Craig was videoing all the talks. I’ll get them online as soon as I can. But in the meantime, Drew got hold of the audio and made mp3s of each talk. They are all available in handy podcast form for your listening and huffduffing pleasure:

  1. Laura Elizabeth
  2. Ellen de Vries
  3. Sareh Heidari
  4. Rachel Andrew
  5. Alice Bartlett
  6. Jina Anne
  7. Paul Lloyd
  8. Alla Kholmatova

If you’re feeling adventurous, you can play the Patterns Day drinking game while you listen to the talks:

  • Any time someone says “Lego”, take a drink,
  • Any time someone references Christopher Alexander, take a drink,
  • Any time someone says that naming things is hard, take a drink,
  • Any time someone says “atomic design”, take a drink, and
  • Any time someone says “Bootstrap”, puke the drink back up.

In between the talks, the music was provided courtesy of some Brighton-based artists.

Hidde de Vries has written up an account of the day. Stu Robson has also published his notes from each talk. Sarah Drummond wrote down her thoughts on Ev’s blog.

I began the day by predicting that Patterns Day would leave us with more questions than answers …but that they would be the right questions. I think that’s pretty much what happened. Quite a few people compared it to the first Responsive Day Out in tone. I remember a wave of relief flowing across the audience when Sarah opened the show by saying:

I think if we were all to be a little more honest when we talk to each other than we are at the moment, the phrase “winging it” would be something that would come up a lot more often. If you actually speak to people, not very many people have a process for this at the moment. Most of us are kind of winging it.

  • This is hard.
  • No one knows exactly what they’re doing.
  • Nobody has figured this out yet.

Those sentiments were true of responsive design in 2013, and they’re certainly true of design systems in 2017. That’s why I think it’s so important that we share our experiences—good and bad—as we struggle to come to grips with these challenges. That’s why I put Patterns Day together. That’s also why, at the end of the day, I thanked everyone who has ever written about, spoken about, or otherwise shared their experience with design systems, pattern libraries, style guides, and components. And of course I made sure that everyone gave Anna a great big round of applause for her years of dedicated service—I wish she could’ve been there.

There were a few more “thank you”s at the end of the day, and all of them were heartfelt. Thank you to Felicity and everyone else at the Duke of York’s for the fantastic venue and making sure everything went so smoothly. Thank you to AVT for all the audio/visual wrangling. Thanks to Amazon for sponsoring the video recordings, and thanks to Deliveroo for sponsoring the tea, coffee, pastries, and popcorn (they’re hiring, by the way). Huge thanks to Alison and everyone from Clearleft who helped out on the day—Hana, James, Rowena, Chris, Benjamin, Seb, Jerlyn, and most especially Alis who worked behind the scenes to make everything go so smoothly. Thanks to Kai for providing copies of Offscreen Magazine for the taking. Thanks to Marc and Drew for taking lots of pictures. Thanks to everyone who came to Patterns Day, especially the students and organisers from Codebar Brighton—you are my heroes.

Most of all thank you, thank you, thank you, to the eight fantastic speakers who made Patterns Day so, so great—I love you all.

Laura, Ellen, Sareh, Rachel, Alice, Jina, Paul, Alla

Talking with the tall man about poetry

When I started making websites in the 1990s, I had plenty of help. The biggest help came from the ability to view source on any web page—the web was a teacher of itself. I also got plenty of help from people who generously shared their knowledge and experience. There was Jeffrey’s Ask Dr. Web, Steve Champeon’s WebDesign-L mailing list, and Jeff Veen’s articles on Webmonkey. Years later, I was able to meet those people. That was a real privilege.

I’ve known Jeff for over a decade now. He’s gone from Adaptive Path to Google to TypeKit to Adobe to True Ventures, and it’s always fascinating to catch up with him and get his perspective on life, the universe, and everything.

He started up a podcast called Presentable about a year ago. It’s worth having a dig through the archives to have a listen to his chats with people like Andy, Jason, Anna, and Jessica. I was honoured when Jeff asked me to be on the show.

We ended up having a really good chat. It’s out now as Episode 25: The Tenuous Resilience of the Open Web. I really enjoyed having a good ol’ natter, and I hope you might enjoy listening to it.

‘Sfunny, but a few unplanned themes kept coming up. We ended up talking about art, but also about the scientific aspects of design. I couldn’t help but be reminded of the title of Jeff’s classic book, The Art and Science of Web Design.

We also talked about my most recent book, Resilient Web Design, and that’s when I noticed another theme. When discussing the web-first nature of publishing the book, I described the web version as the canonical version and all the other formats as copies that were generated from that. That sounds a lot like how I describe the indie web—something else we discussed—where you have the canonical instance on your own site but share copies on social networks: Publish on Own Site, Syndicate Elsewhere—POSSE.

We also talked about technologies, and it’s entirely possible that we sound like two old codgers on the front porch haranguing those damn kids on the lawn. You can be the judge of that. The audio is available for your huffduffing pleasure. If you enjoy listening to it half as much as I enjoyed doing it, then I enjoyed it twice as much as you.

eLife goes live

The World Wide Web was forged in the crucible of science. Tim Berners-Lee was working at CERN, the European Organization for Nuclear Research, a remarkable place where the pursuit of knowledge—rather than the pursuit of profit—is the driving force.

I often wonder whether the web as we know it—an open, decentralised system—could’ve been born anywhere else. These days it’s easy to focus on the success stories of the web in the worlds of commerce and social networking, but I still find there’s something about the web that really “clicks” with science (Zooniverse being a classic example).

At Clearleft we’ve been lucky enough to work on science-driven projects like the Wellcome Library and the Wellcome Trust. It’s incredibly rewarding to work on projects where the bottom line is measured in knowledge-sharing rather than moolah. So when we were approached by eLife to help them with an upcoming redesign, we jumped at the chance.

We usually help organisations through our expertise in user-centred design, but in this case the design and UX were already in hand. The challenge was in the implementation. The team at eLife knew that they wanted a modular pattern library to keep their front-end components documented and easily reusable. Given Clearleft’s extensive experience with building pattern libraries, this was a match made in heaven (or whatever the scientific non-theistic equivalent of heaven is).

A group of us travelled up from Brighton to Cambridge to kick things off with a workshop. Before diving into code, it was important to set out the aims for the redesign, and figure out how a pattern library could best support those aims.

Right away, I was struck by the great working relationship between design and front-end development within eLife—there was a real collaborative spirit to the endeavour.

Some goals for the redesign soon emerged:

  • Promote the HTML reading experience as a 1st choice for readers.
  • Align the online experience with the eLife visual identity.

That led to some design principles:

  • Focus on content not site furniture.
  • Remove visual clutter and provide no more than the user needs at any stage of the experience.
  • Aid discovery of value added content beyond the manuscript.

Those design principles then informed the front-end development process. Together we came up with a priority of concerns:

  1. Access
  2. Maintainability
  3. Performance
  4. Taking advantage of browser capabilities
  5. Visual appeal

It’s interesting that maintainability was such a high priority that it outranked even performance, but we also proposed a hypothesis at the same time:

Maintainability doesn’t negatively impact performance.

The combination of the design principles and priorities led us to formulate approaches that could be used throughout the project:

  • Progressive enhancement.
  • Small-screen first responsive images.
  • Only add libraries as needed.

Then we dived into the tech stack: build tools, version control approaches, and naming methodologies. BEM was the winner there.

None of those decisions were set in stone, but they really helped to build a solid foundation for the work ahead. Graham camped out in Cambridge for a while, embedding himself in the team there as they began the process of identifying, naming, and building the components.

The work continued after Clearleft’s involvement wrapped up, and I’m happy to say that it all paid off. The new eLife site has just gone live. It’s looking—and performing—beautifully.

What a great combination: the best of the web and the best of science!

eLife is a non-profit organisation inspired by research funders and led by scientists. Our mission is to help scientists accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours in science.

Month maps

One of the topics I enjoy discussing at Indie Web Camps is how we can use design to display activity over time on personal websites. That’s how I ended up with sparklines on my site—it was a direct result of a discussion at Indie Web Camp Nuremberg a year ago:

During the discussion at Indie Web Camp, we started looking at how silos design their profile pages to see what we could learn from them. Looking at my Twitter profile, my Instagram profile, my Untappd profile, or just about any other profile, it’s a mixture of bio and stream, with the addition of stats showing activity on the site—signs of life.

Perhaps the most interesting visual example of my activity over time is on my GitHub profile. Halfway down the page there’s a calendar heatmap that uses colour to indicate the amount of activity. What I find interesting is that it’s using two axes of time over a year: weeks of the year across the X axis and days of the week down the Y axis.

I wanted to try something similar, but showing activity by time of day down the Y axis. A month of activity feels like the right range to display, so I set about adding a calendar heatmap to monthly archives. I already had the data I needed—timestamps of posts. That’s what I was already using to display sparklines. I wrote some code to loop over those timestamps and organise them by day and by hour. Then I spit out a table with days for the columns and clumps of hours for the rows.
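In case the shape of that logic is useful to anyone, here’s a rough sketch—not the actual code running on this site, and the four-hour clump size is just for illustration:

    // A rough sketch of the bucketing—not the actual code behind this site.
    // Assumes Unix timestamps in seconds; the four-hour clump is illustrative.
    function monthHeatmap(timestamps, clumpHours = 4) {
      const grid = {}; // grid[clump][dayOfMonth] = number of posts
      for (const timestamp of timestamps) {
        const posted = new Date(timestamp * 1000);
        const day = posted.getDate();                             // 1–31: the columns
        const clump = Math.floor(posted.getHours() / clumpHours); // the rows
        grid[clump] = grid[clump] || {};
        grid[clump][day] = (grid[clump][day] || 0) + 1;
      }
      return grid;
    }

Looping over that grid to spit out a table cell for each clump-and-day combination is then straightforward.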

Calendar heatmap on Dribbble

I’m using colour (well, different shades of grey) to indicate the relative amounts of activity, but I decided to use size as well. So it’s also a bubble chart.

It doesn’t work very elegantly on small screens: the table is clipped horizontally and can be swiped left and right. Ideally the visualisation itself would change to accommodate smaller screens.

Still, I kind of like the end result. Here’s last month’s activity on my site. Here’s the same time period ten years ago. I’ve also added month heatmaps to the monthly archives for my journal, links, and notes. They’re kind of like an expanded view of the sparklines that are shown with each month.

From one year ago, here’s the daily distribution of each kind of post—journal entries, links, and notes.

And then here’s the daily distribution of everything in that month all together.

I realise that the data being displayed is probably only of interest to me, but then, that’s one of the perks of having your own website—you can do whatever you feel like.