Journal

Monday, June 5th, 2017

eLife goes live

The World Wide Web was forged in the crucible of science. Tim Berners-Lee was working at CERN, the European Organization for Nuclear Research, a remarkable place where the pursuit of knowledge—rather than the pursuit of profit—is the driving force.

I often wonder whether the web as we know it—an open, decentralised system—could’ve been born anywhere else. These days it’s easy to focus on the success stories of the web in the worlds of commerce and social networking, but I still find there’s something that really “clicks” between the web and science (Zooniverse being a classic example).

At Clearleft we’ve been lucky enough to work on science-driven projects like the Wellcome Library and the Wellcome Trust. It’s incredibly rewarding to work on projects where the bottom line is measured in knowledge-sharing rather than moolah. So when we were approached by eLife to help them with an upcoming redesign, we jumped at the chance.

We usually help organisations through our expertise in user-centred design, but in this case the design and UX were already in hand. The challenge was in the implementation. The team at eLife knew that they wanted a modular pattern library to keep their front-end components documented and easily reusable. Given Clearleft’s extensive experience with building pattern libraries, this was a match made in heaven (or whatever the scientific non-theistic equivalent of heaven is).

A group of us travelled up from Brighton to Cambridge to kick things off with a workshop. Before diving into code, it was important to set out the aims for the redesign, and figure out how a pattern library could best support those aims.

Right away, I was struck by the great working relationship between design and front-end development within eLife—there was a real collaborative spirit to the endeavour.

Some goals for the redesign soon emerged:

  • Promote the HTML reading experience as a first choice for readers.
  • Align the online experience with the eLife visual identity.

That led to some design principles:

  • Focus on content not site furniture.
  • Remove visual clutter and provide no more than the user needs at any stage of the experience.
  • Aid discovery of value-added content beyond the manuscript.

Those design principles then informed the front-end development process. Together we came up with a priority of concerns:

  1. Access
  2. Maintainability
  3. Performance
  4. Taking advantage of browser capabilities
  5. Visual appeal

It’s interesting that maintainability was such a high priority that it outranked even performance, but we also proposed a hypothesis at the same time:

Maintainability doesn’t negatively impact performance.

The combination of the design principles and priorities led us to formulate approaches that could be used throughout the project:

  • Progressive enhancement.
  • Small-screen first responsive images.
  • Only add libraries as needed.

Then we dived into the tech stack: build tools, version control approaches, and naming methodologies. BEM was the winner there.
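
If you haven’t come across BEM before, the naming convention looks like this (the class names here are made up for illustration; they’re not taken from the eLife codebase):

/* BEM: Block, Element, Modifier */
.teaser { }               /* a block: a standalone component */
.teaser__title { }        /* an element: a part of that block */
.teaser__title--large { } /* a modifier: a variation of that element */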

None of those decisions were set in stone, but they really helped to build a solid foundation for the work ahead. Graham camped out in Cambridge for a while, embedding himself in the team there as they began the process of identifying, naming, and building the components.

The work continued after Clearleft’s involvement wrapped up, and I’m happy to say that it all paid off. The new eLife site has just gone live. It’s looking—and performing—beautifully.

What a great combination: the best of the web and the best of science!

eLife is a non-profit organisation inspired by research funders and led by scientists. Our mission is to help scientists accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours in science.

Sunday, June 4th, 2017

Month maps

One of the topics I enjoy discussing at Indie Web Camps is how we can use design to display activity over time on personal websites. That’s how I ended up with sparklines on my site—it was a direct result of a discussion at Indie Web Camp Nuremberg a year ago:

During the discussion at Indie Web Camp, we started looking at how silos design their profile pages to see what we could learn from them. Looking at my Twitter profile, my Instagram profile, my Untappd profile, or just about any other profile, it’s a mixture of bio and stream, with the addition of stats showing activity on the site—signs of life.

Perhaps the most interesting visual example of my activity over time is on my GitHub profile. Halfway down the page there’s a calendar heatmap that uses colour to indicate the amount of activity. What I find interesting is that it’s using two axes of time over a year: weeks of the year across the X axis and days of the week down the Y axis.

I wanted to try something similar, but showing activity by time of day down the Y axis. A month of activity feels like the right range to display, so I set about adding a calendar heatmap to monthly archives. I already had the data I needed—timestamps of posts. That’s what I was already using to display sparklines. I wrote some code to loop over those timestamps and organise them by day and by hour. Then I spat out a table with days for the columns and clumps of hours for the rows.
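
The grouping logic amounts to something like this JavaScript sketch. The three-hour clump size and the sample data are assumptions for illustration, not my actual server-side code:

// Group an array of Unix timestamps by day of the month and by
// three-hour clump of the day.
function monthMap(timestamps) {
  const grid = {}; // grid[day][clump] = number of posts
  timestamps.forEach( timestamp => {
    const date = new Date(timestamp * 1000);
    const day = date.getDate(); // 1–31: the table’s columns
    const clump = Math.floor(date.getHours() / 3); // 0–7: the rows
    grid[day] = grid[day] || {};
    grid[day][clump] = (grid[day][clump] || 0) + 1;
  });
  return grid;
}

// Two posts on June 5th: one in the morning, one in the evening.
console.log(monthMap([1496645000, 1496689000]));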

Calendar heatmap on Dribbble

I’m using colour (well, different shades of grey) to indicate the relative amounts of activity, but I decided to use size as well. So it’s also a bubble chart.

It doesn’t work very elegantly on small screens: the table is clipped horizontally and can be swiped left and right. Ideally the visualisation itself would change to accommodate smaller screens.

Still, I kind of like the end result. Here’s last month’s activity on my site. Here’s the same time period ten years ago. I’ve also added month heatmaps to the monthly archives for my journal, links, and notes. They’re kind of like an expanded view of the sparklines that are shown with each month.

From one year ago, here’s the daily distribution of my journal posts, links, and notes.

And then here’s the daily distribution of everything in that month all together.

I realise that the data being displayed is probably only of interest to me, but then, that’s one of the perks of having your own website—you can do whatever you feel like.

Aurora

I remember when I was first recommended to read Kim Stanley Robinson. I was chatting with Jon Tan about science fiction, and I was bemoaning the fact that dystopias seem to be the default setting. Asking “what’s the worst that could happen?” is the overriding preoccupation of most sci-fi. Black Mirror is the perfect example of this. Mind you, that’s probably why the ambiguous San Junipero is one of my favourites—utopia? dystopia? dystutopia? You decide.

Anyway, Jon told me I should check out Kim Stanley Robinson’s Three Californias; one book describes a dystopia, one book describes a utopia, and the other—his debut, The Wild Shore—is more ambiguous. I liked the sound of that, but I decided that if I were going to read Kim Stanley Robinson, I should start with his most famous work, the Mars trilogy.

So I read Red Mars. I liked it, but I found it tough going. It’s not exactly a light read. I still haven’t read Green Mars or Blue Mars, though I plan to. I can see why Red Mars is regarded as a classic of hard sci-fi, but it left me somewhat cold. Jessica read The Years of Rice and Salt and had a similar reaction—good premise, thoroughly researched, but tough going.

When I heard about 2312, I couldn’t resist its promise of a jaunt around the solar system. Again, I enjoyed it, but the plot—such as it was—didn’t grab me. I loved the ideas presented in the book. Heck, it inspired one of my Science Hack Day projects. Still, I found that its literary conceit wasn’t enough to carry the book—a character from Saturn who’s saturnine in nature meets a character from Mercury who’s mercurial in nature.

So I was kind of bracing myself for Aurora. Again, the subject matter really appealed to me. I’m a sucker for generation starships. Brian Aldiss’s Non-Stop was a fun read, although in typical Aldiss style, it was weird to the point of psychedelia (even if it looks positively tame next to the batshit crazy world of Hothouse). I was looking forward to reading Robinson’s hard science take on the space ark idea, but I was worried about how much of a slog the writing might be. I read some reviews and listened to some podcasts, and my heart sank when I heard about how the story is partly told by the ship’s AI, who is simultaneously trying to work out how to tell a story. It sounded just like one of those ideas that would be fine for a brief period, but which I could imagine Kim Stanley Robinson dragging out for hundreds of pages.

Imagine my surprise when Aurora turned out to be an absolute pleasure. Not only does it have the thoroughly-researched hard science angle of Robinson’s other books, it’s also a rip-roaring tale, in my opinion. I had read of misgivings with the structure of the book—complaints that the story climaxes before the book is halfway done—but I think that misses the point of the story. This is not your typical tale of colonisation. Far from it. Kim Stanley Robinson is quite open about the underlying idea here, that there are certain endeavours that are simply beyond our capacity.

I know that sounds like a very pessimistic view, but I found the book to be a real testament to human ingenuity. But it certainly ruffled quite a few feathers. Like I said, the default setting for most sci-fi is to go negative, but for a sci-fi writer to claim outright that something cannot be done is audacious, and flies in the face of sci-fi tradition.

Gregory Benford wrote a review over on one of my favourite blogs, Centauri Dreams. He takes Robinson to task for stacking the deck against the crew of the ship in Aurora—an inversion of the usual deus ex machina plot devices. I find that criticism puzzling when another review, also on Centauri Dreams, by Stephen Baxter, James Benford and Joseph Miller, takes the book to task for being scientifically naïve.

For me, Aurora was perfectly balanced. It simultaneously captured the wonder of scientific exploration and our own insignificance in the universe. Best of all, it featured central characters that I was utterly invested in—one human, and one artificial. Given my previous experiences with Kim Stanley Robinson books, that was perhaps its greatest achievement. Whereas I might have previously recommended something like 2312, I would have certainly caveated the recommendation. But I wholeheartedly recommend Aurora. It’s easily the best Kim Stanley Robinson book I’ve read so far, and one of the finest science fiction books of recent years. It makes a great companion piece to Neal Stephenson’s Seveneves—not only are they both dealing with space arks, they’ve also got some in-depth descriptions of angular momentum in action, and they’re both thoroughly enjoyable stories that stretch beyond a single human lifespan.

I’m looking forward to digging back through Kim Stanley Robinson’s back catalogue, and I’m very intrigued by his newest book, New York 2140. From listening to his Long Now talk at The Interval, it sounds like the book has as much to say about near-future economics as it does about climate change.

It’s ironic though. Kim Stanley Robinson was first recommended to me because he was one of the few sci-fi writers unafraid to depict a utopia. But his writing never clicked with me until I read Aurora, whose central message sounds like the ultimate downer …that some scientific achievements will forever remain out of reach for humanity.

Tuesday, May 30th, 2017

Checking in at Indie Web Camp Nuremberg

Once I finished my workshop on evaluating technology I stayed in Nuremberg for that weekend’s Indie Web Camp.

IndieWebCamp Nuremberg

Just as with Indie Web Camp Düsseldorf the weekend before, it was a fun two days—one day of discussions, followed by one day of making.

IndieWebCamp Nuremberg

I spent most of the second day playing around with a new service that Aaron created called OwnYourSwarm. It’s very similar to his other service, OwnYourGram. Whereas OwnYourGram is all about posting pictures from Instagram to your own site, OwnYourSwarm is all about posting Swarm check-ins to your own site.

Usually I prefer to publish on my own site and then push copies out to other services like Twitter, Flickr, etc. (POSSE—Publish on Own Site, Syndicate Elsewhere). In the case of Instagram, that’s impossible because of their ludicrously restrictive API, so I have to go the other way around (PESOS—Publish Elsewhere, Syndicate to Own Site). When it comes to check-ins, I could do it from my own site, but I’d have to create my own databases of places to check into. I don’t fancy that much (yet) so I’m using OwnYourSwarm to PESOS check-ins.

The great thing about OwnYourSwarm is that I didn’t have to do anything. I already had the building blocks in place.

First of all, I needed some way to authenticate as my website. IndieAuth takes care of all that. All I needed was rel="me" attributes pointing from my website to my profiles on Twitter, Flickr, GitHub, or any other service that provides OAuth. Then I can piggyback on their authentication flow (this is also how you sign in to the Indie Web wiki).
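
The markup amounts to nothing more than some links on a home page. These examples use my own profile URLs; swap in yours:

<!-- rel="me" links pointing from my site to my profiles -->
<a rel="me" href="https://twitter.com/adactio">Twitter</a>
<a rel="me" href="https://www.flickr.com/people/adactio">Flickr</a>
<a rel="me" href="https://github.com/adactio">GitHub</a>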

The other step is more involved. My site needs to provide an API endpoint so that services like OwnYourGram and OwnYourSwarm can post to it. That’s where micropub comes in. You can see the code for my minimal micropub endpoint if you like. If you want to test your own micropub endpoint, check out micropub.rocks—the companion to webmention.rocks.
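
For a flavour of what’s involved, here’s a rough sketch of a micropub endpoint in Node with Express. This isn’t my actual endpoint: the token check follows the IndieAuth flow, and savePost is a hypothetical helper:

// A sketch of a minimal micropub endpoint (not my actual implementation).
// Assumes Node 18+ for the global fetch.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/micropub', async (request, response) => {
  // Verify the bearer token by asking the token endpoint about it.
  const verified = await fetch('https://tokens.indieauth.com/token', {
    headers: {
      'Authorization': request.get('Authorization') || '',
      'Accept': 'application/json'
    }
  });
  if (!verified.ok) {
    return response.status(401).send('Invalid or missing token');
  }
  // A form-encoded h=entry request means “create a new post”.
  if (request.body.h === 'entry') {
    const url = savePost(request.body.content); // hypothetical helper
    return response.status(201).location(url).send();
  }
  response.status(400).send('Unsupported request');
});

app.listen(8080);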

Anyway, I already had IndieAuth and micropub set up on my site, so all I had to do was log in to OwnYourSwarm and I immediately started to get check-ins posted to my own site. They show up the same as any other note, so I decided to spend my time at Indie Web Camp Nuremberg making them look a bit different. I used Mapbox’s static map API to show an image of the location of the check-in. What’s really nice is that if I post a photo on Swarm, that gets posted to my own site too. I had fun playing around with the display of photo+map on my home page stream. I’ve made a page for keeping track of check-ins too.
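
Constructing one of those static map images is just a matter of building a URL. The pattern below uses Mapbox’s Static Images API; the style, zoom, dimensions, and token are all assumptions for illustration:

// Build a static map image URL for a check-in’s coordinates.
function staticMapURL(lat, lon, token) {
  return `https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/` +
    `pin-s(${lon},${lat})/${lon},${lat},15/600x300?access_token=${token}`;
}

// Drop the image into a check-in post (the element and token are hypothetical).
document.querySelector('.checkin').insertAdjacentHTML('beforeend',
  `<img src="${staticMapURL(49.45, 11.08, 'YOUR_MAPBOX_TOKEN')}" alt="Map of the check-in location">`
);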

All in all, a fun way to spend Indie Web Camp Nuremberg. But when it came time to demo, the one that really impressed me was Amber’s. She worked flat out on her site, getting to the second level on IndieWebify.me …including sending a webmention to my site!

IndieWebCamp Nuremberg

Wednesday, May 24th, 2017

A workshop on evaluating technology

After hacking away at Indie Web Camp Düsseldorf, I stuck around for Beyond Tellerrand. I ended up giving a talk, stepping in for Ellen. I was a poor substitute, but I hope I entertained the lovely audience for 45 minutes.

After Beyond Tellerrand, I got on a train to Nuremberg …along with a dozen of my peers who were also at the event.

All aboard the Indie Web Train from Düsseldorf to Nürnberg. Indie Web Train.

I arrived right in the middle of Web Week Nürnberg. Among the many events going on was a workshop that Joschi arranged for me to run called Evaluating Technology. The workshop version of my Beyond Tellerrand talk, basically.

This was an evolution of a workshop I ran a while back. I have to admit, I was a bit nervous going into this. I had no tangible material prepared: no slides, no handouts, nothing. Instead, the workshop is a collaborative affair. In order for it to work, the attendees needed to jump in and co-create it with me. Luckily for me, I had a fantastic and enthusiastic group of people at my workshop.

Evaluating Technology

We began with a complete braindump. “Name some tools and technologies,” I said. “Just shout ‘em out.” Shout ‘em out, they did. I struggled to keep up just writing down everything they said. This was great!

Evaluating Technology

The next step was supposed to be dot-voting on which technologies to cover, but there were so many of them, we introduced an intermediate step: grouping the technologies together.

Evaluating Technology

Once the technologies were grouped into categories like build tools, browser APIs, methodologies etc., we voted on which categories to cover, only then diving deeper into specific technologies.

I proposed a number of questions to ask of each technology we covered. First of all, who benefits from the technology? Is it a tool for designers and developers, or is it a tool for the end user? Build tools, task runners, version control systems, text editors, transpilers, and pattern libraries all fall into the first category—they make life easier for the people making websites. Browser features generally fall into the second category—they improve the experience for the end user.

Looking at user-facing technologies, we asked: how well do they fail? In other words, can you add this technology as an extra layer of enhancement on top of what you’re building or do you have to make it a foundational layer that’s potentially a single point of failure?

For both classes of technologies, we asked the question: what are the assumptions? What fundamental philosophy has been baked into the technology?

Evaluating Technology

Now, the point of this workshop is not for me to answer those questions. I have a limited range of experience with the huge amount of web technologies out there. But collectively all of us attending the workshop will have a good range of experience and knowledge.

Interesting then that the technologies people voted for were:

  • service workers,
  • progressive web apps,
  • AMP,
  • web components,
  • pattern libraries and design systems.

Those are topics I actually do have some experience with. Lots of the attendees had heard of these things and were really interested in finding out more about them, but they hadn’t necessarily used them yet.

And so I ended up doing a lot of the talking …which wasn’t the plan at all! That was just the way things worked out. I was more than happy to share my opinions on those topics, but it was a shame that I ended up monopolising the discussion. I felt for everyone having to listen to me ramble on.

Still, by the end of the day we had covered quite a few topics. Better yet, we had a good framework for categorising and evaluating web technologies. The specific technologies we covered were interesting enough, but the general approach provided the lasting value.

All in all, a great day with a great group of people.

Evaluating Technology

I’m already looking forward to running this workshop again. If you think it would be valuable for your company, get in touch.

Tuesday, May 23rd, 2017

Going offline at Indie Web Camp Düsseldorf

I’ve just come back from a ten-day trip to Germany. The trip kicked off with Indie Web Camp Düsseldorf over the course of a weekend.

IndieWebCamp Düsseldorf 2017

Once again the wonderful people at Sipgate hosted us in their beautiful building, and once again Aaron and I helped facilitate the two days.

IndieWebCamp Düsseldorf 2017

Saturday was the BarCamp-like discussion day. Plenty of interesting topics were covered. I led a session on service workers, and that’s also what I decided to work on for the second day—that’s when the talking is done and we get down to making.

IndieWebCamp Düsseldorf 2017

I like what Ethan is doing on his offline page. He shows a list of pages that have been cached, but instead of just listing URLs, he shows a title and description for each page.

I’ve already got a separate cache for pages that gets added to as the user browses around my site. I needed to figure out a way to store the metadata for those pages so that I could then display it on the offline page. I came up with a workable solution, and interestingly, it involved no changes to the service worker script at all.

When you visit any blog post, I put metadata about the page into localStorage (after first checking that there’s an active service worker):

if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  window.addEventListener('load', function() {
    // Metadata about this page, output by the server.
    var data = {
      "title": "A minority report on artificial intelligence",
      "description": "Revisiting Spielberg’s films after a decade and a half.",
      "published": "May 7th, 2017",
      "timestamp": "1494171049"
    };
    // Store it, keyed by this page’s URL.
    localStorage.setItem(
      window.location.href,
      JSON.stringify(data)
    );
  });
}

In my case, I’m outputting the metadata from the server, but you could just as easily grab some from the DOM like this:

var data = {
  "title": document.querySelector("title").innerText,
  "description": document.querySelector("meta[name='description']").getAttribute("content")
};

Meanwhile in my service worker, when you visit that same page, it gets added to a cache called “pages”. Both localStorage and the cache API are using URLs as keys. I take advantage of that on my offline page.

The nice thing about writing JavaScript on my offline page is that I know the page will only be seen by modern browsers that support service workers, so I can use all sorts of fancy features from ES6, or whatever we’re calling it now.

I start by looping through the keys of the “pages” cache (that’s right—the cache API isn’t just for service workers; you can access it from any script). Then I check to see if there is a corresponding localStorage key with the same string (a URL). If there is, I pull the metadata out of local storage and add it to an array called browsingHistory:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then( keys => {
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });

Then I sort the list of pages in reverse chronological order:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I loop through each page in the browsing history list and construct a link to each URL, complete with title and description:

let markup = '';
browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

Finally I dump the constructed markup into a waiting div in the page with an ID of “history”:

let container = document.getElementById('history');
container.insertAdjacentHTML('beforeend', markup);

All those steps need to be wrapped inside the then clause attached to caches.open("pages") because the cache API is asynchronous.
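
Putting all those pieces together inside that then clause, the finished script looks something like this:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then( keys => {
    // Gather metadata for each cached page.
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });
    // Sort in reverse chronological order.
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    // Construct a link for each page, with title and description.
    let markup = '';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    // Dump the markup into the waiting div.
    let container = document.getElementById('history');
    container.insertAdjacentHTML('beforeend', markup);
  });
});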

There you have it. Now if you’re browsing adactio.com and your network connection drops (or my server goes offline), you can choose from a list of pages you’ve previously visited.

The current situation isn’t ideal though. I’ve got a clean-up operation in my service worker to limit the number of items stored in my “pages” cache. The cache never gets bigger than 35 items. But there’s no corresponding clean-up of metadata stored in localStorage. So there could be a lot more bits of metadata in local storage than there are pages in the cache. It’s not harmful, but it’s a bit wasteful.
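
One option would be a clean-up that runs in the page, pruning any stored metadata whose URL is no longer in the cache. Here’s a hypothetical sketch (it assumes every metadata key is a URL, which is true on my site):

caches.open('pages')
.then( cache => {
  cache.keys()
  .then( keys => {
    // The URLs currently in the “pages” cache.
    const cached = keys.map( request => request.url );
    // Remove metadata for any page that’s no longer cached.
    Object.keys(localStorage).forEach( key => {
      if (key.startsWith('https://') && !cached.includes(key)) {
        localStorage.removeItem(key);
      }
    });
  });
});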

I can’t do a clean-up of localStorage from my service worker because service workers can’t access localStorage. There’s a very good reason for that: the localStorage API is synchronous, and everything that happens in a service worker needs to be asynchronous.

Service workers can access indexedDB: it’s asynchronous. I could use indexedDB instead of localStorage, but I’m not a masochist. My best bet would be to use the localForage library, which wraps indexedDB in the simple syntax of localStorage.
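
With localForage, the storage code would look almost identical to what I’ve already got, just promise-based. A sketch, assuming the library is loaded:

// Storing: like localStorage.setItem, but asynchronous (and objects
// can be stored directly, so no JSON.stringify is needed).
localforage.setItem(window.location.href, data)
.then( () => {
  console.log('Metadata stored.');
});

// Retrieving: like localStorage.getItem, but it returns a promise.
localforage.getItem(request.url)
.then( data => {
  if (data) {
    data['url'] = request.url;
    browsingHistory.push(data);
  }
});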

Maybe I’ll do that at the next Homebrew Website Club here in Brighton.

Monday, May 22nd, 2017

Sponsoring Patterns Day

It didn’t take long for Patterns Day to sell out (in the sense of the tickets all being sold; not in the sense of going mainstream and selling out to The Man).

I’m very pleased about the ticket situation. It certainly makes my life easier. Now I can concentrate on the logistics for the day, without having to worry about trying to flog tickets AKA marketing.

But I also feel bad. Some people who really, really wanted to come weren’t able to get tickets in time. This is usually because they work at a company where they have to get clearance for the time off, and for the cost of the ticket. By the time the word came down from on high that they’d got the green light, the tickets were already gone. That’s a real shame.

There is, however, a glimmer of hope on the horizon. There is one last chance to get tickets for Patterns Day, and that’s through sponsorship.

Here’s the deal: if I can get some things sponsored (like recordings of the talks, tea and coffee for the day, or an after-party), I can offer a few tickets in return. I can also offer your logo on the Patterns Day website, your logo on the slide between talks, and a shout-out on stage. But that’s pretty much it. I can’t offer a physical stand at the event—there just isn’t enough room. And I certainly can’t offer you a list of attendee details for your marketing list—that’s just wrong.

In order of priority, here’s what I would love to get sponsored, and here’s what I can offer in return:

  1. £2000: Sponsoring video recordings of the talks—4 tickets. This is probably the best marketing opportunity for your company; we can slap your logo at the start and end of each video when they go online.
  2. £2000: Sponsoring tea and coffee for attendees for the day—4 tickets. This is a fixed price, set by the venue.
  3. £2000+: Sponsoring an after-party near the conference—4 tickets. Ideally you’d take care of booking a venue for this, and you can go crazy decking it out with your branding. Two pubs right across from the conference venue have upstairs rooms you can book: The Joker, and The Hare And Hounds.

There you have it. There’s no room for negotiation, I’m afraid, but I think they’re pretty good deals. Remember, by sponsoring Patterns Day you’ll also have my undying gratitude, and the goodwill of all my peers coming to this event.

Reckon you can convince your marketing department? Drop me a line, let me know which sponsorship option you’d like to snap up, and those four tickets could be yours.

Sunday, May 7th, 2017

A minority report on artificial intelligence

Want to feel old? Steven Spielberg’s Minority Report was released fifteen years ago.

It casts a long shadow. For a decade after the film’s release, it was referenced at least once at every conference relating to human-computer interaction. Unsurprisingly, most of the focus has been on the technology in the film. The hardware and interfaces in Minority Report came out of a think tank assembled in pre-production. It provided plenty of fodder for technologists to mock and praise in subsequent years: gestural interfaces, autonomous cars, miniature drones, airpods, ubiquitous advertising and surveillance.

At the time of the film’s release, a lot of the discussion centred on picking apart the plot. The discussions had the same tone as debates about time-travel paradoxes, the kind thrown up by films like Looper and Interstellar. But Minority Report isn’t a film about time travel, it’s a film about prediction.

Or rather, the plot is about prediction. The film—like so many great works of cinema—is about seeing. It’s packed with images of eyes, visions, fragments, and reflections.

The theme of prediction was rarely referenced by technologists in the subsequent years. After all, that aspect of the story—as opposed to the gadgets, gizmos, and interfaces—was one rooted in a fantastical conceit; the idea of people with precognitive abilities.

But if you replace that human element with machines, the central conceit starts to look all too plausible. It’s suggested right there in the film:

It helps not to think of them as human.

To which the response is:

No, they’re so much more than that.

Suppose that Agatha, Arthur, and Dashiell weren’t people in a flotation tank, but banks of servers packed with neural nets: the kinds of machines that are already making predictions on trading stocks and shares, traffic flows, mortgage applications …and, yes, crime.

Precogs are pattern recognition filters, that’s all.

Rewatching Minority Report now, it holds up very well indeed. Apart from the misstep of the final ten minutes, it’s a fast-paced twisty noir thriller. For all the attention to detail in its world-building and technology, the idea that may yet prove to be most prescient is the concept of Precrime, introduced in the original Philip K. Dick short story, The Minority Report.

Minority Report works today as a commentary on artificial intelligence …which is ironic given that Spielberg directed a film one year earlier ostensibly about A.I. In truth, that film has little to say about technology …but much to say about humanity.

Like Minority Report, A.I. was very loosely based on an existing short story: Super-Toys Last All Summer Long by Brian Aldiss. It’s a perfectly-crafted short story that is deeply, almost unbearably, sad.

When I had the great privilege of interviewing Brian Aldiss, I tried to convey how much the story affected me.

Jeremy: …the short story is so sad, there’s such an incredible sadness to it that…

Brian: Well it’s psychological, that’s why. But I didn’t think it works as a movie; sadly, I have to say.

At the time of its release, the general consensus was that A.I. was a mess. It’s true. The film is a mess, but I think that, like Minority Report, it’s worth revisiting.

Watching now, A.I. feels like a horror film to me. The horror comes not—as we first suspect—from the artificial intelligence. The horror comes from the humans. I don’t mean the cruelty of the flesh fairs. I’m talking about the cruelty of Monica, who activates David’s unconditional love only to reject it (watching now, both scenes—the activation and the rejection—are equally horrific). Then there’s the cruelty of the people who created an artificial person capable of deep, never-ending love, without considering the implications.

There is no robot uprising in the film. The machines want only to fulfil their purpose. But by the end of the film, the human race is gone and the descendants of the machines remain. Based on the conduct of humanity that we’re shown, it’s hard to mourn our species’ extinction. For a film that was panned for being overly sentimental, it is a thoroughly bleak assessment of what makes us human.

The question of what makes us human underpins A.I., Minority Report, and the short stories that spawned them. With distance, it gets easier to brush aside the technological trappings and see the bigger questions beneath. As Al Robertson writes, it’s about leaving the future behind:

SF’s most enduring works don’t live on because they accurately predict tomorrow. In fact, technologically speaking they’re very often wrong about it. They stay readable because they think about what change does to people and how we cope with it.

Friday, May 5th, 2017

Patterns Day speakers

Ticket sales for Patterns Day are going quite, quite briskly. If you’d like to come along, but you don’t yet have a ticket, you might want to remedy that. Especially when you hear about who else is going to be speaking…

Sareh Heidari works at the BBC building websites for a global audience, in as many as twenty different languages. If you want to know about strategies for using CSS at scale, you definitely want to hear this talk. She just stepped off stage at the excellent CSSconf EU in Berlin, and I’m so happy that Sareh’s coming to Brighton!

Patterns Day isn’t the first conference about design systems and pattern libraries on the web. That honour goes to the Clarity conference, organised by the brilliant Jina Anne. I was gutted I couldn’t make it to Clarity last year. By all accounts, it was excellent. When I started to form the vague idea of putting on an event here in the UK, I immediately contacted Jina to make sure she was okay with it—I didn’t want to step on her toes. Not only was she okay with it, but she really wanted to come along to attend. Well, never mind attending, I said, how about speaking?

I couldn’t be happier that Jina agreed to speak. She has had such a huge impact on the world of pattern libraries through her work with the Lightning Design System, Clarity, and the Design Systems Slack channel.

The line-up is now complete. Looking at the speakers, I find myself grinning from ear to ear—it’s going to be an honour to introduce each and every one of them.

This is going to be such an excellent day of fun and knowledge. I can’t wait for June 30th!

Tuesday, May 2nd, 2017

Styling the Patterns Day site

Once I had a design direction for the Patterns Day site, I started combining my marked-up content with some CSS. Ironically for an event that’s all about maintainability and reusability, I wrote the styles for this one-page site with no mind for future use. I treated the page as a one-shot document. I even used ID selectors—gasp! (the IDs were in the HTML anyway as fragment identifiers).

The truth is I didn’t have much of a plan. I just started hacking away in a style element in the head of the document, playing around with colour, typography, and layout.

I started with the small-screen styles. That wasn’t a conscious decision so much as just the way I do things automatically now. When it came time to add some layout for wider viewports, I used a sprinkling of old-fashioned display: inline-block so that things looked so-so. I knew I wanted to play around with Grid layout so the inline-block styles were there as fallback for non-supporting browsers. Once things looked good enough, the fun really started.

I was building the site while I was in Seattle for An Event Apart. CSS Grid layout was definitely a hot topic there. Best of all, I was surrounded by experts: Jen, Rachel, and Eric. It was the perfect environment for me to dip my toes into the waters of grid.

Jen was very patient in talking me through the concepts, syntax, and tools for using CSS grids. Top tip: open Firefox’s inspector, select the element with the display:grid declaration, and click the “waffle” icon—instant grid overlay!

For the header of the Patterns Day site, I started by using named areas. That’s the ASCII-art approach. I got my head around it and it worked okay, but it didn’t give me quite the precision I wanted. I switched over to using explicit grid-row and grid-column declarations.
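
To give a flavour of the difference, here’s a simplified sketch. The track sizes and names are made up for illustration; they’re not the actual Patterns Day styles:

/* Option one: named areas, the ASCII-art approach. */
.header {
  display: grid;
  grid-template-columns: 1fr 2fr;
  grid-template-areas:
    "logo strapline"
    "logo details";
}
.logo {
  grid-area: logo;
}

/* Option two: explicit placement, for more precision. */
.logo {
  grid-row: 1 / 3;
  grid-column: 1 / 2;
}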

It’s definitely a new way of thinking about layout: first you define the grid, then you place the items on it (rather than previous CSS layout systems where each element interacted with the elements before and after). It was fun to move things around and not have to worry about the source order of the elements …as long as they were direct children of the element with display:grid applied.

Without any support for sub-grids, I ended up having to nest two separate grids within one another. The logo is a grid parent, which is inside the header, also a grid parent. I managed to get things to line up okay, but I think this might be a good use case for sub-grids.

The logo grid threw up some interesting challenges. I wanted each letter of the words “Patterns Day” to be styleable, but CSS doesn’t give us any way to target individual letters other than :first-letter. I wrapped each letter in a b element, made sure that they were all wrapped in an element with an aria-hidden attribute (so that the letters wouldn’t be spelled out), and then wrapped that in an element with an aria-label of “Patterns Day.” Now I could target those b elements.
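
In simplified form, the markup looks something like this:

<!-- Each letter gets a styleable b element; assistive technology
     gets the aria-label and ignores the individual letters. -->
<div aria-label="Patterns Day">
  <span aria-hidden="true">
    <b>P</b><b>a</b><b>t</b><b>t</b><b>e</b><b>r</b><b>n</b><b>s</b>
    <b>D</b><b>a</b><b>y</b>
  </span>
</div>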

For a while, I also had a br element (between “Patterns” and “Day”). That created some interesting side effects. If a br element becomes a grid item, it starts to behave very oddly: you can apply certain styles but not others. Jen and Eric then started to test other interesting elements, like hr. There was much funkiness and gnashing of specs.

It was a total nerdfest, and I loved every minute of it. This is definitely the most excitement I’ve felt around CSS for a while. It feels like a renaissance of zen gardens and layout reservoirs (kids, ask your parents).

After a couple of days playing around with grid, I had the Patterns Day site looking decent enough to launch. I dabbled with some other fun CSS stuff in there too, like gratuitous clip paths and filters when hovering over the speaker images, and applying shape-outside with an image mask.

Go ahead and view source on the Patterns Day page if you want—I ended up keeping all the CSS in the head of the document. That turned out to be pretty good for performance …for first-time visits anyway. But after launching the site, I couldn’t resist applying some more performance tweaks.