Beyond

After a fun and productive Indie Web Camp, I stuck around Düsseldorf for Beyond Tellerrand. I love this event. I’ve spoken at it quite a few times, but this year it was nice to be there as an attendee. It’s simultaneously a chance to reconnect with old friends I haven’t seen in a while, and an opportunity to meet lovely new people. There was plenty of both this year.

I think this might have been the best Beyond Tellerrand yet, and that’s saying something. It’s not just that the talks were really good—there was also a wonderful atmosphere.

Marc somehow manages to curate a line-up that’s equal parts creativity and code; design and development. It shouldn’t work, but it does. I love the fact that he had a legend of the industry like David Carson on the same stage as a first-time speaker like Dorobot …and the crowd loved ‘em equally!

During the event, I found out that I had a small part to play in the creation of the line-up…

Three years ago, I linked to a video of a talk by Mike Hill:

A terrific analysis of industrial design in film and games …featuring a scene-setting opening that delineates the difference between pleasure and happiness.

It’s a talk about chairs in Jodie Foster films. Seriously. It’s fantastic!

Marc saw my link, watched the video, and decided he wanted to get Mike Hill to speak at Beyond Tellerrand. After failing to get a response by email, Marc managed to corner Mike at an event in Amsterdam and get him on this year’s line-up.

Mike gave a talk called The Power of Metaphor and it’s absolutely brilliant. It covers the monomyth (the hero’s journey) and Jungian archetypes, illustrated with examples from Star Wars, The Dark Knight, and Jurassic Park:

Under the surface of their most celebrated films lies a hidden architecture that operates on an unconscious level. This talk is designed to illuminate the techniques that great storytellers use to engage a global audience on a deep and meaningful level through psychological metaphor.

The videos from Beyond Tellerrand are already online so you can watch the talk now.

Mike’s talk was back-to-back with a talk from Carolyn Stransky called Humanising Your Documentation:

In this talk, we’ll discuss how the language we use affects our users and the first steps towards writing accessible, approachable and use case-driven documentation.

While the talk was ostensibly about documentation, I found that it was packed full of good advice for writing well in general.

I had a thought. What if you mashed up these two talks? What if you wrote documentation through the lens of the hero’s journey?

Think about it. When someone arrives at your documentation, they’ve crossed the threshold to the underworld. They are in the cave, facing a dragon. You are their guide, their mentor, their Obi-Wan Kenobi. You can help them conquer their demons and return to the familiar world, changed by their journey.

Too much?

Replies

Last week was a bit of an event whirlwind. In the space of seven days I was at Indie Web Camp, Beyond Tellerrand, and Accessibility Club in Düsseldorf, followed by a train ride to Utrecht for Frontend United. Phew!

Indie Web Camp Düsseldorf was—as always—excellent. Once again, Sipgate generously gave us the use of their lovely, lovely space for the weekend. We had one day of really thought-provoking discussions, followed by a day of heads-down hacking and making.

I decided it was time for me to finally own my replies. For a while now, I’ve been posting notes on my own site and syndicating to Twitter. But whenever I replied to someone else’s tweet, I did it from Twitter. I wanted to change that.

From a coding point of view, it wasn’t all that tricky. The real challenges were to do with the interface. I needed to add another field for the URL I’m replying to …but I didn’t want my nice and minimal posting interface to get too cluttered. I ended up putting the new form field inside a details element with a summary of “Reply to” so that the form field would be hidden by default, and toggled open by hitting that “Reply to” text:

<details>
    <summary>
        <label for="replyto">Reply to</label>
    </summary>
    <input type="url" id="replyto" name="replyto">
</details>

I sent my first test reply to a post on Aaron’s website. Aaron was sitting next to me at the time.

Once that was all working, I sent my first reply to a tweet. It was a response to a tweet from Tantek. Tantek was also sitting next to me at the time.

I spent most of the day getting that Twitter syndication to work. I had something to demo, but I foolishly decided to risk it all by attempting to create a bookmarklet so that I could post directly from a tweet page (instead of hopping back to my own site in a different tab). By cannibalising the existing bookmarklet I use for posting links, I just about managed to get it working in time for the end of day demos.
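For illustration, the bones of such a bookmarklet might look something like this—a hypothetical sketch, where the posting URL and the replyto query parameter are placeholders rather than my actual endpoint:

javascript:(function () {
    // Grab the URL of the tweet (or whatever page) is currently being viewed…
    var replyTo = encodeURIComponent(window.location.href);
    // …and open the posting interface with that URL pre-filled.
    window.open('https://example.com/notes/new?replyto=' + replyTo);
})();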

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

Then again, I kind of like how wonderfully random and out-of-context they look. You can browse through all my replies so far.

I’m glad I got this set up. Now when Andy posts stuff on Twitter, I’m custodian of my responses:

@AndyBudd: Who are your current “Design Heroes”?

adactio.com: I would say Falcor from Neverending Story, the big flying dog.

Timing out

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging—and probably more common—is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})
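For reference, that custom offline page gets cached ahead of time during the install event. Here’s a simplified sketch of what that could look like—the cache name and the list of files are placeholders, not my actual script:

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        // Open (or create) a cache and store the offline page in it,
        // along with anything else that page needs.
        caches.open('static')
        .then( staticCache => {
            return staticCache.addAll([
                '/offline',
                '/path/to/stylesheet.css'
            ]);
        })
    );
});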

Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from the network if the file can’t be found in any cache:

caches.match(request)
.then( responseFromCache => {
    return responseFromCache || fetch(request);
})

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            )
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    )
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    })
});

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.

So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. You can set up two parameters: a success condition and a failure condition. If the success condition is executed, then we say the promise has resolved. If the failure condition is executed, then the promise rejects.
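If you’ve never written one before, the basic shape looks something like this (a generic illustration, nothing to do with my service worker):

// A promise takes a function with two parameters: call the first one
// to resolve (success) or the second one to reject (failure).
const coinToss = new Promise( (resolve, reject) => {
    if (Math.random() > 0.5) {
        resolve('heads');
    } else {
        reject('tails');
    }
});

coinToss
.then( result => console.log('Resolved with', result) )
.catch( error => console.log('Rejected with', error) );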

In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    })
}, 3000);

If a request takes three seconds (3000 milliseconds), then that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, then the wait continues for the network.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    })
                }, 3000);
                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on—a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.

I think that should be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.
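One rough idea for the second half of that—making sure requests from that interface element always go out to the network—might be to have the element reload the page with a made-up query string that the service worker looks for. Just a sketch, mind; the fresh parameter is entirely hypothetical:

// At the top of the fetch handler: if the request came from the
// "check for a fresher version" link, skip the cache and time-out
// logic entirely and go straight to the network.
const url = new URL(request.url);
if (url.searchParams.has('fresh')) {
    fetchEvent.respondWith(fetch(request));
    return;
}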

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

Frameworking

There are many reasons to use a JavaScript framework like Vue, Angular, or React. Last year, Nicole asked for some of those reasons. Her question received many, many answers from people pointing out the benefits of using a framework. Interestingly, though, not a single one of those benefits was for end users.

(Mind you, if the framework is being used on the server to pre-render pages, then it’s a moot point—in that situation, it makes no difference to the end user whether you use a framework or not.)

Hidde recently tried using a client-side JavaScript framework for the first time and documented the process:

In the last few months I built my very first framework-based front-end, in Vue.js. I complemented it with a router, a store and a GraphQL library, in order to have, respectively, multiple (virtual) pages, globally shared data and a smart way to load new data in my templates.

It’s a very even-handed write-up. I highly recommend reading it. He describes the pros and cons of using a framework and using vanilla JavaScript:

I am glad I tried a framework and found its features were extremely helpful in creating a consistent interface for my users. My hope is though, that I won’t forget about vanilla. It’s perfectly valid to build a website with no or few dependencies.

Speaking of vanilla JavaScript… the blogging machine that is Chris Ferdinandi also wrote a comparison post recently, asking Why do people choose frameworks over vanilla JS? Again, it’s very even-handed and well worth a read. He readily concedes that if you’re working at scale, a framework is almost certainly a good idea:

If you’re building a large scale application (literally Facebook, Twitter, QuickBooks scale), the performance wins of a framework make the overhead worth it.

Alas, I’ve seen many, many framework-driven sites that are most definitely not operating at that scale. Trys speaks the honest truth here:

We kid ourselves into thinking we’re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain’t much more to it than that.

Just the other day, I saw a new site launch that was mostly a marketing site—the home page weighed over five megabytes, two megabytes of which were taken up with JavaScript, and the whole thing required JavaScript to render text to the screen (I’m not going to link to it because I don’t want to engage in any kind of public shaming and finger-wagging).

I worry that all the perfectly valid (developer experience) reasons for using a framework are outweighing the more important (user experience) reasons for avoiding shipping your dependencies to end users. Like Alex says:

If your conception of “DX” doesn’t include it, or isn’t subservient to the user experience, rethink.

And yes, I am going to take this opportunity to link once again to Alex’s article The “Developer Experience” Bait-and-Switch. Please read it if you haven’t already. Please re-read it if you have.

Anyway, my main reason for writing this is to point you to thoughtful posts like Hidde’s and Chris’s. I think it’s great to see people thoughtfully weighing up the pros and cons of choosing any particular technology—I’m a bit obsessed with the topic of evaluating technology.

If you’re weighing up the pros and cons of using, say, a particular JavaScript library or framework, that’s wonderful. My worry is that there are people working in front-end development who aren’t putting that level of thought into their technology choices, but are instead using a particular framework because it’s what they’re used to.

To quote Grace Hopper:

The most dangerous phrase in the language is, ‘We’ve always done it this way.’

Inlining SVG background images in CSS with custom properties

Here’s a tiny lesson that I picked up from Trys that I’d like to share with you…

I was working on some upcoming changes to the Clearleft site recently. One particular component needed some SVG background images. I decided I’d inline the SVGs in the CSS to avoid extra network requests. It’s pretty straightforward:

.myComponent {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

You can basically paste your SVG in there, although you need to do a little bit of URL encoding: I found that converting # to %23 was enough for my needs.

But here’s the thing. My component had some variations. One of the variations had a second background image in addition to the first. There’s no way in CSS to add an extra background image without re-declaring the entire background-image property:

.myComponent--variant {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>'), url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

So now I’ve got the same SVG source inlined in two places. That negates any performance benefits I was getting from inlining in the first place.

That’s where Trys comes in. He shared a nifty technique he uses in this exact situation: put the SVG source into a custom property!

:root {
    --firstSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
    --secondSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

Then you can reference those in your background-image declarations:

.myComponent {
    background-image: var(--firstSVG);
}
.myComponent--variant {
    background-image: var(--firstSVG), var(--secondSVG);
}

Brilliant! Not only does this remove any duplication of the SVG source, it also makes your CSS nice and readable: no more big blobs of SVG source code in the middle of your style sheet.

You might be wondering what will happen in older browsers that don’t support CSS custom properties (that would be Internet Explorer 11). Those browsers won’t get any background image. Which is fine. It’s a background image. Therefore it’s decoration. If it were an important image, it wouldn’t be in the background.

Progressive enhancement, innit?

Three more Patterns Day speakers

There are 73 days to go until Patterns Day. Do you have your ticket yet?

Perhaps you’ve been holding out for some more information on the line-up. Well, I’m more than happy to share the latest news with you—today there are three new speakers on the bill…

Emil Björklund, the technical director at the Malmö outpost of Swedish agency inUse, is a super-smart person I’ve known for many years. Last year, I saw him on stage in his home town at the Confront conference sharing some of his ideas on design systems. He blew my mind! I told him there and then that he had to come to Brighton and expand on those thoughts some more. This is going to be an unmissable big-picture talk in the style of Paul’s superb talk last year.

Speaking of superb talks from last year, Alla Kholmatova is back! Her closing talk from the first Patterns Day was so fantastic that I just had to have her come back. Oh, and since then, her brilliant book on Design Systems came out. She’s going to have a lot to share!

The one thing that I felt was missing from the first Patterns Day was a focus on inclusive design. I’m remedying that this time. Heydon Pickering, creator of the Inclusive Components website—and the accompanying book—is speaking at Patterns Day. I’m very excited about this. Given that Heydon has a habit of casually dropping knowledge bombs like the lobotomised owl selector and the flexbox holy albatross, I can’t wait to see what he unleashes on stage in Brighton on June 28th.

Emil, Alla, and Heydon

Be there or be square.

Tickets for Patterns Day are still available, but you probably don’t want to leave it ‘till the last minute to get yours. Just sayin’.

The current—still incomplete—line-up comprises:

That isn’t even the full roster of speakers, and it’s already an unmissable event!

I very much hope you’ll join me in the beautiful Duke of York’s cinema on June 28th for a great day of design system nerdery.

Design perception

Last week I wrote a post called Dev perception:

I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that any other industry would value more highly than Uber for dogs.

The sentiment I expressed resonated with a lot of people. Like, a lot of people.

I was talking specifically about web development and technology choices, but I think the broader point applies to other disciplines too.

Last month I had the great pleasure of moderating two panels on design leadership at an event in London (I love moderating panels, and I think I’m pretty darn good at it too). I noticed that the panels comprised representatives from two different kinds of companies.

There were the digital-first companies like Spotify, Deliveroo, and Bulb—companies forged in the fires of start-up culture. Then there were the older companies that had to make the move to digital (transform, if you will). I decided to get a show of hands from the audience to see which kind of company most people were from. The overwhelming majority of attendees were from more old-school companies.

Just as most of the ink spilled in the web development world goes towards the newest frameworks and toolchains, I feel like the majority of coverage in the design world is spent on the latest outputs from digital-first companies like AirBnB, Uber, Slack, etc.

The end result is the same. A typical developer or designer is left feeling that they—and their company—are behind the curve. It’s like they’re only seeing the Instagram version of their industry, all airbrushed and filtered, and they’re comparing that to their day-to-day work. That can’t be healthy.

Personally, I’d love to hear stories from the trenches of more representative, traditional companies. I also think that would help get an important message to people working in similar companies:

You are not alone!

Split

When I talk about evaluating technology for front-end development, I like to draw a distinction between two categories of technology.

On the one hand, you’ve got the raw materials of the web: HTML, CSS, and JavaScript. This is what users will ultimately interact with.

On the other hand, you’ve got all the tools and technologies that help you produce the HTML, CSS, and JavaScript: pre-processors, post-processors, transpilers, bundlers, and other build tools.

Personally, I’m much more interested and excited by the materials than I am by the tools. But I think it’s right and proper that other developers are excited by the tools. A good balance of both is probably the healthiest mix.

I’m never sure what to call these two categories. Maybe the materials are the “external” technologies, because they’re what users will interact with. Whereas all the other technologies—that mostly live on a developer’s machine—are the “internal” technologies.

Another nice phrase is something I heard during Chris’s talk at An Event Apart in Seattle, when he quoted Brad, who talked about the front of the front end and the back of the front end.

I’m definitely more of a front-of-the-front-end kind of developer. I have opinions on the quality of the materials that get served up to users; the output should be accessible and performant. But I don’t particularly care about the tools that produced those materials on the back of the front end. Use whatever works for you (or whatever works for your team).

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time. Like I said, from a user’s point of view, it’s irrelevant what text editor or version control system you use.

Now, you could make the argument that anything that is good for developer convenience is automatically good for user experience because faster, more efficient development should result in better output. While that’s true in theory, I highly recommend Alex’s post, The “Developer Experience” Bait-and-Switch.

Where it gets interesting is when a technology that’s designed for developer convenience is made out of the very materials being delivered to users. For example, a CSS framework like Bootstrap is made of CSS. That’s different to a tool like Sass which outputs CSS. Whether or not a developer chooses to use Sass is irrelevant to the user—the final output will be CSS either way. But if a developer chooses to use a CSS framework, that decision has a direct impact on the user experience. The user must download the framework in order for the developer to get the benefit.

So whereas Sass sits at the back of the front end—where I don’t care what you use—Bootstrap sits at the front of the front end. For tools like that, I don’t think saying “use whatever works for you” is good enough. It’s got to be weighed against the cost to the user.

Historically, it’s been a similar story with JavaScript libraries. They’re written in JavaScript, and so they’re going to be executed in the browser. If a developer wanted to use jQuery to make their life easier, the user paid the price in downloading the jQuery library.

But I’ve noticed a welcome change with some of the bigger JavaScript frameworks. Whereas the initial messaging around frameworks like React touted the benefits of state management and the virtual DOM, I feel like that’s not as prevalent now. You’re much more likely to hear people—quite rightly—talk about the benefits of modularity and componentisation. If you combine that with the rise of Node—which means that JavaScript is no longer confined to the browser—then these frameworks can move from the front of the front end to the back of the front end.

We’ve certainly seen that at Clearleft. We’ve worked on multiple React projects, but in every case, the output was server-rendered. Developers get the benefit of working with a tool that helps them. Users don’t pay the price.

For me, this question of whether a framework will be used on the client side or the server side is crucial.

Let me tell you about a Clearleft project that sticks in my mind. We were working with a big international client on a product that was going to be rolled out to students and teachers in developing countries. This was right up my alley! We did plenty of research into network conditions and typical device usage. That then informed a tight performance budget. Every design decision—from web fonts to images—was informed by that performance budget. We were producing lean, mean markup, CSS, and JavaScript. But we weren’t the ones implementing the final site. That was being done by the client’s offshore software team, and they insisted on using React. “That’s okay”, I thought. “React can be used server-side so we can still output just what’s needed, right?” Alas, no. These developers did everything client side. When the final site launched, the log-in screen alone required megabytes of JavaScript just to render a form. It was, in my opinion, entirely unfit for purpose. It still pains me when I think about it.

That was a few years ago. I think that these days it has become a lot easier to make the decision to use a framework on the back of the front end. Like I said, that’s certainly been the case on recent Clearleft projects that involved React or Vue.

It surprises me, then, when I see the question of server rendering or client rendering treated almost like an implementation detail. It might be an implementation detail from a developer’s perspective, but it’s a key decision for the user experience. The performance cost of putting your entire tech stack into the browser can be enormous.

Alex Sanders from the development team at The Guardian published a post recently called Revisiting the rendering tier. In it, he describes how they’re moving to React. Now, if this were a move to client-rendered React, that would make a big impact on the user experience. The thing is, I couldn’t tell from the article whether React was going to be used in the browser or on the server. The article talks about “rendering”—which is something that browsers do—and “the DOM”—which is something that only exists in browsers.

So I asked. It turns out that this plan is very much about generating HTML and CSS on the server before sending it to the browser. Excellent!

With that question answered, I’m cool with whatever they choose to use. In this case, they’re choosing to use CSS-in-JS (although, to be pedantic, there’s no C anymore so technically it’s SS-in-JS). As long as the “JS” part is JavaScript on a server, then it makes no difference to the end user, and therefore no difference to me. Not my circus, not my monkeys. For users, the end result is the same whether styling is applied via a selector in an external stylesheet or, for example, via an inline style declaration (and in some situations, a server-rendered CSS-in-JS solution might be better for performance). And so, as a user-centred developer, this is something that I don’t need to care about.

Except…

I have misgivings. But just to be clear, these misgivings have nothing to do with users. My misgivings are entirely to do with another group of people: the people who make websites.

There’s a second-order effect. By making React—or even JavaScript in general—a requirement for styling something on a web page, the barrier to entry is raised.

At least, I think that the barrier to entry is raised. I completely acknowledge that this is a subjective judgement. In fact, the reason why a team might decide to make JavaScript a requirement for participation might well be because they believe it makes it easier for people to participate. Let me explain…

It wasn’t that long ago that devs coming from a Computer Science background were deriding CSS for its simplicity, complaining that “it’s broken” and turning their noses up at it. That rhetoric, thankfully, is waning. Nowadays they’re far more likely to acknowledge that CSS might be simple, but it isn’t easy. Concepts like the cascade and specificity are real head-scratchers, and any prior knowledge from imperative programming languages won’t help you in this declarative world—all your hard-won experience and know-how isn’t fungible. Instead, it seems as though all this cascading and specificity is butchering the modularity of your nicely isolated components.

It’s no surprise that programmers with this kind of background would treat CSS as damage and find ways to route around it. The many flavours of CSS-in-JS are testament to this. From a programmer’s point of view, this solution has made things easier. Best of all, as long as it’s being done on the server, there’s no penalty for end users. But now the price is paid in the diversity of your team. In order to participate, a Computer Science programming mindset is now pretty much a requirement. For someone coming from a more declarative background—with really good HTML and CSS skills—everything suddenly seems needlessly complex. And as Tantek observed:

Complexity reinforces privilege.

The result is a form of gatekeeping. I don’t think it’s intentional. I don’t think it’s malicious. It’s being done with the best of intentions, in pursuit of efficiency and productivity. But these code decisions are reflected in hiring practices that exclude people with different but equally valuable skills and perspectives.

Rachel describes HTML, CSS and our vanishing industry entry points:

If we make it so that you have to understand programming to even start, then we take something open and enabling, and place it back in the hands of those who are already privileged.

I think there’s a comparison here with toxic masculinity. Toxic masculinity is obviously terrible for women, but it’s also really shitty for men in the way it stigmatises any male behaviour that doesn’t fit its worldview. Likewise, if the only people your team is interested in hiring are traditional programmers, then those programmers are going to resent having to spend their time dealing with semantic markup, accessibility, styling, and other disciplines that they never trained in. Heydon correctly identifies this as reluctant gatekeeping:

By assuming the role of the Full Stack Developer (which is, in practice, a computer scientist who also writes HTML and CSS), one takes responsibility for all the code, in spite of its radical variance in syntax and purpose, and becomes the gatekeeper of at least some kinds of code one simply doesn’t care about writing well.

This hurts everyone. It’s bad for your team. It’s even worse for the wider development community.

Last year, I was asked “Is there a fear or professional challenge that keeps you up at night?” I responded:

My greatest fear for the web is that it becomes the domain of an elite priesthood of developers. I firmly believe that, as Tim Berners-Lee put it, “this is for everyone.” And I don’t just mean it’s for everyone to use—I believe it’s for everyone to make as well. That’s why I get very worried by anything that raises the barrier to entry to web design and web development.

I’ve described a number of dichotomies here:

  • Materials vs. tools,
  • Front of the front end vs. back of the front end,
  • User experience vs. developer experience,
  • Client-side rendering vs. server-side rendering,
  • Declarative languages vs. imperative languages.

But the split that worries me the most is this:

  • The people who make the web vs. the people who are excluded from making the web.

Dev perception

Chris put together a terrific round-up of posts recently called Simple & Boring. It links off to a number of great articles on the topic of complexity (and simplicity) in web development.

I had linked to quite a few of the articles myself already, but one I hadn’t seen was from David DeSandro who wrote New tech gets chatter:

You don’t hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

I think that’s a very good point.

It’s relatively easy to write and speak about new technologies. You’re excited about them, and there’s probably an eager audience who can learn from what you have to say.

It’s trickier to write something insightful about a tried and trusted (perhaps even boring) technology that’s been around for a while. You could maybe write little tips and tricks, but I bet your inner critic would tell you that nobody’s interested in hearing about that old tech. It’s boring.

The result is that what’s being written about is not a reflection of what’s being widely used. And that’s okay …as long as you know that’s the case. But I worry that there’s a perception problem. Because of the outsize weighting of new and exciting technologies, a typical developer could feel that their skills are out of date and the technologies they’re using are passé …even if those technologies are actually in wide use.

I don’t know about you, but I constantly feel like I’m behind the curve because I’m not currently using TypeScript or GraphQL or React. Those are all interesting technologies, to be sure, but the time to pick any of them up is when they solve a specific problem I’m having. Learning a new technology just to mitigate a fear of missing out isn’t a scalable strategy. It’s reasonable to investigate a technology because you genuinely think it’s exciting; it’s quite another matter to feel like you must investigate a technology in order to survive. That way lies burn-out.

I find it very grounding to talk to Drew and Rachel about the people using their Perch CMS product. These are working developers, but they are far removed from the world of tools and frameworks forged in the startup world.

In a recent (excellent) article comparing the performance of Formula One websites, Jake made this observation at the end:

However, none of the teams used any of the big modern frameworks. They’re mostly Wordpress & Drupal, with a lot of jQuery. It makes me feel like I’ve been in a bubble in terms of the technologies that make up the bulk of the web.

I think this is very astute. I also think it’s completely understandable to form ideas about what matters to developers by looking at what’s being discussed on Twitter, what’s being starred on Github, what’s being spoken about at conferences, and what’s being written about on Ev’s blog. But it worries me when I see browser devrel teams focusing their efforts on what appears to be the needs of typical developers based on the amount of ink spilled and breath expelled.

I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that any other industry would value more highly than Uber for dogs.

Trys wrote a great blog post called City life, where he compares his experience of doing CMS-driven agency work with his experience working at a startup in Shoreditch:

I was chatting to one of the team about my previous role. “I built two websites a month in WordPress”.

They laughed… “WordPress! Who uses that anymore?!”

Nearly a third of the web as it turns out - but maybe not on the Silicon Roundabout.

I’m not necessarily suggesting that there should be more articles and talks about older, more established technologies. Conferences in particular are supposed to give audiences a taste of what’s coming—they can be a great way of quickly finding out what’s exciting in the world of development. But we shouldn’t feel bad if those topics don’t match our day-to-day reality.

Ultimately what matters is building something—a website, a web app, whatever—that best serves end users. If that requires a new and exciting technology, that’s great. But if it requires an old and boring technology, that’s also great. What matters here is appropriateness.

When we’re evaluating technologies for appropriateness, I hope that we will do so through the lens of what’s best for users, not what we feel compelled to use based on a gnawing sense of irrelevancy driven by the perceived popularity of newer technologies.

CSS custom properties in generated content

Cassie posted a neat tiny lesson that she’s written a reduced test case for.

Here’s the situation…

CSS custom properties are fantastic. You can drop them in just about anywhere that a property takes a value.

Here’s an example of defining a custom property for a length:

:root {
    --my-value: 1em;
}

Then I can use that anywhere I’d normally give something a length:

.my-element {
    margin-bottom: var(--my-value);
}

I went a bit overboard with custom properties on the new Patterns Day site. I used them for colour values, font stacks, and spacing. Design tokens, I guess. They really come into their own when you combine them with media queries: you can update the values of the custom properties based on screen size …without having to redefine where those properties are applied. Also, they can be updated via JavaScript so they make for a great common language between CSS and JavaScript: you can define where they’re used in your CSS and then update their values in JavaScript, perhaps in response to user interaction.
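Updating one of those values from JavaScript is a one-liner. Here’s an illustrative snippet—the property name is just an example, not one from the actual Patterns Day style sheet:

// Change the value of a custom property on the root element; any CSS
// that references var(--spacing) picks up the new value automatically.
document.documentElement.style.setProperty('--spacing', '2rem');

// You can read the current value back out too.
const spacing = getComputedStyle(document.documentElement)
    .getPropertyValue('--spacing');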

But there are a few places where you can’t use custom properties. You can’t, for example, use them as part of a media query. This won’t work:

@media all and (min-width: var(--my-value)) {
    ...
}

You also can’t use them in generated content if the value is a number. This won’t work:

:root {
    --number-value: 15;
}
.my-element::before {
    content: var(--number-value);
}

Fair enough. Generated content in CSS is kind of a strange beast. Eric delivered an entire hour-long talk at An Event Apart in Seattle on generated content.

But Cassie found a workaround if the value you want to put into that content property is numeric. The CSS counter value is a kind of generated content—the numbers that appear in front of ordered list items. And you can control the value of those numbers from CSS.

CSS counters work kind of like variables. You name them and assign values to them using the counter-reset property:

.my-element {
    counter-reset: mycounter 15;
}

You can then reference the value of mycounter in a content property—on a pseudo-element—using the counter value:

.my-element::before {
    content: counter(mycounter);
}

Cassie realised that even though you can’t pass in a custom property directly to generated content, you can pass in a custom property to the counter-reset property. So you can do this:

:root {
    --number-value: 15;
}
.my-element::before {
    counter-reset: mycounter var(--number-value);
    content: counter(mycounter);
}

In a roundabout way, this allows you to use a custom property for generated content!

I realise that the use cases are pretty narrow, but I can’t help but be impressed with the thinking behind this. Personally, I would’ve just read that generated content doesn’t accept custom properties and moved on. I would’ve given up quickly. But Cassie took a step back and found a creative pass-the-parcel solution to the problem.

I feel like this is a hack in the best sense of the word: a creatively improvised solution to a problem or limitation.

I was trying to display the numeric value stored in a CSS variable inside generated content… Turns out you can’t do that. But you can do this… codepen.io/cassie-codes/p… (not saying you should, but you could)

Other people’s weeknotes

Paul is writing weeknotes. Here’s his latest.

Amy is writing weeknotes. Here’s her latest.

Aegir is writing weeknotes. Here’s his latest.

Nat is writing weeknotes. Here’s their latest.

Alice is writing weeknotes. Here’s her latest.

Mark is writing weeknotes. Here’s his latest.

I enjoy them all.

Unsolved Problems by Beth Dean

An Event Apart in Seattle continues. It’s the afternoon of day two and Beth Dean is here to give a talk called Unsolved Problems:

Technology products are being adopted faster than ever. We’ve spent a lot of time adopting new technology, but not as much time considering the social impact of doing so. This talk looks at large scale system design in the offline world, and takes lessons from them to our online work. You’ll learn how to expand your design approach from self-contained products, to considering the broader systems in which they exist.

Fun fact: An Event Apart was the first conference that Beth attended over ten years ago.

Who recognises this guy on screen? It’s Robert Stack, the creepy host of Unsolved Mysteries. It was kind of like the X-Files. The X-Files taught Beth to be a sceptic. Imagine Beth’s surprise when her job at Facebook led her to actual conspiracies. It’s been a hard year, what with Cambridge Analytica and all.

Beth’s team is focused on how people experience ads, while the whole rest of the company is focused on ads from the opposite end. She’s the Fox Mulder of the company.

Technology today has incredible reach. In recent years, we’ve seen 1:1 harm. That’s when a product negatively affects someone directly. In their book, Eric and Sara point out that Facebook is often the first company to solve these problems.

1:many harm is another use of technology. Designing in isolation isn’t new to tech. We’ve seen 1:many harm in urban planning. Brasilia is a beautiful city that nobody wants to live in. You need messy, mixed-use spaces, not a space designed for cars. Niemeyer planned for efficiency, not reality.

Eichler buildings were supposed to be egalitarian. But everything that makes these single-story homes great places to live also makes them great targets for criminals. Isolation by intentional design leads to a less safe place to live.

One of Frank Gehry’s buildings turned into a deathtrap when it was covered with snow. And in summer, the reflective material makes it impossible to sit on one side of it. His Facebook office building has some “interesting” restroom allocation, which was planned last.

Ohio had a deer overpopulation problem. So the solution they settled on was to introduce coyotes. Now there’s a coyote problem. When coyotes breed with stray dogs, they start to get aggressive and they hunt in packs. This is the cobra effect: when the solution to your problem makes the problem worse. The British government offered a bounty for cobras in India. So people bred snakes for the bounty. So they got rid of the bounty …and then all those snakes were released into the wild.

So-called “ride sharing” apps are about getting one person from point A to point B. They’re not about making getting around easier in general.

Google traffic directions don’t factor in the effect of Google giving everyone the same traffic directions.

AirBnB drives up rent …even though it started out as a way to help people who couldn’t make rent. Sounds like cobra farming.

Automating Inequality by Virginia Eubanks is an excellent book about being dropped by health insurance. An algorithm did it. By taking broken systems and automating them, we accelerate disenfranchisement.

Then there’s Facebook. Psychological warfare is not new. Radio and television have influenced elections long before the internet. Politicians changed their language to fit the medium of radio.

The internet has removed all friction that helps us behave cooperatively. Removing friction was once our goal, but it turns out that friction is sometimes useful. The internet has turned into an outrage machine.

Solving problems in the isolation of our own products ignores the broader context of society.

The Waze map reflects cities as they are, not the way someone wishes them to be.

—Noam Bardin, CEO of Waze

From bulletin boards to today’s web, the internet has always been toxic because human nature is toxic. Maybe that’s the bigger problem to solve.

We can look to other industries…

Ideo redesigned the hospital experience. People were introduced to their entire care staff on their first visit. Sloan Kettering took a similar approach. Artwork serves as wayfinding. Every room has its own bathroom. A Chicago hospital included gardens because they improve recovery.

These hospital examples all:

  • Designed for an intended outcome.
  • Met people where they were.
  • Strengthened existing support networks.

We’ve seen some bad examples from urban planning, but there are success stories too.

A person on a $30 bicycle is as important as someone in a $30,000 car, said Enrique Peñalosa.

Copenhagen once faced awful traffic congestion. Now people cycle everywhere. It’s the fastest way to get around. The city is designed for bicycles first. People rode more when it felt safer. It’s no coincidence that Copenhagen ranks as one of the most livable cities in the world.

Scandinavian prisons use a concept called restorative justice. The staff plays badminton with the inmates. They cook together. Treat people like dirt and they will act like dirt. Treat people like people and they will act like people. Recidivism rates in Norway are now way low.

  • Design for dignity and cooperation.
  • Solve for everyone in a system.
  • Policy should reflect intended outcomes.

The de Havilland Comet was made of metal. After a few blew apart at the seams, they switched from riveted material. Airlines today develop a culture of crew resource management that encourages people to speak up.

  • Plan for every point of failure.
  • Empower everyone on a team to solve problems.
  • Adapt.

What can we do?

  • Policies affect design. We need to work more closely with policy makers.
  • Question access. Are all opinions equal? Where are computers making decisions that should involve people?
  • Forget neutrality. Technology is not neutral. Neutrality allows us to abdicate responsibility.
  • Stay a little bit paranoid. Think about what the worst case scenario might be.

Make people better curators. How might we allow people to assess the veracity of information for themselves? What if we gave people better tools to affect their overall experience, not just small customisations?

We can use what we know about people to bring out their best behaviours. We can empower people to take action instead of just outrage.

What if we designed for the good of the community instead of the success of individuals? Like the Vauban in Freiburg! It was squatted, and the city gave control to the squatters to create an eco neighbourhood with affordable housing.

We need to think about what kind of worlds we want to create. What if we made the web less like a mall and more like a public park?

These are hard problems. But we solve hard technology problems every day. We could be the first generation of builders to solve technology’s hard problems.

Slow Design for an Anxious World by Jeffrey Zeldman

I’m at An Event Apart in Seattle, ready for three days of excellence. Setting the scene with the first talk of the event is the one and only Jeffrey Zeldman. His talk is called Slow Design for an Anxious World:

Most web pages are too fast or too slow. Last year, Zeldman showed us how to create design that works faster for customers in a hurry to get things done. This year he’ll show how to create designs that deliberately slow your visitors down, helping them understand more and make better decisions.

Learn to make layouts that coax the visitor to sit back, relax, and actually absorb the content your team works so hard to create. Improve UX significantly without spending a lot or chasing the tail lights of the latest whiz-bang tech. Whether you build interactive experiences or craft editorial pages, you’ll learn how to ease your customers into the experience and build the kind of engagement you thought the web had lost forever.

I’m going to attempt to jot down the gist of it as it happens…

Jeffrey begins by saying that he’s going to slooooowly ease us into the day. Slow isn’t something that our industry prizes. Things change fast on the internet. “You’re using last year’s framework!?” Ours is a newly-emerging set of practices.

Slow is negative in our culture too. We don’t like slow movies or slow books. But some things are better slow. Wine that takes time to make is better than wine that you produce in a prison toilet in five days. Slow-brewed coffee is well-brewed coffee. Slow dancing is nice. A slow courtship is nice. And reading slowly is something enjoyable. Sometimes you need to scan information quickly, but when we really immerse ourselves in a favourite book, we comprehend it better. Hold that thought. We’re going to come to books.

Fast is generally what we’re designing for. It’s the best kind of design for customer service—for people who want to accomplish something and then get on with their lives. Last year Jeffrey gave a talk called Beyond Engagement where he said that service-oriented content must be designed for speed of relevancy. Speed of loading is important, and so is speed of relevancy—how quickly you can give people the right content.

But slow is best for comprehension. Like Mr. Rogers. When things are a little bit slower, it’s kind of easier to understand. When you’re designing for readers, s l o w i t d o w n.

How do we slow down readers? That’s what this talk is about (he told us it would be slow—he only just got to what the point of this talk is).

Let’s start with a form factor. The book. A book is a hack where the author’s brain is transmitting a signal to the reader’s brain, and the designer of the book is making that possible. Readability is more than legibility. Readability transcends legibility, enticing people to slow down and read.

This is about absorption, not conversion. We have the luxury of doing something different here. It’s a challenge.

Remember Readability? It was designed by Arc90. They mostly made software applications for arcane enterprise systems, and that stuff tends not to be public. It’s hard for an agency to get new clients when it can’t show what it does. So they decided to make some stuff that’s just for the public. Arc90 Labs was spun up to make free software for everyone.

Readability was like Instapaper. Instapaper was made by Marco Arment so that he could read articles when he was commuting on the subway. Readability aimed to do that, but to also make the content beautiful. It’s kind of like how reader mode in Safari strips away superfluous content and formats what’s left into something more readable. Safari’s reader mode was not invented by Apple. It was based on the code from Readability. The Mercury Reader plug-in for Chrome also uses Readability’s code. Jeffrey went around pointing out to companies that the very existence of things like Readability was a warning—we’re making experiences so bad that people are using software to work around them. What can we do so that people don’t have to use these tools?

Craig Mod wrote an article for A List Apart called A Simpler Page back in 2011. With tablets and phones, there isn’t one canonical presentation of content online any more. Our content is sort of amorphous. Craig talked about books and newspapers on tablets. He talked about bed, knee, and breakfast distances from the body to the content.

  1. Bed (close to face): reading a novel on your stomach, lying in bed with the iPad propped up on a pillow.
  2. Knee (medium distance from face): sitting on the couch, iPad on your knee, catching up on Instapaper.
  3. Breakfast (far from face): propped up at a comfortable angle, behind your breakfast coffee and bagel, allowing hands-free news reading.

There’s some correlation between distance and relaxation. That knee position is crucial. That’s when the reader contemplates with pleasure and concentration. They’re giving themselves the luxury of contemplation. It’s a very different feeling to getting up and going over to a computer.

So Jeffrey redesigned his own site with big, big type, and just one central column of text. He stripped away the kind of stuff that Readability and Instapaper would strip away. He gave people a reader layout. You would have to sit back to read the content. He knew he succeeded because people started complaining: “Your type is huge!” “I have to lean back just to read it!” Then he redesigned A List Apart with Mike Pick. This was subtler.

Medium came along with the same focus: big type in a single column. Then the New York Times did it, when they changed their business model to a subscription paywall. They could remove quite a bit of the superfluous content. Then the Washington Post did it, more on their tablet design than their website. The New Yorker—a very old-school magazine—also went down this route, and they’re slow to change. Big type. White space. Bold art direction. Pro Publica is a wonderful non-profit newspaper that also went this route. They stepped it up by adding one more element: art direction on big pieces.

How do these sites achieve their effect of slowing you down and calming you?

Big type. We spend a lot of our time hunched forward. Big type forces you to sit back. It’s like that first moment in a yoga workshop where you’ve got to just relax before doing anything. With big type, you can sit back, take a breath, and relax.

Hierarchy. This is classic graphic design. Clear relationships.

Minimalism. Not like Talking Heads minimalism, but the kind of minimalism where you remove every extraneous detail. Like what Mies van der Rohe did for architecture, where just the proportions—the minimalism—is the beauty. Or like what Hemingway did with writing—scratch out everything but the nouns and verbs. Kill your darlings.

Art direction. When you have a fancy story, give it some fancy art direction. Pro Publica understand that people won’t get confused about what site they’re on—they’ll understand that this particular story is special.

Whitespace. Mark Boulton wrote an article about whitespace in A List Apart. He talked about two kinds of whitespace: macro and micro. Macro is what we usually think about when we talk about whitespace. Whitespace conveys feelings of extreme luxury, and luxury brands know this. Whitespace makes us feel special. Macro whitespace can be snotty. But there’s also micro whitespace. That’s the space between lines of type, and the space inside letterforms. There’s more openness and air, even if the macro whitespace hasn’t changed.

Jeffrey has put a bunch of these things together into an example.

To recap, there are five points:

  1. Big type
  2. Hierarchy
  3. Minimalism
  4. Art direction
  5. Whitespace
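This isn’t Jeffrey’s actual stylesheet, just a rough sketch of how those five principles might translate into CSS for a reader layout. The selector and the specific measurements are illustrative assumptions on my part, not anything shown in the talk:

  /* A hypothetical reader layout applying the five principles above.
     Selector and values are illustrative, not taken from the talk. */
  .article {
    max-width: 40em;        /* one central column: macro whitespace */
    margin: 0 auto;
    padding: 4rem 1rem;
    font-size: 1.375rem;    /* big type encourages the reader to sit back */
    line-height: 1.6;       /* micro whitespace between lines of text */
  }

  .article h1 {
    font-size: 2.5em;       /* clear hierarchy: the heading dominates */
    line-height: 1.1;
  }

  .article p + p {
    margin-top: 1.5em;      /* breathing room between paragraphs */
  }

  /* Minimalism: no sidebars or widgets, just the text. Art direction
     gets layered on per story, where a piece warrants it. */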

There are two more things that Jeffrey wants to mention before he’s done. If you want people to pay attention to your design, it must be branded and it must be authoritative.

Branded. When all sites look the same, all content appears equal. Jeffrey calls this the Facebook effect. Whether it’s a Nobel-prize-winning author or your uncle ranting, everything gets the same treatment on Facebook. If you’re taking the time to post content to the web, take the time to let people know who’s talking.

Authoritative. When something looks authoritative, it cues the reader to your authenticity and integrity. Notice how every Oscar-worthy movie uses Trajan on its poster. That’s a typeface based on a Roman column. Strong, indelible letterforms carved in stone. We have absorbed those letterforms into our collective unconscious. Hollywood taps into this by using Trajan for movie titles.

Jeffrey wrote an article called To Save Real News about some of these ideas.

And with that, Jeffrey thanks us and finishes up.

Patterns Day 2: June 28th, 2019

Surprise! Patterns Day is back!

The first Patterns Day was in the summer of 2017, and it was glorious—a single day devoted to all things design system-y: pattern libraries, style guides, maintainability, reusability. It was a lot of fun, so let’s do it again!

Patterns Day 2 will take place on Friday, June 28th, in the beautiful Duke of York’s cinema in Brighton. If you went to the first Patterns Day, then you’ll know how luxuriously comfy it is in there.

Tickets are £175+VAT. The format will likely be the same as before: an action-packed day of eight talks, each 30 minutes long.

I’ve got an amazing line-up of speakers, but instead of telling you the whole line-up straightaway, I’m going to tease a little bit, and announce more speakers over the next few weeks and months. For now, here are the first three speakers, to give you an idea of the quality you can expect:

  • All the way from the US of A, it’s Una Kravets, who needs no introduction.
  • From the Government Digital Service, we’ve got Amy Hupe—she’ll have plenty to share about the GOV.UK design system.
  • And we’ve got Yaili, now a senior designer at Microsoft, where she works on the Azure DevOps design system.

Patterns Day will have something for everyone. We’ll be covering design, development, content strategy, product management, and accessibility. So you might want to make this a one-day outing for your whole team.

If you want to get a feel for what the day will be like, you can watch the videos of last year’s talks.

Tickets for last year’s Patterns Day went fairly fast—the Duke of York’s doesn’t have a huge capacity—so don’t dilly-dally too long before grabbing your ticket!

Marty’s mashup

While the Interaction 19 event was a bit of a mixed bag overall, there were some standout speakers.

Marty Neumeier was unsurprisingly excellent. I’d seen him speak before, at UX London a few years back, so I knew he’d be good. He has a very reassuring, avuncular manner when he’s speaking. You know the way that there are some people you could just listen to all day? He’s one of those.

Marty’s talk at Interaction 19 was particularly interesting because it was about his new book. Now, why would that be of particular interest? Well, this new book—Scramble—is a business book, but it’s written in the style of a thriller. He wanted it to be like one of those airport books that people read as a guilty pleasure.

One rainy night in December, young CEO David Stone is inexplicably called back to the office. The company’s chairman tells him that the board members have reached the end of their patience. If David can’t produce a viable turnaround plan in five weeks, he’s out of a job. His only hope is to try something new. But what?

I love this idea!

I’ve talked before about borrowing narrative structures from literature and film and applying them to blog posts and conference talks—techniques like flashback, in medias res, etc.—so I really like the idea of taking an entire genre and applying it to a technical topic.

The closest I’ve seen is the comic that Scott McCloud wrote for the release of Google Chrome back in 2008. But how about a romantic comedy about service workers? Or a detective novel about CSS grid?

I have a feeling I’ll be thinking about Marty Neumeier’s book next time I’m struggling to put a conference talk together.

In the meantime, if you want to learn from the master storyteller himself, Clearleft are running a two-day Brand Master Workshop with Marty on March 14th and 15th at The Barbican in London. Early bird tickets are on sale until this Thursday, so don’t dilly-dally if you were thinking about nabbing your spot.

Mirrorworld

Over on the Failed Architecture site, there’s a piece about Kevin Lynch’s 1960 book The Image Of The City. It’s kind of fun to look back at a work like that, from today’s vantage point of ubiquitous GPS and smartphones with maps that bestow God-like wayfinding. How much did Lynch—or any other futurist from the past—get right about our present?

Quite a bit, as it turns out.

Lynch invented the term ‘imageability’ to describe the degree to which the urban environment can be perceived as a clear and coherent mental image. Reshaping the city is one way to increase imageability. But what if the cognitive map were complemented by some external device? Lynch proposed that this too could strengthen the mental image and effectively support navigation.

Past visions of the future can be a lot of fun. Matt Novak’s Paleofuture blog is testament to that. Present visions of the future are rarely as enjoyable. But every so often, one comes along…

Kevin Kelly has a new piece in Wired magazine about Augmented Reality. He suggests we don’t call it AR. Sounds good to me. Instead, he proposes we use David Gelernter’s term “the mirrorworld”.

I like it! I feel like the term won’t age well, but that’s not the point. The term “cyberspace” hasn’t aged well either—it sounds positively retro now—but Gibson’s term served its purpose in prompting discussion and spurring excitement. I feel like Kelly’s “mirrorworld” could do the same.

Incidentally, the mirrorworld has already made an appearance in the William Gibson book Spook Country in the form of locative art:

Locative art, a melding of global positioning technology to virtual reality, is the new wrinkle in Gibson’s matrix. One locative artist, for example, plants a virtual image of F. Scott Fitzgerald dying at the very spot where, in fact, he had his Hollywood heart attack, and does the same for River Phoenix and his fatal overdose.

Yup, that sounds like the mirrorworld:

Time is a dimension in the mirrorworld that can be adjusted. Unlike the real world, but very much like the world of software apps, you will be able to scroll back.

Now look, normally I’m wary to the point of cynicism when it comes to breathless evocations of fantastical futures extrapolated from a barely functioning technology of today, but damn, if Kevin Kelly’s enthusiasm isn’t infectious! He invokes Borges. He acknowledges the challenges. But mostly he pumps up the excitement by baldly stating possible outcomes as though they are inevitabilities:

We will hyperlink objects into a network of the physical, just as the web hyperlinked words, producing marvelous benefits and new products.

When he really gets going, we enter into some next-level science-fictional domains:

The mirrorworld will be a world governed by light rays zipping around, coming into cameras, leaving displays, entering eyes, a never-ending stream of photons painting forms that we walk through and visible ghosts that we touch. The laws of light will govern what is possible.

And then we get sentences like this:

History will be a verb.

I kind of love it. I mean, I’m sure we’ll look back on it one day and laugh, shaking our heads at its naivety, but for right now, it’s kind of refreshing to read something so unabashedly hopeful and so wildly optimistic.

2018 in numbers

I posted to adactio.com 1,387 times in 2018.

In amongst those notes were:

In my blog posts, the top tags were:

  1. frontend and development (42 posts),
  2. serviceworkers (27 posts),
  3. design (20 posts),
  4. writing and publishing (19 posts),
  5. javascript (18 posts).

In my links, the top tags were:

  1. development (305 links),
  2. frontend (289 links),
  3. design (178 links),
  4. css (110 links),
  5. javascript (106 links).

When I wasn’t updating this site:

But these are just numbers. To get some real end-of-year thoughts, read posts by Remy, Andy, Ana, or Bill Gates.

Words I wrote in 2018

I wrote just shy of a hundred blog posts in 2018. That’s an increase from 2017. I’m happy about that.

Here are some posts that turned out okay…

A lot of my writing in 2018 was on technical topics—front-end development, service workers, and so on—but I should really make more of an effort to write about a wider range of topics. I always like when Zeldman writes about his glamorous life. Maybe in 2019 I’ll spend more time letting you know what I had for lunch.

I really enjoy writing words on this website. If I go too long between blog posts, I start to feel antsy. The only relief is to move my fingers up and down on the keyboard and publish something. Sounds like a bit of an addiction, doesn’t it? Well, as habits go, this is probably one of my healthier ones.

Thanks for reading my words in 2018. I didn’t write them for you—I wrote them for me—but it’s always nice when they resonate with others. I’ll keep on writing my brains out in 2019.

Books I read in 2018

I read twenty books in 2018, which is exactly the same number as I read in 2017. Reflecting on that last year, I said “It’s not as many as I hoped.” It does seem like a meagre amount, but in my defence, some of the books I read this year were fairly hefty tomes.

I decided to continue my experiment from last year of alternating fiction and non-fiction books. That didn’t quite work out, but it makes for a good guiding principle.

In ascending reading order, these are the books I read in 2018:

A Fire Upon The Deep by Vernor Vinge

★★★☆☆

I started this towards the end of 2017 and finished it at the start of 2018. A good sci-fi romp, but stretched out a little bit long.

Time Travel: A History by James Gleick

★★★★☆

I really enjoyed this, but then, that’s hardly a surprise. The subject matter is tailor made for me. I don’t think this quite matches the brilliance of Gleick’s The Information, but I got a real kick out of it. A book dedicated to unearthing the archaeology of a science-fiction concept is a truly fascinating idea. And it’s not just about time travel, per se—this is a meditation on the nature of time itself.

Traction by Gino Wickman

Andy was quite taken with this management book and purchased multiple copies for the Clearleft leadership team. I’ll refrain from rating it because it was more like a homework assignment than a book I would choose to read. It crystallises some good organisational advice into practical steps, but it probably could’ve been quite a bit shorter.

Provenance by Ann Leckie

★★★☆☆

It feels very unfair but inevitable to compare this to Ann Leckie’s amazing debut Imperial Radch series. It’s not in quite the same league, but it’s also not trying to be. This standalone book has a lighter tone. It’s a rollicking good sci-fi procedural. It may not be as mind-blowingly inventive as Ancillary Justice, but it’s still a thoroughly enjoyable read.

Visions, Ventures, Escape Velocities: A Collection of Space Futures edited by Ed Finn and Joey Eschrich, with guest editor Juliet Ulman

★★★☆☆

This book is free to download so it’s rather excellent value for money. It alternates sci-fi short stories with essays. Personally, I would skip the essays—they’re all a bit too academic for my taste. But some of these stories are truly excellent. There’s a really nice flow to the collection: it begins in low Earth orbit, then expands out to Mars, the asteroid belt, and beyond. Death on Mars by Madeline Ashby was a real standout for me.

The Best of Richard Matheson by Richard Matheson, edited by Victor LaValle

★★★★☆

For some reason, I was sent a copy of this book by an editor at Penguin Classics. I have no idea why, but thank you, Sam! This turned out to be a lot of fun. I had forgotten just how many classics of horror and sci-fi are the work of Richard Matheson. He probably wrote your favourite Twilight Zone episode. There’s a real schlocky enjoyment to be had from snacking on these short stories, occasionally interspersed with genuinely disturbing moments and glimpses of beauty.

Close To The Machine: Technophilia And Its Discontents by Ellen Ullman

★★★☆☆

Lots of ’90s feels in this memoir. A lot of this still resonates today. It’s kind of fascinating to read it now with the knowledge of how this whole internet thing would end up going.

Gnomon by Nick Harkaway

★★★★☆

This gripped me from the start, and despite its many twisty strands, it managed to keep me with it all the way through. Maybe it’s a bit longer than it needs to be, and maybe some of the diversions don’t entirely work, but it makes up for that with its audaciousness. I still prefer The Gone-Away World, but any Nick Harkaway book is a must-read.

Hidden Figures by Margot Lee Shetterly

★★★★☆

Terrific stuff. If you’ve seen the movie, you’ve got about one tenth of the story. The book charts a longer arc and provides much deeper social and political context.

Dawn by Octavia Butler

★★★☆☆

This is filled with interesting ideas, but the story never quite gelled for me. I’m not sure if I should continue with the rest of the Lilith’s Brood series. But there’s something compelling and unsettling in here.

Sapiens: A Brief History Of Humankind by Yuval Noah Harari

★★☆☆☆

Frustratingly inconsistent. Here’s my full review.

The Fifth Season by N.K. Jemisin

★★★★☆

The Obelisk Gate by N.K. Jemisin

★★★☆☆

The Stone Sky by N.K. Jemisin

★★★☆☆

I devoured these books back-to-back. The Fifth Season was terrific—packed to the brim with inventiveness. But neither The Obelisk Gate nor The Stone Sky quite did it for me. Maybe my expectations were set too high by that first installment. But The Broken Earth is still a fascinating and enjoyable series.

Programmed Inequality by Marie Hicks

I was really looking forward to this one, but I found its stiff academic style hard to get through. I still haven’t finished it. But I figure if I could read Sapiens through to the end, I can certainly manage this. The subject matter is certainly fascinating, and the research is really thorough, but I’m afraid the book is showing its thesis roots.

The Power by Naomi Alderman

★★★☆☆

This plays out its conceit well, and it’s a fun read, but it’s not quite a classic. It feels more like a Neil Gaiman or Lauren Beukes page-turner than, say, a Margaret Atwood exploration. Definitely worth a read, though.

New York 2140 by Kim Stanley Robinson

★★★★☆

The world-building (or maybe it’s world rebuilding) is terrific. But once again, as is often the case with Kim Stanley Robinson, I find the plot to be lacking. This is not in the same league as Aurora. It’s more like 2312-on-sea. It’s frustrating. I’m torn between giving it three stars or four. I’m going to be generous because even though it’s not the best Kim Stanley Robinson book, it contains some of his best writing. There are passages that are breathtakingly good.

A Thread Across The Ocean by John Steele Gordon

★★★★☆

After (temporarily) losing my library copy of New York 2140, I picked this up in a bookstore in Charlottesville so I’d have something to read during my stay there. I was very glad I did. I really, really enjoyed this. It’s all about the transatlantic telegraphic cable, so if that’s your thing—as it is mine—you’re going to enjoy this. It makes a great companion piece to Tom Standage’s The Victorian Internet. Come for the engineering, stay for the nautical tales of derring-do.

Borne by Jeff VanderMeer

★★★★☆

Not as disturbing as the Southern Reach Trilogy, but equally unsettling in its own way. Shades of Oryx and Crake, but in a more fantastically surreal setting.

The Airs Of Earth by Brian Aldiss

★★★☆☆

A good collection of short stories from the master of sci-fi. I’ve got a backlog of old pulpy paperback Aldiss collections like this that make for good snackfood for the mind.

Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths

A Christmas present from my brother-in-law. I just cracked this open, so you’ll have to come back next year to find out how it fared.

Alright. Now it’s time to pick the winners.

I think the best fiction book I read this year was Nick Harkaway’s Gnomon.

For non-fiction, it’s a tough call. I really enjoyed Hidden Figures and A Thread Across The Ocean, but I think I’m going to have to give the top spot to James Gleick’s Time Travel: A History.

But there were no five star books this year. Maybe that will change in 2019. And maybe I’ll read more books next year, too. We’ll see.

In 2017, seven of the twenty books I read were by women. In 2018, it was nine out of twenty (not counting anthologies). That’s better, but I want to keep that trajectory going in 2019.

Vienna

Back in December 1997, when Jessica and I were living in Freiburg, Dan came to visit. Together, we boarded a train east to Vienna. There we would ring in the new year to the sounds of the Salonorchester Alhambra, the band that Dan’s brother Andrew was playing in (and the band that would later be my first paying client when I made their website—I’ve still got the files lying around somewhere).

That was a fun New Year’s ball …although I remember my mortification when we went for goulash beforehand and I got a drop on the pristine tux that I had borrowed from Andrew.

My other memory of that trip was going to the Kunsthistorisches Museum to see the amazing Bruegel collection. It’s hard to imagine that ever being topped, but then this year, they put together a “once in a lifetime” collection, gathering even more Bruegel masterpieces together in Vienna.

Jessica got the crazy idea in her head that we could go there. In a day.

Looking at the flights, it turned out to be not such a crazy idea after all. Sure, it meant an early start, but it was doable. We booked our museum tickets, and then we booked plane tickets.

That’s how we ended up going to Vienna for the day this past Monday. It was maybe more time than I’d normally like to spend in airports in a 24-hour period, but it was fun. We landed, went into town for a wiener schnitzel, and then it was off to the museum for an afternoon of medieval masterpieces. Hunters in the Snow, the Tower of Babel, and a newly restored Triumph of Death sent from the Prado were just some of the highlights.

There’s a website to accompany the exhibition called Inside Bruegel. You can zoom in on each painting to see the incredible detail. You can even compare the infrared and x-ray views. Dive in and explore the world of Pieter Bruegel the Elder.

The Battle between Carnival and Lent