
Patterns Day video and audio

If you missed out on Patterns Day this year, you can still get a pale imitation of the experience of being there by watching videos of the talks.

Here are the videos, and if you’re not that into visuals, here’s a podcast of the talks (you can subscribe to this RSS feed in your podcasting app of choice).

On Twitter, Chris mentioned that “It would be nice if the talks had their topic listed,” which is a fair point. So here goes:

It’s fascinating to see emergent themes (other than, y’know, the obvious theme of design systems) in different talks. In comparison to the first Patterns Day, it felt like there was a healthy degree of questioning and scepticism—there were plenty of reminders that design systems aren’t a silver bullet. And I very much appreciated Yaili’s point that when you see beautifully polished design systems that have been made public, it’s like seeing the edited Instagram version of someone’s life. That reminded me of Responsive Day Out when Sarah Parmenter, the first speaker at the very first event, opened everything by saying “most of us are winging it.”

I can see the value in coming to a conference to hear stories from people who solved hard problems, but I think there’s equal value in coming to a conference to hear stories from people who are still grappling with hard problems. It’s reassuring. I definitely got the vibe from people at Patterns Day that it was a real relief to hear that nobody’s got this figured out.

There was also a great appreciation for the “big picture” perspective on offer at Patterns Day. For myself, I know that I’ll be cogitating upon Danielle’s talk and Emil’s talk for some time to come—both are packed full of interesting ideas.

Good thing we’ve got the videos and the podcast to revisit whenever we want.

And if you’re itching for another event dedicated to design systems, I highly recommend snagging a ticket for the Clarity conference in San Francisco next month.

Trad time

Fifteen years ago, I went to the Willie Clancy Summer School in Miltown Malbay:

I’m back from the west of Ireland. I was sorry to leave. I had a wonderful, music-filled time.

I’m not sure why it took me a decade and a half to go back, but that’s what I did last week. Myself and Jessica once again immersed ourselves in Irish traditional music. I’ve written up a trip report over on The Session.

On the face of it, fifteen years is a long time. Last time I made the trip to county Clare, I was taking pictures on a point-and-shoot camera. I had a phone with me, but it had a T9 keyboard that I could use for texting and not much else. Also, my hair wasn’t grey.

But in some ways, fifteen years feels like the blink of an eye.

I spent my mornings at the Willie Clancy Summer School immersed in the history of Irish traditional music, with Paddy Glackin as a guide. We were discussing tradition and change in generational timescales. There was plenty of talk about technology, but we were as likely to discuss the influence of the phonograph as the influence of the internet.

Outside of the classes, there was a real feeling of lengthy timescales too. On any given day, I would find myself listening to pre-teen musicians at one point, and septuagenarian masters at another.

Now that I’m back in the Clearleft studio, I’m finding it weird to adjust back into the shorter timescales of working on the web. Progress is measured in weeks and months. Technologies are deemed outdated after just a year or two.

The one bridging point I have between these two worlds is The Session. It’s been going in one form or another for over twenty years. And while it’s very much on and of the web, it also taps into a longer tradition. Over time it has become an enormous repository of tunes, for which I feel a great sense of responsibility …but in a good way. It’s not something I take lightly. It’s also something that gives me great satisfaction, in a way that’s hard to achieve in the rapidly moving world of the web. It’s somewhat comparable to the feelings I have for my own website, where I’ve been writing for eighteen years. But whereas adactio.com is very much focused on me, thesession.org is much more of a community endeavour.

I question sometimes whether The Session is helping or hindering the Irish music tradition. “It all helps”, Paddy Glackin told me. And I have to admit, it was very gratifying to meet other musicians during Willie Clancy week who told me how much the site benefits them.

I think I benefit from The Session more than anyone though. It keeps me grounded. It gives me a perspective that I don’t think I’d otherwise get. And in a time when it feels entirely right to question whether the internet is even providing a net gain to our world, I take comfort in being part of a project that I think uses the very best attributes of the World Wide Web.

Movie Knight

I mentioned how much I enjoyed Mike Hill’s talk at Beyond Tellerrand in Düsseldorf:

Mike gave a talk called The Power of Metaphor and it’s absolutely brilliant. It covers the monomyth (the hero’s journey) and Jungian archetypes, illustrated with examples from Star Wars, The Dark Knight, and Jurassic Park.

At Clearleft, I’m planning to reprise the workshop I did a few years ago about narrative structure—very handy for anyone preparing a conference talk, blog post, case study, or anything really:

Ellen and I have been enjoying some great philosophical discussions about exactly what a story is, and how it differs from a narrative structure, or a plot. I really love Ellen’s working definition: Narrative. In Space. Over Time.

This led me to think that there’s a lot that we can borrow from the world of storytelling—films, novels, fairy tales—not necessarily about the stories themselves, but the kind of narrative structures we could use to tell those stories. After all, the story itself is often the same one that’s been told time and time again—The Hero’s Journey, or some variation thereof.

I realised that Mike’s monomyth talk aligns nicely with my workshop. So I decided to prep my fellow Clearlefties for the workshop with a movie night.

Popcorn was popped, pizza was ordered, and comfy chairs were suitably arranged. Then we watched Mike’s talk. Everyone loved it. Then it was decision time. Which of three films covered in the talk would we watch? We put it to a vote.

It came out as an equal tie between Jurassic Park and The Dark Knight. How would we resolve this? A coin toss!

The toss went to The Dark Knight. In retrospect, a coin toss was a supremely fitting way to decide to watch that film.

It was fun to watch it again, particularly through the lens of Mike’s analysis of its Jungian archetypes.

But I still think the film is about game theory.

Patterns Day Two

Who says the sequels can’t be even better than the original? The second Patterns Day was The Empire Strikes Back, The Godfather Part II, and The Wrath of Khan all rolled into one …but, y’know, with design systems.

If you were there, then you know how good it was. If you weren’t, sorry. Audio of the talks should be available soon though, with video following on.

The talks were superb! I know I’m biased because I put the line-up together, but even so, I was blown away by the quality of the talks. There were some big-picture questioning talks, a sequence of nitty-gritty code talks in the middle, and galaxy-brain philosophical thoughts at the end. A perfect mix, in my opinion.

Words cannot express how grateful I am to Alla, Yaili, Amy, Danielle, Heydon, Varya, Una, and Emil. They really gave it their all! Some of them are seasoned speakers, and some of them are new to speaking on stage, but all of them delivered the goods above and beyond what I expected.

Big thanks to my Clearleft compadres for making everything run smoothly: Jason, Amy, Cassie, Chris, Trys, Hana, and especially Sophia for doing all the hard work behind the scenes. Trys took some remarkable photos too. He posted some on Twitter, and some on his site, but there are more to come.

Me on stage. Inside the Duke of York's for Patterns Day 2

And if you came to Patterns Day 2, thank you very, very much. I really appreciate you being there. I hope you enjoyed it even half as much as I did, because I had a ball!

Once again, thanks to buildit @ wipro digital for sponsoring the pastries and coffee, as well as running a fun giveaway on the day. Many thanks to Bulb for sponsoring the forthcoming videos. Thanks again to Drew for recording the audio. And big thanks to Brighton’s own Holler Brewery for very kindly offering every attendee a free drink—the weather (and the beer) was perfect for post-conference discussion!

It was incredibly heartwarming to hear how much people enjoyed the event. I was especially pleased that people were enjoying one another’s company as much as the conference itself. I knew that quite a few people were coming in groups from work, while other people were coming by themselves. I hoped there’d be lots of interaction between attendees, and I’m so, so glad there was!

You’ve all made me very happy.

Am I cached or not?

When I was writing about the lie-fi strategy I’ve added to adactio.com, I finished with this thought:

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.”

Trys heard my plea, and came up with a very clever technique to alter the HTML of a page when it’s put into a cache.

It’s a function that reads the response body stream in, returning a new stream. Whilst reading the stream, it searches for the character codes that make up: <html. If it finds them, it tacks on a data-cached attribute.

Nice!
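To give a flavour of how that could be wired up, here’s a rough sketch of the idea (this is my own illustration, not Trys’s actual code, and it naively assumes the opening html tag doesn’t straddle a chunk boundary):

function markAsCached(responseFromFetch) {
    const decoder = new TextDecoder();
    const encoder = new TextEncoder();
    let marked = false;
    const marker = new TransformStream({
        transform(chunk, controller) {
            if (!marked) {
                const text = decoder.decode(chunk, {stream: true});
                if (text.includes('<html')) {
                    marked = true;
                    // Tack a data-cached attribute onto the opening tag.
                    controller.enqueue(encoder.encode(text.replace('<html', '<html data-cached')));
                    return;
                }
            }
            controller.enqueue(chunk);
        }
    });
    return new Response(responseFromFetch.body.pipeThrough(marker), {
        status: responseFromFetch.status,
        statusText: responseFromFetch.statusText,
        headers: responseFromFetch.headers
    });
}

The copy that goes into the cache would then be created with something like pagesCache.put(request, markAsCached(copy)), so the marked-up version is only ever the one that comes back out of the cache.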

But then I was discussing this issue with Tantek and Aaron late one night after Indie Web Camp Düsseldorf. I realised that I might have another potential solution that doesn’t involve the service worker at all.

Caveat: this will only work for pages that have some kind of server-side generation. This won’t work for static sites.

In my case, pages are generated by PHP. I’m not doing a database lookup every time you request a page—I’ve got a server-side cache of posts, for example—but there is a little bit of assembly done for every request: get the header from here; get the main content from over there; get the footer; put them all together into a single page and serve that up.

This means I can add a timestamp to the page (using PHP). I can mark the moment that it was served up. Then I can use JavaScript on the client side to compare that timestamp to the current time.

I’ve published the code as a gist.

In a script element on each page, I have this bit of coducken:

var serverTimestamp = <?php echo time(); ?>;

Now the JavaScript variable serverTimestamp holds the timestamp that the page was generated. When the page is put in the cache, this won’t change. This number should be the number of seconds since January 1st, 1970 in the UTC timezone (that’s what my server’s timezone is set to).

Starting with JavaScript’s Date object, I use a caravan of methods like toUTCString() and getTime() to end up with a variable called clientTimestamp. This will give the current number of seconds since January 1st, 1970, regardless of whether the page is coming from the server or from the cache.

var localDate = new Date();
var localUTCString = localDate.toUTCString();
var UTCDate = new Date(localUTCString);
var clientTimestamp = UTCDate.getTime() / 1000;

Then I compare the two and see if there’s a discrepancy greater than five minutes:

if (clientTimestamp - serverTimestamp > (60 * 5))

If there is, then I inject some markup into the page, telling the reader that this page might be stale:

document.querySelector('main').insertAdjacentHTML('afterbegin',`
  <p class="feedback">
    <button onclick="this.parentNode.remove()">dismiss</button>
    This page might be out of date. You can try <a href="javascript:window.location=window.location.href">refreshing</a>.
  </p>
`);

The reader has the option to refresh the page or dismiss the message.

This page might be out of date. You can try refreshing.

It’s not foolproof by any means. If the visitor’s computer has their clock set weirdly, then the comparison might return a false positive every time. Still, I thought that using UTC might be a safer bet.

All in all, I think this is a pretty good method for detecting if a page is being served from a cache. Remember, the goal here is not to determine if the user is offline—for that, there’s navigator.onLine.
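For completeness, that online/offline signal is a standard browser API, together with events that fire when connectivity changes:

// navigator.onLine answers the separate question of connectivity.
if (!navigator.onLine) {
    console.log('The browser reports that it is offline.');
}
window.addEventListener('offline', () => {
    console.log('Connection lost.');
});
window.addEventListener('online', () => {
    console.log('Connection restored.');
});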

The upshot is this: if you visit my site with a crappy internet connection (lie-fi), then after three seconds you may be served with a cached version of the page you’re requesting (if you visited that page previously). If that happens, you’ll now also be presented with a little message telling you that the page isn’t fresh. Then it’s up to you whether you want to have another go.

I like the way that this puts control back into the hands of the user.

The schedule for Patterns Day

Patterns Day is less than three weeks away—exciting!

We’re going to start the day at a nice civilised time. Registration is from 9am. There will be tea, coffee, and pastries, so get there in plenty of time to register and have a nice chat with your fellow attendees. There’ll be breaks throughout the day too.

Those yummy pastries and hot drinks are supplied courtesy of our sponsors Buildit @ Wipro Digital—many thanks to them!

Each talk will be 30 minutes long. There’ll be two talks back-to-back and then a break. That gives you plenty of breathing space to absorb all those knowledge bombs that the speakers will be dropping.

Lunch will be a good hour and a half. Lunch isn’t provided so you can explore the neighbourhood where there are plenty of treats on offer. And your Patterns Day badge will even get you some discounts…

The lovely Café Rust is offering these deals to attendees:

  • Cake and coffee for £5
  • Cake and cup of tea for £4
  • Sandwich and a drink for £7

The Joker (right across the street from the conference venue) is offering a 10% discount on food and drinks (but not cocktails) to Patterns Day attendees. I highly recommend their hot wings. Try the Rufio sauce—it’s awesome! Do not try the Shadow—it will kill you.

Here’s how the day is looking:

Registration
Opening remarks
Alla
Yaili
Break
Amy
Danielle
Lunch
Heydon
Varya
Break
Una
Emil
Closing remarks

We should be out of the Duke of York’s by 4:45pm after a fantastic day of talks. At that point, we can head around the corner (literally) to Holler Brewery. They are very kindly offering each attendee a free drink! Over to them:

Holler is a community based brewery, always at the centre of the local community. Here to make great beer, but also to help support community run pubs, carnival societies, mental health charities, children’s amateur dramatic groups, local arts groups and loads more, because these are what keep our communities healthy and together… the people in them!

Holler loves great beer and its way of bringing people together. They are excited to be welcoming the Patterns Day attendees and the design community to the taproom.

Terms and conditions:

  • One token entitles you to one Holler beer or one soft drink
  • Redeemable only on Friday 28th June 2019 between 4:45 and 20:00
  • You must hand your token over to the bar team

You’ll get your token when you register in the morning, along with your sticker. That’s right; sticker. Every expense has been spared so you won’t even have a name badge on a lanyard, just a nice discreet but recognisable sticker for the event.

I am so, so excited for Patterns Day! See you at the Duke of York’s on June 28th!

Indie web events in Brighton

Homebrew Website Club is a regular gathering of people getting together to tinker on their own websites. It’s a play on the original Homebrew Computer Club from the ’70s. It shares a similar spirit of sharing and collaboration.

Homebrew Website Clubs happen at various locations: London, San Francisco, Portland, Nuremberg, and more. They usually happen every second Wednesday.

I started running Homebrew Website Club Brighton a while back. I tried the “every second Wednesday” thing, but it was tricky to make that work. People found it hard to keep track of which Wednesdays were Homebrew days and which weren’t. And if you missed one, then it would potentially be weeks between attending.

So I’ve made it a weekly gathering. On Thursdays. That’s mostly because Thursdays work for me: that’s one of the evenings when Jessica has her ballet class, so it’s the perfect time for me to spend a while in the company of fellow website owners.

If you’re in Brighton and you have your own website (or you want to have your own website), you should come along. It’s every Thursday from 6pm to 7:30pm ‘round at the Clearleft studio on 68 Middle Street. Add it to your calendar.

There might be a Thursday when I’m not around, but it’s highly likely that Homebrew Website Club Brighton will happen anyway because either Trys, Benjamin or Cassie will be here.

(I’m at Homebrew Website Club Brighton right now, writing this. Remy is here too, working on some very cool webmention stuff.)

There’s something else you should add to your calendar. We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

It’s been a while since we’ve had an Indie Web Camp in Brighton. You can catch up on the Brighton Indie Web Camps we had in 2014, 2015, and 2016. Since then I’ve been to Indie Web Camps in Berlin, Nuremberg, and Düsseldorf, but it’s going to be really nice to bring it back home.

Photos from previous events: Indie Web Camp UK attendees; Indie Web Camp Brighton group photo; Indie Web Camp Brighton 2016.

The event will be free to attend, but I’ll set up an official ticket page on Ti.to to keep track of who’s coming. I’ll let you know when that’s up and ready. In the meantime, you can register your interest in attending on the 2019 Indie Webcamp Brighton page on the Indie Web wiki.

Sponsor Patterns Day

Patterns Day 2 is sold out! Yay!

I didn’t even get the chance to announce the full line-up before all the tickets were sold. That was meant to be my marketing strategy, see? I’d announce some more speakers every few weeks, and that would encourage more people to buy tickets. Turns out that I didn’t need to do that.

But I’m still going to announce the final two speakers here because I’m so excited about them—Danielle Huntrods and Varya Stepanova!

Danielle is absolutely brilliant. I know this from personal experience because I worked alongside her at Clearleft for three years. Now she’s at Bulb and I can’t wait for everyone at Patterns Day to hear her galaxy brain thoughts on design systems.

And how could I not have Varya at Patterns Day? She lives and breathes design systems. Whether it’s coding, writing, speaking, or training, she’s got years of experience to share. Ever used BEM? Yeah, that was Varya.

Anyway, if you’ve got your ticket for Patterns Day, you’re in for a treat.

If you didn’t manage to get a ticket for Patterns Day …sorry.

But do not despair. There is still one possible way of securing an elusive Patterns Day ticket: get your company to sponsor the event.

We’ve already got one sponsor—buildit @ wipro digital—who are kindly covering the costs for teas, coffees, and pastries. Now I’m looking for another sponsor to cover the costs of making video recordings of the talks.

The cost of sponsorship is £2000. In exchange, I can’t offer you a sponsor stand or anything like that—there’s just no room at the venue. But you will earn my undying thanks, and you’ll get your logo on the website and on the screen in between talks on the day (and on the final videos).

I can also give you four tickets to Patterns Day.

This is a sponsorship strategy that I like to call “blackmail.”

If you were really hoping to bring your team to Patterns Day, but you left it too late to get your tickets, now’s your chance. Convince your company to sponsor the event (and let’s face it, £2000 is a rounding error on some company’s books). Then you and your colleagues need not live with eternal regret and FOMO.

Drop me a line. Let’s talk.

Beyond

After a fun and productive Indie Web Camp, I stuck around Düsseldorf for Beyond Tellerrand. I love this event. I’ve spoken at it quite a few times, but this year it was nice to be there as an attendee. It’s simultaneously a chance to reconnect with old friends I haven’t seen in a while, and an opportunity to meet lovely new people. There was plenty of both this year.

I think this might have been the best Beyond Tellerrand yet, and that’s saying something. It’s not just that the talks were really good—there was also a wonderful atmosphere.

Marc somehow manages to curate a line-up that’s equal parts creativity and code; design and development. It shouldn’t work, but it does. I love the fact that he had a legend of the industry like David Carson on the same stage as a first-time speaker like Dorobot …and the crowd loved ’em equally!

During the event, I found out that I had a small part to play in the creation of the line-up…

Three years ago, I linked to a video of a talk by Mike Hill:

A terrific analysis of industrial design in film and games …featuring a scene-setting opening that delineates the difference between pleasure and happiness.

It’s a talk about chairs in Jodie Foster films. Seriously. It’s fantastic!

Marc saw my link, watched the video, and decided he wanted to get Mike Hill to speak at Beyond Tellerrand. After failing to get a response by email, Marc managed to corner Mike at an event in Amsterdam and get him on this year’s line-up.

Mike gave a talk called The Power of Metaphor and it’s absolutely brilliant. It covers the monomyth (the hero’s journey) and Jungian archetypes, illustrated with examples from Star Wars, The Dark Knight, and Jurassic Park:

Under the surface of their most celebrated films lies a hidden architecture that operates on an unconscious level. This talk is designed to illuminate the techniques that great storytellers use to engage a global audience on a deep and meaningful level through psychological metaphor.

The videos from Beyond Tellerrand are already online so you can watch the talk now.

Mike’s talk was back-to-back with a talk from Carolyn Stransky called Humanising Your Documentation:

In this talk, we’ll discuss how the language we use affects our users and the first steps towards writing accessible, approachable and use case-driven documentation.

While the talk was ostensibly about documentation, I found that it was packed full of good advice for writing well in general.

I had a thought. What if you mashed up these two talks? What if you wrote documentation through the lens of the hero’s journey?

Think about it. When someone arrives at your documentation, they’ve crossed the threshold to the underworld. They are in the cave, facing a dragon. You are their guide, their mentor, their Obi-Wan Kenobi. You can help them conquer their demons and return to the familiar world, changed by their journey.

Too much?

Replies

Last week was a bit of an event whirlwind. In the space of seven days I was at Indie Web Camp, Beyond Tellerrand, and Accessibility Club in Düsseldorf, followed by a train ride to Utrecht for Frontend United. Phew!

Indie Web Camp Düsseldorf was—as always—excellent. Once again, Sipgate generously gave us the use of their lovely, lovely space for the weekend. We had one day of really thought-provoking discussions, followed by a day of heads-down hacking and making.

I decided it was time for me to finally own my replies. For a while now, I’ve been posting notes on my own site and syndicating to Twitter. But whenever I replied to someone else’s tweet, I did it from Twitter. I wanted to change that.

From a coding point of view, it wasn’t all that tricky. The real challenges were to do with the interface. I needed to add another field for the URL I’m replying to …but I didn’t want my nice and minimal posting interface to get too cluttered. I ended up putting the new form field inside a details element with a summary of “Reply to” so that the form field would be hidden by default, and toggled open by hitting that “Reply to” text:

<details>
    <summary>
        <label for="replyto">Reply to</label>
    </summary>
    <input type="url" id="replyto" name="replyto">
</details>

I sent my first test reply to a post on Aaron’s website. Aaron was sitting next to me at the time.

Once that was all working, I sent my first reply to a tweet. It was a response to a tweet from Tantek. Tantek was also sitting next to me at the time.

I spent most of the day getting that Twitter syndication to work. I had something to demo, but I foolishly decided to risk it all by attempting to create a bookmarklet so that I could post directly from a tweet page (instead of hopping back to my own site in a different tab). By cannibalising the existing bookmarklet I use for posting links, I just about managed to get it working in time for the end of day demos.
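A bookmarklet like that boils down to something along these lines (a hypothetical sketch: the URL and the replyto parameter name here are placeholders, not my actual posting interface):

// Collapsed to a single line when saved as a bookmarklet.
javascript:(function () {
    // Open the posting interface with the current page's URL pre-filled
    // as the "reply to" value.
    var postingInterface = 'https://example.com/notes/new?replyto=';
    window.open(postingInterface + encodeURIComponent(window.location.href));
})();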

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

Then again, I kind of like how wonderfully random and out-of-context they look. You can browse through all my replies so far.

I’m glad I got this set up. Now when Andy posts stuff on Twitter, I’m custodian of my responses:

@AndyBudd: Who are your current “Design Heroes”?

adactio.com: I would say Falcor from Neverending Story, the big flying dog.

Timing out

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging—and probably more common—is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try to go to the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})
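For that to work, the offline page needs to have been cached ahead of time. A minimal sketch of that install-time step could look something like this (the cache name here is just a placeholder):

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        caches.open('static')
        .then( staticCache => {
            // Pre-cache the custom offline fallback page.
            return staticCache.addAll(['/offline']);
        })
    );
});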

Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from network if the file can’t be found in any cache:

caches.match(request)
.then( responseFromCache => {
    return responseFromCache || fetch(request);
})

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            )
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    )
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    })
});

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.

So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. You can set up two parameters: a success condition and a failure condition. If the success condition is executed, then we say the promise has resolved. If the failure condition is executed, then the promise rejects.
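As a generic illustration of that pattern (nothing to do with service workers in particular), it looks something like this:

const promiseOfAnAnswer = new Promise( (resolve, reject) => {
    // Do something asynchronous...
    setTimeout( () => {
        const itWorked = true;
        if (itWorked) {
            resolve('Here is your answer'); // the promise resolves
        } else {
            reject(new Error('Something went wrong')); // the promise rejects
        }
    }, 1000);
});
promiseOfAnAnswer
.then( answer => console.log(answer) )
.catch( error => console.error(error) );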

In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    })
}, 3000);

If a request takes three seconds (3000 milliseconds), then that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, then the wait continues for the network.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    })
                }, 3000);
                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on—a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.

I think that should be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

Frameworking

There are many reasons to use a JavaScript framework like Vue, Angular, or React. Last year, Nicole asked for some of those reasons. Her question received many, many answers from people pointing out the benefits of using a framework. Interestingly, though, not a single one of those benefits was for end users.

(Mind you, if the framework is being used on the server to pre-render pages, then it’s a moot point—in that situation, it makes no difference to the end user whether you use a framework or not.)

Hidde recently tried using a client-side JavaScript framework for the first time and documented the process:

In the last few months I built my very first framework-based front-end, in Vue.js. I complemented it with a router, a store and a GraphQL library, in order to have, respectively, multiple (virtual) pages, globally shared data and a smart way to load new data in my templates.

It’s a very even-handed write-up. I highly recommend reading it. He describes the pros and cons of using a framework and using vanilla JavaScript:

I am glad I tried a framework and found its features were extremely helpful in creating a consistent interface for my users. My hope is though, that I won’t forget about vanilla. It’s perfectly valid to build a website with no or few dependencies.

Speaking of vanilla JavaScript… the blogging machine that is Chris Ferdinandi also wrote a comparison post recently, asking Why do people choose frameworks over vanilla JS? Again, it’s very even-handed and well worth a read. He readily concedes that if you’re working at scale, a framework is almost certainly a good idea:

If you’re building a large scale application (literally Facebook, Twitter, QuickBooks scale), the performance wins of a framework make the overhead worth it.

Alas, I’ve seen many, many framework-driven sites that are most definitely not operating at that scale. Trys speaks the honest truth here:

We kid ourselves into thinking we’re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain’t much more to it than that.

Just the other day, I saw a new site launch that was mostly a marketing site—the home page weighed over five megabytes, two megabytes of which were taken up with JavaScript, and the whole thing required JavaScript to render text to the screen (I’m not going to link to it because I don’t want to engage in any kind of public shaming and finger-wagging).

I worry that all the perfectly valid (developer experience) reasons for using a framework are outweighing the more important (user experience) reasons for avoiding shipping your dependencies to end users. Like Alex says:

If your conception of “DX” doesn’t include it, or isn’t subservient to the user experience, rethink.

And yes, I am going to take this opportunity to link once again to Alex’s article The “Developer Experience” Bait-and-Switch. Please read it if you haven’t already. Please re-read it if you have.

Anyway, my main reason for writing this is to point you to thoughtful posts like Hidde’s and Chris’s. I think it’s great to see people thoughtfully weighing up the pros and cons of choosing any particular technology—I’m a bit obsessed with the topic of evaluating technology.

If you’re weighing up the pros and cons of using, say, a particular JavaScript library or framework, that’s wonderful. My worry is that there are people working in front-end development who aren’t putting that level of thought into their technology choices, but are instead using a particular framework because it’s what they’re used to.

To quote Grace Hopper:

The most dangerous phrase in the language is, ‘We’ve always done it this way.’

Inlining SVG background images in CSS with custom properties

Here’s a tiny lesson that I picked up from Trys that I’d like to share with you…

I was working on some upcoming changes to the Clearleft site recently. One particular component needed some SVG background images. I decided I’d inline the SVGs in the CSS to avoid extra network requests. It’s pretty straightforward:

.myComponent {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

You can basically paste your SVG in there, although you need to do a little bit of URL encoding: I found that converting # to %23 was enough for my needs.
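If you don’t fancy doing that search-and-replace by hand, a quick one-liner covers it (encodeURIComponent would do a fuller job, but it also escapes more than you need here):

// Escape the # characters in an SVG string so it can live inside a data: URL.
const svgSource = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 10 10"><circle cx="5" cy="5" r="5" fill="#663399"/></svg>';
const cssReadySVG = svgSource.replace(/#/g, '%23');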

But here’s the thing. My component had some variations. One of the variations had multiple background images. There was a second background image in addition to the first. There’s no way in CSS to add an additional background image without writing a whole background-image declaration:

.myComponent--variant {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>'), url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

So now I’ve got the same SVG source inlined in two places. That negates any performance benefits I was getting from inlining in the first place.

That’s where Trys comes in. He shared a nifty technique he uses in this exact situation: put the SVG source into a custom property!

:root {
    --firstSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
    --secondSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

Then you can reference those in your background-image declarations:

.myComponent {
    background-image: var(--firstSVG);
}
.myComponent--variant {
    background-image: var(--firstSVG), var(--secondSVG);
}

Brilliant! Not only does this remove any duplication of the SVG source, it also makes your CSS nice and readable: no more big blobs of SVG source code in the middle of your style sheet.

You might be wondering what will happen in older browsers that don’t support CSS custom properties (that would be Internet Explorer 11). Those browsers won’t get any background image. Which is fine. It’s a background image. Therefore it’s decoration. If it were an important image, it wouldn’t be in the background.

Progressive enhancement, innit?

Three more Patterns Day speakers

There are 73 days to go until Patterns Day. Do you have your ticket yet?

Perhaps you’ve been holding out for some more information on the line-up. Well, I’m more than happy to share the latest news with you—today there are three new speakers on the bill…

Emil Björklund, the technical director at the Malmö outpost of Swedish agency inUse, is a super-smart person I’ve known for many years. Last year, I saw him on stage in his home town at the Confront conference sharing some of his ideas on design systems. He blew my mind! I told him there and then that he had to come to Brighton and expand on those thoughts some more. This is going to be an unmissable big-picture talk in the style of Paul’s superb talk last year.

Speaking of superb talks from last year, Alla Kholmatova is back! Her closing talk from the first Patterns Day was so fantastic that I just had to have her come back. Oh, and since then, her brilliant book on Design Systems came out. She’s going to have a lot to share!

The one thing that I felt was missing from the first Patterns Day was a focus on inclusive design. I’m remedying that this time. Heydon Pickering, creator of the Inclusive Components website—and the accompanying book—is speaking at Patterns Day. I’m very excited about this. Given that Heydon has a habit of casually dropping knowledge bombs like the lobotomised owl selector and the flexbox holy albatross, I can’t wait to see what he unleashes on stage in Brighton on June 28th.

Emil, Alla, and Heydon

Be there or be square.

Tickets for Patterns Day are still available, but you probably don’t want to leave it ‘till the last minute to get yours. Just sayin’.

The current—still incomplete—line-up comprises:

That isn’t even the full roster of speakers, and it’s already an unmissable event!

I very much hope you’ll join me in the beautiful Duke of York’s cinema on June 28th for a great day of design system nerdery.

Design perception

Last week I wrote a post called Dev perception:

I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that any other industry would value more highly than Uber for dogs.

The sentiment I expressed resonated with a lot of people. Like, a lot of people.

I was talking specifically about web development and technology choices, but I think the broader point applies to other disciplines too.

Last month I had the great pleasure of moderating two panels on design leadership at an event in London (I love moderating panels, and I think I’m pretty darn good at it too). I noticed that the panels comprised representatives from two different kinds of companies.

There were the digital-first companies like Spotify, Deliveroo, and Bulb—companies forged in the fires of start-up culture. Then there were the older companies that had to make the move to digital (transform, if you will). I decided to get a show of hands from the audience to see which kind of company most people were from. The overwhelming majority of attendees were from more old-school companies.

Just as most of the ink spilled in the web development world goes towards the newest frameworks and toolchains, I feel like the majority of coverage in the design world is spent on the latest outputs from digital-first companies like AirBnB, Uber, Slack, etc.

The end result is the same. A typical developer or designer is left feeling that they—and their company—are behind the curve. It’s like they’re only seeing the Instagram version of their industry, all airbrushed and filtered, and they’re comparing that to their day-to-day work. That can’t be healthy.

Personally, I’d love to hear stories from the trenches of more representative, traditional companies. I also think that would help get an important message to people working in similar companies:

You are not alone!

Split

When I talk about evaluating technology for front-end development, I like to draw a distinction between two categories of technology.

On the one hand, you’ve got the raw materials of the web: HTML, CSS, and JavaScript. This is what users will ultimately interact with.

On the other hand, you’ve got all the tools and technologies that help you produce the HTML, CSS, and JavaScript: pre-processors, post-processors, transpilers, bundlers, and other build tools.

Personally, I’m much more interested and excited by the materials than I am by the tools. But I think it’s right and proper that other developers are excited by the tools. A good balance of both is probably the healthiest mix.

I’m never sure what to call these two categories. Maybe the materials are the “external” technologies, because they’re what users will interact with. Whereas all the other technologies—that mostly live on a developer’s machine—are the “internal” technologies.

Another nice phrase is something I heard during Chris’s talk at An Event Apart in Seattle, when he quoted Brad, who talked about the front of the front end and the back of the front end.

I’m definitely more of a front-of-the-front-end kind of developer. I have opinions on the quality of the materials that get served up to users; the output should be accessible and performant. But I don’t particularly care about the tools that produced those materials on the back of the front end. Use whatever works for you (or whatever works for your team).

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time. Like I said, from a user’s point of view, it’s irrelevant what text editor or version control system you use.

Now, you could make the argument that anything that is good for developer convenience is automatically good for user experience because faster, more efficient development should result in better output. While that’s true in theory, I highly recommend Alex’s post, The “Developer Experience” Bait-and-Switch.

Where it gets interesting is when a technology that’s designed for developer convenience is made out of the very materials being delivered to users. For example, a CSS framework like Bootstrap is made of CSS. That’s different to a tool like Sass which outputs CSS. Whether or not a developer chooses to use Sass is irrelevant to the user—the final output will be CSS either way. But if a developer chooses to use a CSS framework, that decision has a direct impact on the user experience. The user must download the framework in order for the developer to get the benefit.

So whereas Sass sits at the back of the front end—where I don’t care what you use—Bootstrap sits at the front of the front end. For tools like that, I don’t think saying “use whatever works for you” is good enough. It’s got to be weighed against the cost to the user.

Historically, it’s been a similar story with JavaScript libraries. They’re written in JavaScript, and so they’re going to be executed in the browser. If a developer wanted to use jQuery to make their life easier, the user paid the price in downloading the jQuery library.

But I’ve noticed a welcome change with some of the bigger JavaScript frameworks. Whereas the initial messaging around frameworks like React touted the benefits of state management and the virtual DOM, I feel like that’s not as prevalent now. You’re much more likely to hear people—quite rightly—talk about the benefits of modularity and componentisation. If you combine that with the rise of Node—which means that JavaScript is no longer confined to the browser—then these frameworks can move from the front of the front end to the back of the front end.

We’ve certainly seen that at Clearleft. We’ve worked on multiple React projects, but in every case, the output was server-rendered. Developers get the benefit of working with a tool that helps them. Users don’t pay the price.

For me, this question of whether a framework will be used on the client side or the server side is crucial.

Let me tell you about a Clearleft project that sticks in my mind. We were working with a big international client on a product that was going to be rolled out to students and teachers in developing countries. This was right up my alley! We did plenty of research into network conditions and typical device usage. That then informed a tight performance budget. Every design decision—from web fonts to images—was informed by that performance budget. We were producing lean, mean markup, CSS, and JavaScript. But we weren’t the ones implementing the final site. That was being done by the client’s offshore software team, and they insisted on using React. “That’s okay”, I thought. “React can be used server-side so we can still output just what’s needed, right?” Alas, no. These developers did everything client side. When the final site launched, the log-in screen alone required megabytes of JavaScript just to render a form. It was, in my opinion, entirely unfit for purpose. It still pains me when I think about it.

That was a few years ago. I think that these days it has become a lot easier to make the decision to use a framework on the back of the front end. Like I said, that’s certainly been the case on recent Clearleft projects that involved React or Vue.

It surprises me, then, when I see the question of server rendering or client rendering treated almost like an implementation detail. It might be an implementation detail from a developer’s perspective, but it’s a key decision for the user experience. The performance cost of putting your entire tech stack into the browser can be enormous.

Alex Sanders from the development team at The Guardian published a post recently called Revisiting the rendering tier. In it, he describes how they’re moving to React. Now, if this were a move to client-rendered React, that would make a big impact on the user experience. The thing is, I couldn’t tell from the article whether React was going to be used in the browser or on the server. The article talks about “rendering”—which is something that browsers do—and “the DOM”—which is something that only exists in browsers.

So I asked. It turns out that this plan is very much about generating HTML and CSS on the server before sending it to the browser. Excellent!

With that question answered, I’m cool with whatever they choose to use. In this case, they’re choosing to use CSS-in-JS (although, to be pedantic, there’s no C anymore so technically it’s SS-in-JS). As long as the “JS” part is JavaScript on a server, then it makes no difference to the end user, and therefore no difference to me. Not my circus, not my monkeys. For users, the end result is the same whether styling is applied via a selector in an external stylesheet or, for example, via an inline style declaration (and in some situations, a server-rendered CSS-in-JS solution might be better for performance). And so, as a user-centred developer, this is something that I don’t need to care about.
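Just to illustrate, here’s a minimal sketch of server-rendered CSS-in-JS, using styled-components as one example (the Button component is made up purely for illustration):

import React from 'react';
import { renderToString } from 'react-dom/server';
import styled, { ServerStyleSheet } from 'styled-components';

// A hypothetical styled component.
const Button = styled.button`
    background-color: red;
`;

const sheet = new ServerStyleSheet();

// Gather the styles while rendering the component to HTML on the server…
const html = renderToString(sheet.collectStyles(React.createElement(Button, null, 'Click me')));

// …then pull them out as ordinary style tags to include in the response.
const styleTags = sheet.getStyleTags();

console.log(`<head>${styleTags}</head><body>${html}</body>`);

By the time it reaches the browser, it’s just markup and styles. No client-side JavaScript required.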

Except…

I have misgivings. But just to be clear, these misgivings have nothing to do with users. My misgivings are entirely to do with another group of people: the people who make websites.

There’s a second-order effect. By making React—or even JavaScript in general—a requirement for styling something on a web page, the barrier to entry is raised.

At least, I think that the barrier to entry is raised. I completely acknowledge that this is a subjective judgement. In fact, the reason why a team might decide to make JavaScript a requirement for participation might well be because they believe it makes it easier for people to participate. Let me explain…

It wasn’t that long ago that devs coming from a Computer Science background were deriding CSS for its simplicity, complaining that “it’s broken” and turning their noses up at it. That rhetoric, thankfully, is waning. Nowadays they’re far more likely to acknowledge that CSS might be simple, but it isn’t easy. Concepts like the cascade and specificity are real head-scratchers, and any prior knowledge from imperative programming languages won’t help you in this declarative world—all your hard-won experience and know-how isn’t transferable. Instead, it can seem as though all this cascading and specificity is butchering the modularity of your nicely isolated components.

It’s no surprise that programmers with this kind of background would treat CSS as damage and find ways to route around it. The many flavours of CSS-in-JS are testament to this. From a programmer’s point of view, this solution has made things easier. Best of all, as long as it’s being done on the server, there’s no penalty for end users. But now the price is paid in the diversity of your team. In order to participate, a Computer Science programming mindset is now pretty much a requirement. For someone coming from a more declarative background—with really good HTML and CSS skills—everything suddenly seems needlessly complex. And as Tantek observed:

Complexity reinforces privilege.

The result is a form of gatekeeping. I don’t think it’s intentional. I don’t think it’s malicious. It’s being done with the best of intentions, in pursuit of efficiency and productivity. But these code decisions are reflected in hiring practices that exclude people with different but equally valuable skills and perspectives.

Rachel describes HTML, CSS and our vanishing industry entry points:

If we make it so that you have to understand programming to even start, then we take something open and enabling, and place it back in the hands of those who are already privileged.

I think there’s a comparison here with toxic masculinity. Toxic masculinity is obviously terrible for women, but it’s also really shitty for men in the way it stigmatises any male behaviour that doesn’t fit its worldview. Likewise, if the only people your team is interested in hiring are traditional programmers, then those programmers are going to resent having to spend their time dealing with semantic markup, accessibility, styling, and other disciplines that they never trained in. Heydon correctly identifies this as reluctant gatekeeping:

By assuming the role of the Full Stack Developer (which is, in practice, a computer scientist who also writes HTML and CSS), one takes responsibility for all the code, in spite of its radical variance in syntax and purpose, and becomes the gatekeeper of at least some kinds of code one simply doesn’t care about writing well.

This hurts everyone. It’s bad for your team. It’s even worse for the wider development community.

Last year, I was asked “Is there a fear or professional challenge that keeps you up at night?” I responded:

My greatest fear for the web is that it becomes the domain of an elite priesthood of developers. I firmly believe that, as Tim Berners-Lee put it, “this is for everyone.” And I don’t just mean it’s for everyone to use—I believe it’s for everyone to make as well. That’s why I get very worried by anything that raises the barrier to entry to web design and web development.

I’ve described a number of dichotomies here:

  • Materials vs. tools,
  • Front of the front end vs. back of the front end,
  • User experience vs. developer experience,
  • Client-side rendering vs. server-side rendering,
  • Declarative languages vs. imperative languages.

But the split that worries me the most is this:

  • The people who make the web vs. the people who are excluded from making the web.

Dev perception

Chris put together a terrific round-up of posts recently called Simple & Boring. It links off to a number of great articles on the topic of complexity (and simplicity) in web development.

I had linked to quite a few of the articles myself already, but one I hadn’t seen was from David DeSandro who wrote New tech gets chatter:

You don’t hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

I think that’s a very good point.

It’s relatively easy to write and speak about new technologies. You’re excited about them, and there’s probably an eager audience who can learn from what you have to say.

It’s trickier to write something insightful about a tried and trusted (perhaps even boring) technology that’s been around for a while. You could maybe write little tips and tricks, but I bet your inner critic would tell you that nobody’s interested in hearing about that old tech. It’s boring.

The result is that what’s being written about is not a reflection of what’s being widely used. And that’s okay …as long as you know that’s the case. But I worry that there’s a perception problem. Because of the outsize weighting of new and exciting technologies, a typical developer could feel that their skills are out of date and the technologies they’re using are passé …even if those technologies are actually in wide use.

I don’t know about you, but I constantly feel like I’m behind the curve because I’m not currently using TypeScript or GraphQL or React. Those are all interesting technologies, to be sure, but the time to pick any of them up is when they solve a specific problem I’m having. Learning a new technology just to mitigate a fear of missing out isn’t a scalable strategy. It’s one thing to investigate a technology because you genuinely find it exciting; it’s quite another to feel like you must investigate it just to survive. That way lies burn-out.

I find it very grounding to talk to Drew and Rachel about the people using their Perch CMS product. These are working developers, but they are far removed from the world of tools and frameworks forged in the startup world.

In a recent (excellent) article comparing the performance of Formula One websites, Jake made this observation at the end:

However, none of the teams used any of the big modern frameworks. They’re mostly Wordpress & Drupal, with a lot of jQuery. It makes me feel like I’ve been in a bubble in terms of the technologies that make up the bulk of the web.

I think this is very astute. I also think it’s completely understandable to form ideas about what matters to developers by looking at what’s being discussed on Twitter, what’s being starred on GitHub, what’s being spoken about at conferences, and what’s being written about on Ev’s blog. But it worries me when I see browser devrel teams focusing their efforts on what appears to be the needs of typical developers based on the amount of ink spilled and breath expelled.

I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that any other industry would value more highly than Uber for dogs.

Trys wrote a great blog post called City life, where he compares his experience of doing CMS-driven agency work with his experience working at a startup in Shoreditch:

I was chatting to one of the team about my previous role. “I built two websites a month in WordPress”.

They laughed… “WordPress! Who uses that anymore?!”

Nearly a third of the web as it turns out - but maybe not on the Silicon Roundabout.

I’m not necessarily suggesting that there should be more articles and talks about older, more established technologies. Conferences in particular are supposed to give audiences a taste of what’s coming—they can be a great way of quickly finding out what’s exciting in the world of development. But we shouldn’t feel bad if those topics don’t match our day-to-day reality.

Ultimately what matters is building something—a website, a web app, whatever—that best serves end users. If that requires a new and exciting technology, that’s great. But if it requires an old and boring technology, that’s also great. What matters here is appropriateness.

When we’re evaluating technologies for appropriateness, I hope that we will do so through the lens of what’s best for users, not what we feel compelled to use based on a gnawing sense of irrelevancy driven by the perceived popularity of newer technologies.

CSS custom properties in generated content

Cassie posted a neat tiny lesson that she’s written a reduced test case for.

Here’s the situation…

CSS custom properties are fantastic. You can drop them in just about anywhere that a property takes a value.

Here’s an example of defining a custom property for a length:

:root {
    --my-value: 1em;
}

Then I can use that anywhere I’d normally give something a length:

.my-element {
    margin-bottom: var(--my-value);
}

I went a bit overboard with custom properties on the new Patterns Day site. I used them for colour values, font stacks, and spacing. Design tokens, I guess. They really come into their own when you combine them with media queries: you can update the values of the custom properties based on screen size …without having to redefine where those properties are applied. Also, they can be updated via JavaScript so they make for a great common language between CSS and JavaScript: you can define where they’re used in your CSS and then update their values in JavaScript, perhaps in response to user interaction.
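Here’s a rough sketch of that common language in action (the slider element and the --spacing property are hypothetical):

// The CSS decides where the custom property is used;
// the JavaScript only ever changes its value.
const root = document.documentElement;
const slider = document.querySelector('#spacing-slider');

slider.addEventListener('input', () => {
    // Every rule that references var(--spacing) picks up the new value.
    root.style.setProperty('--spacing', `${slider.value}em`);
});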

But there are a few places where you can’t use custom properties. You can’t, for example, use them as part of a media query. This won’t work:

@media all and (min-width: var(--my-value)) {
    ...
}

You also can’t use them in generated content if the value is a number. This won’t work:

:root {
    --number-value: 15;
}
.my-element::before {
    content: var(--number-value);
}

Fair enough. Generated content in CSS is kind of a strange beast. Eric delivered an entire hour-long talk at An Event Apart in Seattle on generated content.

But Cassie found a workaround if the value you want to put into that content property is numeric. The CSS counter value is a kind of generated content—the numbers that appear in front of ordered list items. And you can control the value of those numbers from CSS.

CSS counters work kind of like variables. You name them and assign values to them using the counter-reset property:

.my-element {
    counter-reset: mycounter 15;
}

You can then reference the value of mycounter in a content property using the counter value:

.my-element {
    content: counter(mycounter);
}

Cassie realised that even though you can’t pass in a custom property directly to generated content, you can pass in a custom property to the counter-reset property. So you can do this:

:root {
    --number-value: 15;
}
.my-element {
    counter-reset: mycounter var(--number-value);
    content: counter(mycounter);
}

In a roundabout way, this allows you to use a custom property for generated content!

I realise that the use cases are pretty narrow, but I can’t help but be impressed with the thinking behind this. Personally, I would’ve just read that generated content doesn’t accept custom properties and moved on. I would’ve given up quickly. But Cassie took a step back and found a creative pass-the-parcel solution to the problem.

I feel like this is a hack in the best sense of the word: a creatively improvised solution to a problem or limitation.

I was trying to display the numeric value stored in a CSS variable inside generated content… Turns out you can’t do that. But you can do this… codepen.io/cassie-codes/p… (not saying you should, but you could)

Other people’s weeknotes

Paul is writing weeknotes. Here’s his latest.

Amy is writing weeknotes. Here’s her latest.

Aegir is writing weeknotes. Here’s his latest.

Nat is writing weeknotes. Here’s their latest.

Alice is writing weeknotes. Here’s her latest.

Mark is writing weeknotes. Here’s his latest.

I enjoy them all.

Unsolved Problems by Beth Dean

An Event Apart in Seattle continues. It’s the afternoon of day two and Beth Dean is here to give a talk called Unsolved Problems:

Technology products are being adopted faster than ever. We’ve spent a lot of time adopting new technology, but not as much time considering the social impact of doing so. This talk looks at large scale system design in the offline world, and takes lessons from them to our online work. You’ll learn how to expand your design approach from self-contained products, to considering the broader systems in which they exist.

Fun fact: An Event Apart was the first conference that Beth attended over ten years ago.

Who recognises this guy on screen? It’s Robert Stack, the creepy host of Unsolved Mysteries. It was kind of like the X-Files. The X-Files taught Beth to be a sceptic. Imagine Beth’s surprise when her job at Facebook led her to actual conspiracies. It’s been a hard year, what with Cambridge Analytica and all.

Beth’s team is focused on how people experience ads, while the whole rest of the company is focused on ads from the opposite end. She’s the Fox Mulder of the company.

Technology today has incredible reach. In recent years, we’ve seen 1:1 harm. That’s when a product negatively affects someone directly. In their book, Eric and Sara point out that Facebook is often the first company to solve these problems.

1:many harm is another use of technology. Designing in isolation isn’t new to tech. We’ve seen 1:many harm in urban planning. Brasilia is a beautiful city that nobody wants to live in. You need messy, mixed-use spaces, not a space designed for cars. Niemeyer planned for efficiency, not reality.

Eichler buildings were supposed to be egalitarian. But everything that makes these single-story homes great places to live also makes them great targets for criminals. Isolation by intentional design leads to a less safe place to live.

One of Frank Gehry’s buildings turned into a deathtrap when it was covered with snow. And in summer, the reflective material makes it impossible to sit on one side of it. His Facebook office building has some “interesting” restroom allocation, which was planned last.

Ohio had a deer overpopulation problem. So the solution they settled on was to introduce coyotes. Now there’s a coyote problem. When coyotes breed with stray dogs, they start to get aggressive and they hunt in packs. This is the cobra effect: when the solution to your problem makes the problem worse. The British government offered a bounty for cobras in India. So people bred snakes for the bounty. So they got rid of the bounty …and then all those snakes were released into the wild.

So-called “ride sharing” apps are about getting one person from point A to point B. They’re not about making getting around easier in general.

Google traffic directions don’t factor in the effect of Google giving everyone the same traffic directions.

AirBnB drives up rent …even though it started out as a way to help people who couldn’t make rent. Sounds like cobra farming.

Automating Inequality by Virginia Eubanks is an excellent book about being dropped by health insurance. An algorithm did it. By taking broken systems and automating them, we accelerate disenfranchisement.

Then there’s Facebook. Psychological warfare is not new. Radio and television have influenced elections long before the internet. Politicians changed their language to fit the medium of radio.

The internet has removed all friction that helps us behave cooperatively. Removing friction was once our goal, but it turns out that friction is sometimes useful. The internet has turned into an outrage machine.

Solving problems in the isolation of our own products ignores the broader context of society.

The Waze map reflects cities as they are, not the way someone wishes them to be.

—Noam Bardin, CEO of Waze

From bulletin boards to today’s web, the internet has always been toxic because human nature is toxic. Maybe that’s the bigger problem to solve.

We can look to other industries…

IDEO redesigned the hospital experience. People were introduced to their entire care staff on their first visit. Sloan Kettering took a similar approach. Artwork serves as wayfinding. Every room has its own bathroom. A Chicago hospital included gardens because gardens improve recovery.

These hospital examples all:

  • Designed for an intended outcome.
  • Met people where they were.
  • Strengthened existing support networks.

We’ve seen some bad examples from urban planning, but there are success stories too.

A person on a $30 bicycle is as important as someone in a $30,000 car, said Enrique Peñalosa.

Copenhagen once faced awful traffic congestion. Now people cycle everywhere. It’s the fastest way to get around. The city is designed for bicycles first. People rode more when it felt safer. It’s no coincidence that Copenhagen ranks as one of the most livable cities in the world.

Scandinavian prisons use a concept called restorative justice. The staff plays badminton with the inmates. They cook together. Treat people like dirt and they will act like dirt. Treat people like people and they will act like people. Recidivism rates in Norway are now very low.

  • Design for dignity and cooperation.
  • Solve for everyone in a system.
  • Policy should reflect intended outcomes.

The de Havilland Comet was made of metal. After a few blew apart at the seams, they moved away from riveted construction. Airlines today develop a culture of crew resource management that encourages people to speak up.

  • Plan for every point of failure.
  • Empower everyone on a team to solve problems.
  • Adapt.

What can we do?

  • Policies affect design. We need to work more closely with policy makers.
  • Question access. Are all opinions equal? Where are computers making decisions that should involve people?
  • Forget neutrality. Technology is not neutral. Neutrality allows us to abdicate responsibility.
  • Stay a little bit paranoid. Think about what the worst case scenario might be.

Make people better curators. How might we allow people to assess the veracity of information for themselves? What if we gave people better tools to affect their overall experience, not just small customisations?

We can use what we know about people to bring out their best behaviours. We can empower people to take action instead of just outrage.

What if we designed for the good of the community instead of the success of individuals? Like the Vauban in Freiburg! It was squatted, and the city gave control to the squatters to create an eco neighbourhood with affordable housing.

We need to think about what kind of worlds we want to create. What if we made the web less like a mall and more like a public park?

These are hard problems. But we solve hard technology problems every day. We could be the first generation of builders to solve technology’s hard problems.