
Nosediving

Nosedive is the first episode of season three of Black Mirror.

It’s fairly light-hearted by the standards of Black Mirror, but all the more chilling for that. It depicts a dystopia where people rate one another for points that unlock preferential treatment. It’s like a twisted version of the whuffie from Cory Doctorow’s Down And Out In The Magic Kingdom. Cory himself points out that reputation economies are a terrible idea.

Nosedive has become a handy shortcut for pointing to the dangers of social media (in the same way that Minority Report was a handy shortcut for gestural interfaces and Her is a handy shortcut for voice interfaces).

“Social media is bad, m’kay?” is an understandable but, I think, fairly shallow reading of Nosedive. The problem isn’t with the apps, it’s with the system. A world in which we desperately need to keep our score up if we want to have any hope of advancing? That’s a nightmare scenario.

The thing is …that system exists today. Credit scores are literally a means of applying a numeric value to human beings.

Nosedive depicts a world where your score determines which seats you get in a restaurant, or which model of car you can rent. Meanwhile, in our world, your score determines whether or not you can get a mortgage.

Nosedive depicts a world in which you know your own score. Meanwhile, in our world, good luck with that:

It is very difficult for a consumer to know in advance whether they have a high enough credit score to be accepted for credit with a given lender. This situation is due to the complexity and structure of credit scoring, which differs from one lender to another.

Lenders need not reveal their credit score head, nor need they reveal the minimum credit score required for the applicant to be accepted. Owing only to this lack of information to the consumer, it is impossible for him or her to know in advance if they will pass a lender’s credit scoring requirements.

Black Mirror has a good track record of exposing what’s unsavoury about our current time and place. On the surface, Nosedive seems to be an exposé on the dangers of going too far with the presentation of self in everyday life. Scratch a little deeper though, and it reveals an even more uncomfortable truth: that we’re living in a world driven by systems even worse than what’s depicted in this dystopia.

How about this for a nightmare scenario:

Two years ago Douglas Rushkoff had an unpleasant encounter outside his Brooklyn home. Taking out the rubbish on Christmas Eve, he was mugged — held at knife-point by an assailant who took his money, his phone and his bank cards. Shaken, he went back indoors and sent an email to his local residents’ group to warn them about what had happened.

“I got two emails back within the hour,” he says. “Not from people asking if I was OK, but complaining that I’d posted the exact spot where the mugging had taken place — because it might adversely affect their property values.”

Getaway

It had been a while since we had a movie night at Clearleft so I organised one for last night. We usually manage to get through two movies, and there’s always a unifying theme decided ahead of time.

For last night, I decided that the broad theme would be …transport. But then, through voting on Slack, people could decide what the specific mode of transport would be. The choices were:

  • taxi,
  • getaway car,
  • truck, or
  • submarine.

Nobody voted for submarines. That’s a shame, but in retrospect it’s easy to understand—submarine films aren’t about transport at all. Quite the opposite. Submarine films are about being trapped in a metal womb/tomb (and many’s the spaceship film that qualifies as a submarine movie).

There were some votes for taxis and trucks, but the getaway car was the winner. I then revealed which films had been pre-selected for each mode of transport.

  • Taxi
  • Getaway car (shorts: Getaway Driver, The Getaway)
  • Truck
  • Submarine

I thought Baby Driver would be a shoo-in for the first film, but enough people had already seen it quite recently to put it out of the running. We watched Wheelman instead, which was like Locke meets Drive.

So what would the second film be?

Well, some of those films in the full list could potentially fall into more than one category. The taxi in Collateral is (kinda) being used as a getaway car. And if you expand the criterion to getaway vehicle, then Furiosa’s war rig surely counts, right?

Okay, we were just looking for an excuse to watch Fury Road again. I mean, c’mon, it was the black and chrome edition! I had the great fortune of seeing that on the big screen a while back and I’ve been raving about it ever since. Besides, you really don’t need an excuse to rewatch Fury Road. I loved it the first time I saw it, and it just keeps getting better and better each time. The editing! The sound! The world-building!

With every viewing, it feels more and more like the film for our time. It may have been a bit of a stretch to watch it under the thematic umbrella of getaway vehicles, but it’s a getaway for our current political climate: instead of the typical plot involving a gang driving at full tilt from a bank heist, imagine one where the gang turns around, ousts the bankers, and replaces the whole banking system with a matriarchal community.

“Hope is a mistake”, Max mansplains (maxplains?) to Furiosa at one point. He’s wrong. Judicious hope is what drives us forward (or, in this case, back …to the citadel). Watching Fury Road again, I drew hope from the character of Nux. An alt-warboy in thrall to a demagogue and raised on a diet of fake news (Valhalla! V8!) can not only be turned by tenderness, he can become an ally to those working for a better world.

Witness!

Brighton conferences

I’ve been to four conferences in two weeks. I wasn’t speaking at any of them so I was able to relax and enjoy the talks.

There was UX Brighton on November 3rd, featuring a terrific opening keynote from Boxman.

James Box speaking at UX Brighton 2017

One week later, I was in the Duke of York’s cinema for FFConf along with all the other Clearleft frontend devs—it’s always a thought-provoking day out.

FFConf 2017 Day 2

Yesterday, I went to Meaning in the daytime, and Bytes in the evening.

It was amazing to get to see @ambrwlsn90 speak at #bytesconf tonight. She is a brilliant speaker! 🙌🏻

Every one of those events was in Brighton. That’s pretty good going for a town this size …and that’s not even counting the regular events like Async, Codebar, and Ladies That UX.

What is a Progressive Web App?

It seems like any new field goes through an inevitable growth spurt that involves “defining the damn thing.” For the first few years of the IA Summit, every second presentation seemed to be about defining what Information Architecture actually is. See also: UX. See also: Content Strategy.

Now it seems to be happening with Progressive Web Apps …which is odd, considering the damn thing is defined damn well.

I’ve written before about the naming of Progressive Web Apps. On the whole, I think it’s a pretty good term, especially if you’re trying to convince the marketing team.

Regardless of the specifics of the name, what I like about Progressive Web Apps is that they have a clear definition. It reminds me of Responsive Web Design. Whatever you think of that name, it comes with a clear list of requirements:

  1. A fluid layout,
  2. Fluid images, and
  3. Media queries.
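
All three ingredients fit in a few lines of CSS—here’s a sketch, with purely illustrative selectors and values:

    /* 1. A fluid layout */
    .container {
        width: 90%;
        margin: 0 auto;
    }
    /* 2. Fluid images */
    img {
        max-width: 100%;
    }
    /* 3. Media queries */
    @media all and (min-width: 40em) {
        .container {
            width: 60%;
        }
    }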

Likewise, Progressive Web Apps consist of:

  1. HTTPS,
  2. A service worker, and
  3. A Web App Manifest.

There’s more you can do in addition to that (just as there’s plenty more you can do on a responsive site), but the core definition is nice and clear.

Except, for some reason, that clarity is being lost.

Here’s a post by Ben Halpern called What the heck is a “Progressive Web App”? Seriously.

I have a really hard time describing what a progressive web app actually is.

He points to Google’s intro to Progressive Web Apps:

Progressive Web Apps are user experiences that have the reach of the web, and are:

  • Reliable - Load instantly and never show the downasaur, even in uncertain network conditions.
  • Fast - Respond quickly to user interactions with silky smooth animations and no janky scrolling.
  • Engaging - Feel like a natural app on the device, with an immersive user experience.

Those are great descriptions of the benefits of Progressive Web Apps. Perfect material for convincing your clients or your boss. But that appears on developers.google.com …surely it would be more beneficial for that audience to know the technologies that comprise Progressive Web Apps?

Ben Halpern again:

Google’s continued use of the term “quality” in describing things leaves me with a ton of confusion. It really seems like they want PWA to be a general term that doesn’t imply any particular implementation, and have it be focused around the user experience, but all I see over the web is confusion as to what they mean by these things. My website is already “engaging” and “immersive”, does that mean it’s a PWA?

I think it’s important to use the right language for the right audience.

If you’re talking to the business people, tell them about the return on investment you get from Progressive Web Apps.

If you’re talking to the marketing people, tell them about the experiential benefits of Progressive Web Apps.

But if you’re talking to developers, tell them that a Progressive Web App is a website served over HTTPS with a service worker and manifest file.
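
In fact, that developer-facing definition fits in a handful of lines. A rough sketch (the file names here are my assumptions):

    <!-- Served over HTTPS, plus: -->
    <link rel="manifest" href="/manifest.json">
    <script>
    // Register a service worker, if the browser supports it.
    if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('/serviceworker.js');
    }
    </script>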

Installing Progressive Web Apps

When I was testing the dConstruct Audio Archive—which is now a Progressive Web App—I noticed some interesting changes in how Chrome on Android offers the “add to home screen” prompt.

It used to literally say “add to home screen.”

Getting the “add to home screen” prompt for https://huffduffer.com/ on Android Chrome. And there’s the “add to home screen” prompt for https://html5forwebdesigners.com/. HTTPS + manifest.json + service worker = “add to home screen” prompt.

Now it simply says “add.”

The dConstruct Audio Archive is now a Progressive Web App

I vaguely remember there being some talk of changing the labelling, but I could’ve sworn it was going to change to “install”. I’ve got to be honest, just having the word “add” doesn’t seem to provide much context. Based on the quick’n’dirty usability testing I did with some co-workers, it just made things confusing. “Add what?” “What am I adding?”

Additionally, the prompt appeared immediately on the first visit to the site. I thought there was supposed to be an added “engagement” metric in order for the prompt to appear; that the user needs to visit the site more than once.

You’d think I’d be happy that users will be presented with the home-screen prompt immediately, but based on the behaviour I saw, I’m not sure it’s a good thing. Here’s what I observed:

  1. The user types the URL archive.dconstruct.org into the address bar.
  2. The site loads.
  3. The home-screen prompt slides up from the bottom of the screen.
  4. The user immediately moves to dismiss the prompt (cue me interjecting “Don’t close that!”).

This behaviour is entirely unsurprising for three reasons:

  1. We web designers and web developers have trained users to dismiss overlays and pop-ups if they actually want to get to the content. Nobody’s going to bother to actually read the prompt if there’s a 99% chance it’s going to say “Sign up to our newsletter!” or “Take our survey!”.
  2. The prompt appears below the “line of death” so there’s no way to tell it’s a browser or OS-level dialogue rather than a JavaScript-driven pop-up from the site.
  3. Because the prompt now appears on the first visit, no trust has been established between the user and the site. If the prompt only appeared on later visits (or later navigations during the first visit) perhaps it would stand a greater chance of survival.

It’s still possible to add a Progressive Web App to the home screen, but the option to do that is hidden behind the mysterious three-dots-vertically-stacked icon (I propose we call this the shish kebab icon to distinguish it from the equally impenetrable hamburger icon).

I was chatting with Andreas from Mozilla at the View Source conference last week, and he was filling me in on how Firefox on Android does the add-to-homescreen flow. Instead of a one-time prompt, they’ve added a persistent icon above the “line of death” (the icon is a combination of a house and a plus symbol).

When a Firefox 58 user arrives on a website that is served over HTTPS and has a valid manifest, a subtle badge will appear in the address bar: when tapped, an “Add to Home screen” confirmation dialog will slide in, through which the web app can be added to the Android home screen.

This kind of badging also has issues (without the explicit text “add to home screen”, the user doesn’t know what the icon does), but I think a more persistently visible option like this works better than a one-time prompt.

Firefox is following the lead of the badging approach pioneered by the Samsung Internet browser. It provides a plus symbol that, when pressed, reveals the options to add to home screen or simply bookmark.

What does it mean to be an App?

I don’t think Chrome for Android has any plans for this kind of badging, but they are working on letting the site authors provide their own prompts. I’m not sure this is such a good idea, given our history of abusing pop-ups and overlays.

Sadly, I feel that any solution that relies on an unrequested overlay is doomed. That’s on us. The way we’ve turned browsing the web—especially on mobile—into a frustrating chore of dismissing unwanted overlays is a classic tragedy of the commons. We blew it. Users don’t trust unrequested overlays, and I can’t blame them.

For what it’s worth, my opinion is that ambient badging is a better user experience than one-time prompts. That opinion is informed by a meagre amount of testing though. I’d love to hear from anyone who’s been doing more detailed usability testing of both approaches. I assume that Google, Mozilla, and Samsung are doing this kind of testing, and it would be really great to see the data from that (hint, hint).

But it might well be that ambient badging is just too subtle to even be noticed by the user.

On one end of the scale you’ve got the intrusiveness of an add-to-home-screen prompt, but on the other end of the scale you’ve got the discoverability problem of a subtle badge icon. I wonder if there might be a compromise solution—maybe a badge icon that pulses or glows on the first or second visit?

Of course that would also need to be thoroughly tested.

The dConstruct Audio Archive works offline

The dConstruct conference is as old as Clearleft itself. We put on the first event back in 2005, the year of our founding. The last dConstruct was in 2015. It had a good run.

I’m really proud of the three years I ran the show—2012, 2013, and 2014—and I have great memories from each event. I’m inordinately pleased that the individual websites are still online after all these years. I’m equally pleased with the dConstruct audio archive that we put online in 2012. Now that the event itself is no longer running, it truly is an archive—a treasury of voices from the past.

I think that these kinds of online archives are eminently suitable for some offline design. So I’ve added a service worker script to the dConstruct archive.

Caching

To start with, there’s the no-brainer: as soon as someone hits the website, pre-cache static assets like CSS, JavaScript, the logo, and icon images. Now subsequent page loads will be quicker—those assets are taken straight from the cache.
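
In the service worker, that pre-caching happens in the install event. Something like this sketch (the cache name and file paths are placeholders):

    // Pre-cache static assets when the service worker is installed.
    addEventListener('install', function (event) {
        event.waitUntil(
            caches.open('static-assets')
            .then(function (cache) {
                return cache.addAll([
                    '/css/global.css',
                    '/js/global.js',
                    '/img/logo.svg'
                ]);
            })
        );
    });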

But what about the individual pages? For something like Resilient Web Design—another site that won’t be updated—I pre-cache everything. I could do that with the dConstruct archive. All of the pages with all of the images add up to less than two megabytes; the entire site weighs less than a single page on Wired.com or The Verge.

In the end, I decided to go with a cache-as-you-go strategy. Every time a page or an image is fetched from the network, it is immediately put in a cache. The next time that page or image is requested, the file is served from that cache instead of the network.

Here’s the logic for fetch requests:

  1. First, look to see if the file is in a cache. If it is, great! Serve that.
  2. If the file isn’t in a cache, make a network request and serve the response …but put a copy of the file in the cache.
  3. The next time that file is requested, go to step one.
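
Translated into service worker code, that logic is a short fetch handler (the cache name is an assumption):

    addEventListener('fetch', function (event) {
        event.respondWith(
            // 1. Look for the file in a cache.
            caches.match(event.request)
            .then(function (cachedResponse) {
                if (cachedResponse) {
                    // Great! Serve that.
                    return cachedResponse;
                }
                // 2. Otherwise make a network request…
                return fetch(event.request)
                .then(function (networkResponse) {
                    // …and put a copy in the cache for next time.
                    var copy = networkResponse.clone();
                    caches.open('pages')
                    .then(function (cache) {
                        cache.put(event.request, copy);
                    });
                    return networkResponse;
                });
            })
        );
    });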

Save for offline

That caching strategy works great for pages, images, and other assets. But there’s one kind of file on the dConstruct archive that’s a bit different: the audio files. They can be fairly big, so I don’t want to cache those unless the user specifically requests it.

If you end up on the page for a particular talk, and your browser supports service workers, you’ll get an additional UI element in the list of options: a toggle to “save offline” (under the hood, it’s a checkbox). If you activate that option, then the audio file gets put into a cache.
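
Under the hood, the toggle does something like this (the element names and cache name are invented for the sketch):

    var checkbox = document.querySelector('#saveoffline');
    checkbox.addEventListener('change', function () {
        var audioFile = checkbox.getAttribute('data-audiofile');
        caches.open('audiofiles')
        .then(function (cache) {
            if (checkbox.checked) {
                // Fetch the audio file and store it in the cache.
                cache.add(audioFile);
            } else {
                // Remove it if the option is switched off.
                cache.delete(audioFile);
            }
        });
    });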

Now if you lose your network connection while browsing the site, you’ll get a custom offline page with the option to listen to any audio files you saved for offline listening. You’ll also see this collection of talks on the homepage, regardless of whether you’ve got an internet connection or not.

So if you’ve got a long plane journey ahead of you, have a browse around the dConstruct archive and select some talks for your offline listening pleasure.

Or just enjoy the speediness of browsing the site.

Turning another website into a Progressive Web App.

Speak and repeat

Rachel and Drew are starting a new service called Notist. It’s going to be a place where conference speakers can collate their materials. They’ve also got a blog.

The latest blog post, by Rachel, is called Do I need to write a brand new talk every time?

New presenters often feel that they need to write a brand-new talk for each conference they are invited to. Unless your job is giving presentations, or you are being paid very well for each talk you give, it is unlikely that you will be able to keep this up if you do more than a couple of talks per year.

It’s true. When I first started giving talks, I felt really guilty at the thought of “recycling” a talk I had already given. “Those people have paid money to be here—they deserve a brand new talk”, I thought. But then someone pointed out to me, “Y’know, it’s actually really arrogant to think that anyone would’ve seen any previous talk of yours.” Good point.

Giving the same talk more than once also allows me to put extra effort into the talk prep. If I’m going through the hair-tearing-out hell of trying to wrestle a talk into shape, I’m inevitably going to ask, “Why am I putting myself through this‽” If the answer to that question is “So you can give this talk just once”, I’d probably give up in frustration. But if I know that I’ll have an opportunity to present it more than once, improving it each time, then that gives me the encouragement to keep going.

I do occasionally give a one-off specially-commissioned talk, but those are the exceptions. My talk on the A element at CSS Day’s HTML Special was one of those. Same with my dConstruct talk back in 2008. I just gave a new talk on indie web building blocks at Mozilla’s View Source event, but I’d quite like to give that one again (if you’re running an event, get in touch if that sounds like something you’d like).

My most recent talk is Evaluating Technology. I first gave it at An Event Apart in San Francisco exactly a year ago. I’ll present it for the final time at An Event Apart in Denver in a few weeks. Then it will be retired; taken out to the woodshed; pivoted to video.

I’m already starting to think about my next talk. The process of writing a talk is something else that Rachel has written about. She’s far more together than me. My process involves lots more procrastination, worry, panic, and pacing. Some of the half-baked ideas will probably leak out as blog posts here. It’s a tortuous process, but in the end, I find the satisfaction of delivering the final talk to be very rewarding.

Here’s the thing, though: until I deliver the talk for the first time in front of an audience—no matter how much I might have practiced it—I have literally no idea if it’s any good. I honestly can’t tell whether what I’ve got is gold dust or dog shit (and during the talk prep, my opinion of it can vacillate within the space of five minutes). And so, even though I’ve been giving talks for many years now, if it’s brand new material, I get very nervous.

That’s one more reason to give the same talk more than once instead of creating a fresh hell each time.

The meaning of AMP

Ethan quite rightly points out some semantic sleight of hand by Google’s AMP team:

But when I hear AMP described as an open, community-led project, it strikes me as incredibly problematic, and more than a little troubling. AMP is, I think, best described as nominally open-source. It’s a corporate-led product initiative built with, and distributed on, open web technologies.

But so what, right? Tom-ay-to, tom-a-to. Well, here’s a pernicious example of where it matters: in a recent announcement of their intent to ship a new addition to HTML, the Google Chrome team cited the mood of the web development community thusly:

Web developers: Positive (AMP team indicated desire to start using the attribute)

If AMP were actually the product of working web developers, this justification would make sense. As it is, we’ve got one team at Google citing the preference of another team at Google but representing it as the will of the people.

This is just one example of AMP’s sneaky marketing where some finely-shaved semantics allows them to appear far more reasonable than they actually are.

At AMP Conf, the Google Search team were at pains to repeat over and over that AMP pages wouldn’t get any preferential treatment in search results …but they appear in a carousel above the search results. Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

The same semantic nit-picking can be found in their defence of caching. See, they’ve even got me calling it caching! It’s hosting. If I click on a search result, and I am taken to a page that has a URL beginning with https://www.google.com/amp/s/... then that page is being hosted on the domain google.com. That is literally what hosting means. Now, you might argue that the original version was hosted on a different domain, but the version that the user gets sent to is the Google copy. You can call it caching if you like, but you can’t tell me that Google aren’t hosting AMP pages.

That’s a particularly low blow, because it’s such a bait’n’switch. One of the reasons why AMP first appeared to be different to Facebook Instant Articles or Apple News was the promise that you could host your AMP pages yourself. That’s the very reason I first got interested in AMP. But if you actually want the benefits of AMP—appearing in the not-search-results carousel, pre-rendered performance, etc.—then your pages must be hosted by Google.

So, to summarise, here are three statements that Google’s AMP team are currently peddling as being true:

  1. AMP is a community project, not a Google project.
  2. AMP pages don’t receive preferential treatment in search results.
  3. AMP pages are hosted on your own domain.

I don’t think those statements are even truthy, much less true. In fact, if I were looking for the right term to semantically describe any one of those statements, the closest in meaning would be this:

A statement used intentionally for the purpose of deception.

That is the dictionary definition of a lie.

Update: That last part was a bit much. Sorry about that. I know it’s a bit much because The Register got all gloaty about it.

I don’t think the developers working on the AMP format are intentionally deceptive (although they are engaging in some impressive cognitive gymnastics). The AMP ecosystem, on the other hand, that’s another story—the preferential treatment of Google-hosted AMP pages in the carousel and in search results; that’s messed up.

Still, I would do well to remember that there are well-meaning people working on even the fishiest of projects.

Except for the people working at the shitrag that is The Register.

(The other strong signal that I overstepped the bounds of decency was that this post attracted the pond scum of Hacker News. That’s another place where the “well-meaning people work on even the fishiest of projects” rule definitely doesn’t apply.)

Pattern Libraries, Performance, and Progressive Web Apps

Ever since its founding in 2005, Clearleft has been laser-focused on user experience design.

But we’ve always maintained a strong front-end development arm. The front-end development work at Clearleft is always in service of design. Over the years we’ve built up a wealth of expertise on using HTML, CSS, and JavaScript to make better user experiences.

Recently we’ve been doing a lot of strategic design work—the really in-depth long-term engagements that begin with research and continue through to design consultancy and collaboration. That means we’ve got availability for front-end development work. Whether it’s consultancy or production work you’re looking for, this could be a good opportunity for us to work together.

There are three particular areas of front-end expertise we’re obsessed with…

Pattern Libraries

We caught the design systems bug years ago, way back when Natalie started pioneering pattern libraries as our primary deliverable (or pattern portfolios, as we called them then). This approach has proven effective time and time again. We’ve spent years now refining our workflow and thinking around modular design. Fractal is the natural expression of this obsession. Danielle and Mark have been working flat-out on version 2. They’re very eager to share everything they’ve learned along the way …and help others put together solid pattern libraries.

Danielle Huntrods Mark Perkins

Performance

Thinking about it, it’s no surprise that we’re crazy about performance at Clearleft. Like I said, our focus is on user experience, and when it comes to user experience on the web, nothing but nothing is more important than performance. The good news is that the majority of performance fixes can be done on the front end—images, scripts, fonts …it’s remarkable how much difference a good front-end overhaul can make to the bottom line. That’s what Graham has been obsessing over.

Graham Smith

Progressive Web Apps

Over the years I’ve found myself getting swept up in exciting new technologies on the web. When Clearleft first formed, my head was deep into DOM Scripting and Ajax. Half a decade later it was HTML5. Now it’s service workers. I honestly think it’s a technology that could be as revolutionary as Ajax or HTML5 (maybe I should write a book to that effect).

I’ve been talking about service workers at conferences this year, and I can’t hide my excitement:

There’s endless possibilities of what you can do with this technology. It’s very powerful.

Combine a service worker with a web app manifest and you’ve got yourself a Progressive Web App. It’s not just a great marketing term—it’s an opportunity for the web to truly excel at delivering the kind of user experiences previously only associated with native apps.

Jeremy Keith

I’m very very keen to work with companies and organisations that want to harness the power of service workers and Progressive Web Apps. If that’s you, get in touch.

Whether it’s pattern libraries, performance, or Progressive Web Apps, we’ve got the skills and expertise to share with you.

Notifications

I’ve written before about how I use apps on my phone:

If I install an app on my phone, the first thing I do is switch off all notifications. That saves battery life and sanity.

The only time my phone is allowed to ask for my attention is for phone calls, SMS, or FaceTime (all rare occurrences). I initiate every other interaction—Twitter, Instagram, Foursquare, the web. My phone is a tool that I control, not the other way around.

To me, this seems like a perfectly sensible thing to do. I was surprised by how others thought it was radical and extreme.

I’m always shocked when I’m out and about with someone who has their phone set up to notify them of any activity—a mention on Twitter, a comment on Instagram, or worst of all, an email. The thought of receiving a notification upon receipt of an email gives me the shivers. Allowing those kinds of notifications would feel like putting shackles on my time and attention. Instead, I think I’m applying an old-school RSS mindset to app usage: pull rather than push.

Don’t get me wrong: I use apps on my phone all the time: Twitter, Instagram, Swarm (though not email, except in direst emergency). Even without enabling notifications, I still have to fight the urge to fiddle with my phone—to check to see if anything interesting is happening. I’d like to think I’m in control of my phone usage, but I’m not sure that’s entirely true. But I do know that my behaviour would be a lot, lot worse if notifications were enabled.

I was a bit horrified when Apple decided to port this notification model to the desktop. There doesn’t seem to be any way of removing the “notification tray” altogether, but I can at least go into System Preferences and make sure that absolutely nothing is allowed to pop up an alert while I’m trying to accomplish some other task.

It’s the same on iOS—you can control notifications from Settings—but there’s an added layer within the apps themselves. If you have notifications disabled, the apps encourage you to enable them. That’s fine …at first. Being told that I could and should enable notifications is a perfectly reasonable part of the onboarding process. But with some apps I’m told that I should enable notifications Every. Single. Time.

Instagram Swarm

Of the apps I use, Instagram and Swarm are the worst offenders (I don’t have Facebook or Snapchat installed so I don’t know whether they’re as pushy). This behaviour seems to have worsened recently. The needling has been dialled up in recent updates to the apps. It doesn’t matter how often I dismiss the dialogue, it reappears the next time I open the app.

Initially I thought this might be a bug. I’ve submitted bug reports to Instagram and Swarm, but I’m starting to think that they see my bug as their feature.

In the grand scheme of things, it’s not a big deal, but I would appreciate some respect for my deliberate choice. It gets pretty wearying over the long haul. To use a completely inappropriate analogy, it’s like a recovering alcoholic constantly having to rebuff “friends” asking if they’re absolutely sure they don’t want a drink.

I don’t think there’s malice at work here. I think it’s just that I’m an edge-case scenario. They’ve thought about the situation where someone doesn’t have notifications enabled, and they’ve come up with a reasonable solution: encourage that person to enable notifications. After all, who wouldn’t want notifications? That question, if it’s asked at all, is only asked rhetorically.

I’m trying to do the healthy thing here (or at least the healthier thing) in being mindful of my app usage. They sure aren’t making it easy.

The model that web browsers use for notifications seems quite sensible in comparison. If you arrive on a site that asks for permission to send you notifications (without even taking you out to dinner first) then you have three options: allow, block, or dismiss. If you choose “block”, that site will never be able to ask that browser for permission to enable notifications. Ever. (Oh, how I wish I could apply that browser functionality to all those sites asking me to sign up for their newsletter!)

That must seem like the stuff of nightmares for growth-hacking disruptive startups looking to make their graphs go up and to the right, but it’s a wonderful example of truly user-centred design. In that situation, the browser truly feels like a user agent.
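
In code, that three-way choice is plain to see—a sketch using the Notifications API:

    // Ask the browser for permission to show notifications.
    Notification.requestPermission()
    .then(function (permission) {
        // permission is "granted", "denied" (blocked),
        // or "default" (the user dismissed the prompt).
        if (permission === 'granted') {
            new Notification('Hello from a user agent that asked nicely.');
        }
    });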

Акула

Myself and Jessica were on our way over to Ireland for a few days to visit my mother. It’s a straightforward combination of three modes of transport: a car to Brighton train station; a train to Gatwick airport; a plane to Cork.

We got in the taxi to start the transport relay. “Going anywhere nice?” asked the taxi driver. “Ireland”, I said. He mentioned that he had recently come back from a trip to Crete. “Lovely place”, he said. “Great food.” That led to a discussion of travel destinations, food, and exchange rates. The usual taxi banter. We mentioned that we were in Iceland recently, where the exchange rate was eye-watering. “Iceland?”, he said, “Did you see the Northern Lights?” We hadn’t, but we mentioned some friends of ours who travelled to Sweden recently just to see the Aurorae. That led to a discussion of the weirdness of the midnight sun. “Yeah”, he said, “I was in the Barents Sea once and it was like broad daylight in the middle of the night.” We mentioned being in Alaska in Summer, and how odd the daylight at night was, but now my mind was preoccupied. As soon as there was a lull in the conversation I asked “So …what brought you to the Barents Sea?”

He paused. Then said, “You wouldn’t believe me if I told you.”

Then he told us.

“We were on a secret mission. It was the ’80s, the Cold War. The Russians had a new submarine, the Typhoon. Massive, it was. Bigger than anything the Americans had. We were there with the Americans. They had a new camera that could see through smoke and cloud. The Russians wouldn’t know we were filming them. I was on a support ship. But one time, at four in the morning, the Russians shot at us—warning shots across the bow. I remember waking up and it was still so light, and there were these explosions of water right by the ship.”

“Wow!” was all I could say.

“It was so secret, that mission”, he said, “that if you didn’t go on it, you’d have to spend the duration in prison.”

By this time we had reached the station. “Do you believe me?” he asked us. “Yes”, we said. We paid him, and thanked him. Then I added, “And thanks for the story.”

Singapore

I was in Singapore last week. It was most relaxing. Sure, it’s Disneyland With The Death Penalty but the food is wonderful.

chicken rice, fishball noodles, laksa, grilled pork

But I wasn’t just there to sample the delights of the hawker centres. I had been invited by Mozilla to join them on the opening leg of their Developer Roadshow. We assembled in the PayPal offices one evening for a rapid-fire round of talks on emerging technologies.

We got an introduction to Quantum, the new rendering engine in Firefox. It’s looking good. And fast. Oh, and we finally get support for input type="date".

But this wasn’t a product pitch. Most of the talks were by non-Mozillians working on the cutting edge of technologies. I kicked things off with a slimmed-down version of my talk on evaluating technology. Then we heard from experts in everything from CSS to VR.

The highlight for me was meeting Hui Jing and watching her presentation on CSS layout. It was fantastic! Entertaining and informative, it was presented with gusto. I think it got everyone in the room very excited about CSS Grid.

The Singapore stop was the only one I was able to make, but Hui Jing has been chronicling the whole trip. Sounds like quite a whirlwind tour. I’m so glad I was able to join in even for a portion. Thanks to Sandra and Ali for inviting me along—much appreciated.

I’ll also be speaking at Mozilla’s View Source in London in a few weeks, where I’ll be talking about building blocks of the Indie Web:

In these times of centralised services like Facebook, Twitter, and Medium, having your own website is downright disruptive. If you care about the longevity of your online presence, independent publishing is the way to go. But how can you get all the benefits of those third-party services while still owning your own data? By using the building blocks of the Indie Web, that’s how!

‘Twould be lovely to see you there.

Sonic sparklines

I’ve seen some lovely examples of the Web Audio API recently.

At the Material conference, Halldór Eldjárn demoed his Poco Apollo project. It generates music on the fly in the browser to match a random image from NASA’s Apollo archive on Flickr. Brian Eno, eat your heart out!

At Codebar Brighton a little while back, local developer Luke Twyman demoed some of his audio-visual work, including the gorgeous Solarbeat—an audio orrery.

The latest issue of the Clearleft newsletter has some links on sound design in interfaces:

I saw Ruth give a fantastic talk on the Web Audio API at CSS Day this year. It had just the right mixture of code and inspiration. I decided there and then that I’d have to find some opportunity to play around with web audio.

As ever, my own website is the perfect playground. I added an audio Easter egg to adactio.com a while back, and so far, no one has noticed. That’s good. It’s a very, very silly use of sound.

In her talk, Ruth emphasised that the Web Audio API is basically just about dealing with numbers. Lots of the examples of nice usage are the audio equivalent of data visualisation. Data sonification, if you will.

I’ve got little bits of dataviz on my website: sparklines. Each one is a self-contained SVG file. I added a script element to the SVG with a little bit of JavaScript that converts numbers into sound (I kind of wish that the script were scoped to the containing SVG but that’s not the way JavaScript in SVG works—it’s no different to putting a script element directly in the body). Clicking on the sparkline triggers the sound-playing function.
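
Here’s the gist of the number-to-sound conversion (the function name and the frequency mapping are my own illustrative choices, not necessarily what’s in the actual SVG):

    // Play an array of values as a sequence of tones.
    function playSparkline(values) {
        var audioContext = new AudioContext();
        var oscillator = audioContext.createOscillator();
        oscillator.connect(audioContext.destination);
        // Include 1 in the max to avoid dividing by zero.
        var max = Math.max.apply(null, values.concat(1));
        var noteLength = 0.1; // seconds per data point
        values.forEach(function (value, index) {
            // Map each value to a frequency between 100Hz and 900Hz.
            var frequency = 100 + (value / max) * 800;
            oscillator.frequency.setValueAtTime(frequency, audioContext.currentTime + index * noteLength);
        });
        oscillator.start();
        oscillator.stop(audioContext.currentTime + values.length * noteLength);
    }
    // A month’s worth of tag activity, turned into sound:
    playSparkline([3, 5, 2, 8, 13, 7, 4]);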

It sounds terrible. It’s like a theremin with hiccups.

Still, I kind of like it. I mean, I wish it sounded nicer (and I’m open to suggestions on how to achieve that—feel free to fork the code), but there’s something endearing about hearing a month’s worth of activity turned into a wobbling wave of sound. And it’s kind of fun to hear how a particular tag is used more frequently over time.

Anyway, it’s just a silly little thing, but anywhere you spot a sparkline on my site, you can tap it to hear it translated into sound.

Problem space

Adam Wathan wrote an article recently called CSS Utility Classes and “Separation of Concerns”. In it, he documents his journey through different ways of thinking about CSS. A lot of it is really familiar.

Phase 1: “Semantic” CSS

Ah, yes! If you’ve been in the game for a while then this will be familiar to you. The days when we used to strive to keep our class names to a minimum and use names that described the content. But, as Adam points out:

My markup wasn’t concerned with styling decisions, but my CSS was very concerned with my markup structure.

Phase 2: Decoupling styles from structure

This is the work pioneered by Nicole with OOCSS, and followed later by methodologies like BEM and SMACSS.

This felt like a huge improvement to me. My markup was still “semantic” and didn’t contain any styling decisions, and now my CSS felt decoupled from my markup structure, with the added bonus of avoiding unnecessary selector specificity.

Amen!

But then Adam talks about the issues when you have two visually similar components that are semantically very different. He shows a few possible solutions and asks this excellent question:

For the project you’re working on, what would be more valuable: restyleable HTML, or reusable CSS?

For many projects reusable CSS is the goal. But not all projects. On the Code For America project, the HTML needed to be as clean as possible, even if that meant more brittle CSS.

Phase 3: Content-agnostic CSS components

Naming things is hard:

The more a component does, or the more specific a component is, the harder it is to reuse.

Adam offers some good advice on naming things for maximum reusability. It’s all good stuff, and this would be the point at which I would stop. At this point there’s a nice balance between reusability, readability, and semantic meaning.

But Adam goes further…

Phase 4: Content-agnostic components + utility classes

Okay. The occasional utility class (for alignment and clearing) can be very handy. This is definitely the point to stop though, right?

Phase 5: Utility-first CSS

Oh God, no!

Once this clicked for me, it wasn’t long before I had built out a whole suite of utility classes for common visual tweaks I needed, things like:

  • Text sizes, colors, and weights
  • Border colors, widths, and positions
  • Background colors
  • Flexbox utilities
  • Padding and margin helpers

If one drink feels good, then ten drinks must be better, right?
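
To make the comparison concrete, utility-first CSS ends up looking something like this (class names invented for illustration):

    /* One class per visual tweak… */
    .text-sm   { font-size: 0.875em; }
    .font-bold { font-weight: bold; }
    .text-dark { color: #333333; }
    .bg-light  { background-color: #f5f5f5; }
    .p-2       { padding: 0.5em; }
    .flex      { display: flex; }

    /* …composed directly in the markup:
       <div class="flex p-2 bg-light">
           <h2 class="text-sm font-bold text-dark">A card title</h2>
       </div>
    */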

At this point there is no benefit to even having an external stylesheet. You may as well use inline styles. Ah, but Adam has anticipated this and counters with this difference between inline styles and having utility classes for everything:

You can’t just pick any value you want; you have to choose from a curated list.

Right. But that isn’t a technical solution, it’s a cultural one. You could just as easily have a curated list of allowed inline style properties and values. If you are in an environment where people won’t simply create a new utility class every time they want to style something, then you are also in an environment where people won’t create new inline style combinations every time they want to style something.

I think Adam has hit on something important here, but it’s not about utility classes. His suggestion of “utility-first CSS” will only work if the vocabulary is strictly adhered to. For that to work, everyone touching the code needs to understand the system and respect the boundaries of it. That understanding and respect is far, far more important than any particular way of structuring HTML and CSS. No technical solution can replace that sort of agreement …not even slapping !important on every declaration to make them immutable.

I very much appreciate the efforts that people have put into coming up with great naming systems and methodologies, even the ones I don’t necessarily agree with. They’re all aiming to make that overlap of HTML and CSS less painful. But the really hard problem is where people overlap.

60 seconds over Idaho

I lived in Germany for the latter half of the nineties. On August 11th, 1999, parts of Germany were in the path of a total eclipse of the sun. Freiburg—the town where I was living—wasn’t in the path, so Jessica and I travelled north with some friends to Karlsruhe.

The weather wasn’t great. There was quite a bit of cloud coverage, but at the moment of totality, the clouds had thinned out enough for us to experience the incredible sight of a black sun.

(The experience was only slightly marred by the nearby idiot who took a picture with the flash on right before totality. Had my eyesight not adjusted in time, he would still be carrying that camera around with him in an anatomically uncomfortable place.)

Eighteen years and ten days later, Jessica and I climbed up a hill to see our second total eclipse of the sun. The hill is in Sun Valley, Idaho.

Here comes the sun.

Travelling thousands of miles just to witness something that lasts for a minute might seem disproportionate, but if you’ve ever been in the path of totality, you’ll know what an awe-inspiring sight it is (if you’ve only seen a partial eclipse, trust me—there’s no comparison). There’s a primitive part of your brain screaming at you that something is horribly, horribly wrong with the world, while another part of your brain is simply stunned and amazed. Then there’s the logical part of your brain which is trying to grasp the incredible good fortune of this cosmic coincidence—that the sun is 400 times bigger than the moon and also happens to be 400 times the distance away.
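
The back-of-the-envelope arithmetic really does line up (the moon’s distance varies a bit, which is why some eclipses are annular rather than total):

    sun diameter ÷ moon diameter ≈ 1,392,000 km ÷ 3,474 km ≈ 400
    sun distance ÷ moon distance ≈ 149,600,000 km ÷ 384,400 km ≈ 389 (on average)

Apparent size is roughly diameter divided by distance, so both discs subtend about the same half-degree of sky.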

This time viewing conditions were ideal. Not a cloud in the sky. It was beautiful. We even got a diamond ring.

I like to think I can be fairly articulate, but at the moment of totality all I could say was “Oh! Wow! Oh! Holy shit! Woah!”

Totality

Our two eclipses were separated by eighteen years, but they’re connected. The Saros 145 cycle has been repeating since 1639 and will continue until 3009, although the number of total eclipses only runs from 1927 to 2648.

Eighteen years and eleven days ago, we saw the eclipse in Germany. Yesterday we saw the eclipse in Idaho. In eighteen years and eleven days’ time, we plan to be in Japan or China.

Material 2017

I’m in Iceland. Everything you’ve heard is true. It’s a beautiful fascinating place, and I had a wonderful day of exploration yesterday.

But I didn’t come to the land of ice and snow—of the midnight sun where the hot springs blow—just to take in the scenery. I’m also here for the Material conference, which just wrapped up. It was very small, and very, very good.

Reading the description of the event, it would definitely be a tough sell trying to get your boss to send you to this. And yet I found it to be one of the most stimulating conferences I’ve attended in a while. It featured talks about wool, about art, about psychology, about sound, about meditation, about photography, about storytelling, and yes, about the web.

That sounds like a crazy mix of topics, but what was really crazy was the way it all slotted together. Brian weaved together a narrative throughout the day, drawing together strands from all of the talks and injecting his own little provocations into the mix too. Is the web like sound? Is the web like litmus paper? Is the web like the nervous system of a blue whale? (you kinda had to be there)

I know it’s a cliché to talk about a conference as being inspirational, but I found myself genuinely inspired by what I heard today. I don’t mean inspired in the self-help feel-good kind of way; I mean the talks inspired thoughts, ideas, and questions.

I think the small-scale intimacy of the event really added something. There were about fifty of us in attendance, and we all ate lunch together, which added to the coziness. I felt some of the same vibe that Brooklyn Beta and Reboot used to generate—a place for people to come together that isn’t directly connected to day-to-day work, but not entirely disconnected either; an adjacent space where seemingly unconnected disciplines get threaded together.

If this event happens again next year, I’ll be back.

Intolerable

I really should know better than to 386 myself, but this manifesto from a (former) Googler has me furious.

Oh, first of all, let me just get past any inevitable whinging that I’m not bothering to refute the bullshit contained therein. In the spirit of Brandolini’s law, here are some thorough debunkings:

Okay, with that out of the way, let me get to what really grinds my gears about this.

First off, there’s the contents of the document itself. It is reprehensible. It sets out to prove a biological link between a person’s gender and their ability to work at Google. It fails miserably, as shown in the links above, but it is cleverly presented as though it were an impartial scientific evaluation (I’m sure it’s complete coincidence that the author just happens to be a man). It begins by categorically stating that the author is all for diversity. This turns out to be as accurate as when someone starts a sentence with “I’m not a racist, but…”

The whole thing is couched in scientism that gives it a veneer of respectability. That leads me to the second thing I’m upset about, and that’s the reaction to the document.

Y’know, it’s one thing when someone’s clearly a troll. It’s easy—and sensible—to dismiss their utterances and move on. But when you see seemingly-smart people linking to the manifestbro and saying “he kind of has a point”, it’s way more infuriating. If you are one of those people (and when I say people, I mean men), you should know that you have been played.

The memo is clearly not a screed. It is calm, clear, polite, and appears perfectly reasonable. “Look,” it says, “I’m just interested in the objective facts here. I’m being reasonable, and if you’re a reasonable person, then you will give this a fair hearing.”

That’s a very appealing position. What reasonable person would reject it? And so, plenty of men who consider themselves to be reasonable and objective are linking to the document and saying it deserves consideration. Strangely, those same men aren’t considering the equally reasonable rebuttals (linked to above). That’s confirmation bias.

See? I can use terms like that to try to make myself sound smart too. Mind you, confirmation bias is not the worst logical fallacy in the memo. That would be the Texas sharpshooter fallacy (which, admittedly, is somewhat related to confirmation bias). And, yes, I know that by even pointing out the logical fallacies, I run the risk of committing the fallacy fallacy. The memo is reprehensible not for the fallacies it contains, but for the viewpoint it sets out to legitimise.

The author cleverly wraps a disgusting viewpoint in layers of reasonable-sounding arguments. “Can’t we have a reasonable discussion about this? Like reasonable people? Shouldn’t we tolerate other points of view?” Those are perfectly sensible questions to ask if the discussion is about tabs vs. spaces or Star Wars vs. Star Trek. But those questions cease to be neutral if the topic under discussion is whether some human beings are genetically unsuited to coding.

This is how we get to a situation where men who don’t consider themselves to be sexist in any way—who consider themselves to be good people—end up posting about the Google memo in their workplace Slack channels as though it were a topic worthy of debate. It. Is. Not.

“A-ha!” cry the oh-so-logical and thoroughly impartial men, “If a topic cannot even be debated, you must be threatened by the truth!”

That is one possible conclusion, yes. Or—and this is what Occam’s razor would suggest—it might just be that I’m fucking sick of this. Sick to my stomach. I am done. I am done with even trying to reason with people who think that they’re the victimised guardians of truth and reason when they’re actually just threatened by the thought of a world that doesn’t give them special treatment.

I refuse to debate this. Does that make me inflexible? Yep, sure does. But, y’know, not everything is worthy of debate. When the very premise of the discussion is harmful, all appeals to impartiality ring hollow.

If you read the ex-Googler’s memo and thought “seems reasonable to me”, I hope you can see how you have been played like a violin. Your most virtuous traits—being even-handed and open-minded—have been used against you. I hope that you will try to use those same traits to redress what has been done. If you read through the rebuttals linked to above and still think that the original memo was reasonable, I fear the damage is quite deep.

It may seem odd that a document that appears to be so reasonable is proving to be so very divisive. But it’s that very appearance of impartiality that gives it its power. It is like an optical illusion for the mind. Some people—like me—read it and think, “this is clearly wrong and harmful.” Other people—who would never self-identify as sexist in any way—read it and think, “seems legit.”

I’m almost—almost—glad that it was written. It’s bringing a lot of buried biases into the light.

By the way, if you are one of those people who still thinks that the memo was “perfectly reasonable” or “made some good points”, and we know each other, please get in touch so that I can re-evaluate our relationship.

The saddest part about all of this is that there are men being incredibly hurtful and cruel to the women they work with, without even realising what they’re doing. They may even think they are actively doing good.

Take this tweet to Jen which was no doubt intended as a confidence boost:

See how it is glibly passed off as though it were some slight disagreement, like which flavour of ice cream is best? “Well, we’ll agree to disagree about half the population being biologically unsuitable for this kind of work.” And then that’s followed by what is genuinely—in good faith—intended as a compliment. But the juxtaposition of the two results in the message “Hey, you’re really good …for a woman.”

That’s what I find so teeth-grindingly frustrating about all this. I don’t think that guy is a troll. If he were, I could just block and move on. He genuinely thinks he’s a good person who cares about objective truth. He has been played.

A nasty comment from a troll is bad. It’s hurtful in a blunt, shocking way. But there’s a different kind of hurt that comes from a casual, offhand, even well-meaning comment that’s cruel in a more deep-rooted way.

This casual cruelty. This insidious, creeping, never-ending miasma of sexism. It is well and truly intolerable.

This is not up for debate.

Seamfulness

I was listening to some items in my Huffduffer feed when I noticed a little bit of synchronicity.

First of all, I was listening to Tom talking about Thington, and he mentioned seamful design—the idea that “seamlessness” is not necessarily a desirable quality. I think that’s certainly true in the world of connected devices.

Then I listened to Jeff interviewing Matt about hardware startups. They didn’t mention seamful design specifically (it was more all cricket and cables), but again, I think it’s a topic that’s lurking behind any discussion of the internet of things.

I’ve written about seams before. I really feel there’s value—and empowerment—in exposing the points of connection in a system. When designers attempt to airbrush those seams away, I worry that they are moving from “Don’t make me think” to “Don’t allow me to think”.

In many ways, aiming for seamlessness in design feels like the easy way out. It’s a surface-level approach that literally glosses over any deeper problems. I think it might be driven by an underlying assumption that seams are, by definition, ugly. Certainly there are plenty of daily experiences where the seams are noticeable and frustrating. But I don’t think it needs to be this way. The real design challenge is to make those seams beautiful.

Putting on a conference

It’s been a few weeks now since Patterns Day and I’m still buzzing from it. I might be biased, but I think it was a great success all ‘round—for attendees, for speakers, and for us at Clearleft organising the event.

I first had the idea for Patterns Day quite a while back. To turn the idea into reality meant running some numbers. Patterns Day wouldn’t have been possible without Alis. She did all the logistical work—the hard stuff—which freed me up to concentrate on the line-up. I started to think about who I could invite to speak, and at the same time, started looking for a venue.

I knew from the start that I wanted it to be one-day single-track conference in Brighton, much like Responsive Day Out. I knew I wouldn’t be able to use the Corn Exchange again—there’s extensive rebuilding going on there this year. I put together a shortlist of Brighton venues and Alis investigated their capacities and costs, but to be honest, I knew that I wanted to have it in the Duke Of York’s. I love that place, and I knew from attending FFconf that it makes for an excellent conference venue.

The seating capacity of the Duke Of York’s is quite a bit less than the Corn Exchange, so I knew the ticket price would have to be higher than that of Responsive Day Out. The Duke Of York’s isn’t cheap to rent for the day either (but worth every penny).

To calculate the ticket price, I had to figure out the overall costs:

  • Venue hire,
  • A/V hire,
  • Printing costs (for name badges, or in this case, stickers),
  • Payment provider commission—we use Stripe through the excellent Ti.to,
  • Speakers’ travel,
  • Speakers’ accommodation,
  • Speakers’ dinner the evening before the event,
  • Speakers’ payment.
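
Plugging made-up numbers into that list shows the shape of the arithmetic (every figure here is invented purely for illustration—this is emphatically not the real Patterns Day budget):

    // Every figure invented for illustration—not the real budget.
    var venueHire = 5000;
    var avHire = 1500;
    var printing = 200;
    var perSpeaker = 1000; // payment + travel + hotel + dinner, per speaker
    var speakers = 8;
    var commission = 0.029; // assumed payment-provider rate
    var totalCosts = venueHire + avHire + printing + (speakers * perSpeaker); // 14700
    var seats = 280; // assumed venue capacity
    var ticketPrice = (totalCosts / seats) / (1 - commission);
    console.log(ticketPrice.toFixed(2)); // ≈ 54.07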

Some conference organisers think they can skimp on that last item—the speakers’ payment. Those conference organisers are wrong. A conference is nothing without its speakers. They are literally the reason why people buy tickets.

Because the speakers make or break a conference, there’s a real temptation to play it safe and only book people who are veterans. But then you’re missing out on a chance to boost someone when they’re just starting out with public speaking. I remember taking a chance on Alla a few years back for Responsive Day Out 3—she had never given a conference talk before. She, of course, gave a superb talk. Now she’s speaking at events all over the world, and I have to admit, it gives me a warm glow inside. When it came time for Patterns Day, Alla had migrated into the “safe bet” category—I knew she’d deliver the perfect closing keynote.

I understand why conference organisers feel like they need to play it safe. From their perspective, they’re already taking on a lot of risk in putting on a conference in the first place. It’s easy to think of yourself as being in a position of vulnerability—“If I don’t sell enough tickets, I’m screwed!” But I think it’s important to realise that you’re also in a position of power, whether you like it or not. If you’re in charge of putting together the line-up of a conference, that’s a big responsibility, not just to the attendees on the day, but to the community as a whole. It’s like that quote by Eliel Saarinen:

Always design a thing by considering it in its next larger context. A chair in a room, a room in a house, a house in an environment, an environment in a city plan.

Part of that responsibility to the wider community is representation. That’s why I fundamentally disagree with ppk when he says:

The other view would be that there should be 50% woman speakers. Although that sounds great I personally never believed in this argument. It’s based on the general population instead of the population of web developers, and if we’d extend that argument to its logical conclusion then 99.9% of the web development conference speakers should know nothing about web development, since that’s the rough ratio in the general population.

That makes it sound like a conference’s job is to represent the status quo. By that logic, the line-up should include plenty of bad speakers—after all, the majority of web developers aren’t necessarily good speakers. But of course that’s not how conferences work. They don’t represent typical ideas—quite the opposite. What’s the point of having an event that simply reinforces the general consensus? This isn’t Harrison Bergeron. You want a line-up that’s exceptional.

I don’t think conference organisers can shirk this issue and say “It’s out of my hands; I’m just reflecting the way things are.” The whole point of having a conference in the first place is to trigger some kind of change. If you’re not happy with the current make-up of the web community (and I most definitely am not), then a conference is the perfect opportunity to try to demonstrate an alternative. We do it with the subject matter of the talks—”Our code/process/tooling doesn’t have to be this way!”—and I think we should also apply that to the wider context: “Our culture doesn’t have to be this way!”

Passing up that chance isn’t just a missed opportunity, I think it’s also an abdication of responsibility. Believe me, I know that organising a conference is a lot of work, but that’s not a reason to cop out. On the contrary, it’s all the more reason to step up to the plate and try your damnedest to make a difference. Otherwise, why even have a conference?

Whenever the issue of diversity at conferences comes up, there is inevitably someone who says “All I care about is having the best speakers.” But if that were true, shouldn’t your conference (and every other conference) have exactly the same line-up every year?

The truth is that there are all sorts of factors that play into the choice of speakers. I think representation should be a factor, but that’s all it is—one factor of many. Is the subject matter relevant? That’s a factor. Do we already have someone on the line-up covering similar subject matter? That’s a factor. How much will it cost to get this speaker? That’s a factor. Is the speaker travelling from very far away? That’s a factor.

In the case of Patterns Day, I had to factor in the range of topics. I wanted a mixture of big-picture talks as well as hands-on nitty-gritty case studies. I also didn’t want it to be too developer-focused or too design-focused. I was aiming for a good mix of both.

In the end, I must admit that I am guilty of doing exactly what I’ve been railing against. I played it safe. I put together a line-up of speakers that I wanted to see, and that I knew with absolute certainty would deliver great presentations. There were plenty of potential issues for me to get stressed about in the run-up to the event, but the quality of the talks wasn’t one of them. On the one hand, I wish I had taken more chances with the line-up, but honestly, if I could do it over again, I wouldn’t change a thing.

Because I was trying to keep the ticket price as low as possible—and the venue hire was already a significant cost—I set myself the constraint of only having speakers from within the UK (Jina was the exception—she was going to come anyway as an attendee, so of course I asked her to speak). Knowing that the speakers’ travel costs would be low, I could plug the numbers into an algebraic formula for figuring out the ticket price:

costs ÷ seats = price

Add up all the costs and divide that total by the number of available seats to get the minimum ticket price.

In practice, you probably don’t want to have to sell absolutely every single ticket just to break even, so you set the price for a sales figure lower than 100%—maybe 80%, or 50% if you’re out to make a tidy profit (although if you’re out to make a tidy profit, I don’t think conferences are the right business to be in—ask any conference organiser).
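Expressed as code, the whole calculation fits in a few lines. Here’s a rough sketch in TypeScript—the cost breakdown and every number in it are placeholders for illustration, not the actual Patterns Day figures:

```typescript
// A sketch of the pricing formula: costs ÷ seats = price, adjusted
// for the proportion of tickets you expect to sell. All figures are
// hypothetical. (Payment provider commission is a percentage of the
// ticket price itself, so it's left out here for simplicity.)

interface Costs {
  venueHire: number;
  avHire: number;
  printing: number;
  speakers: number; // travel + accommodation + dinner + payment
}

function minimumTicketPrice(
  costs: Costs,
  seats: number,
  expectedSales = 0.8 // aim to break even at 80% of tickets sold
): number {
  const total = costs.venueHire + costs.avHire + costs.printing + costs.speakers;
  return total / (seats * expectedSales);
}

// e.g. £15,000 of costs spread across 210 tickets:
const price = minimumTicketPrice(
  { venueHire: 6000, avHire: 2000, printing: 500, speakers: 6500 },
  210
);
console.log(price.toFixed(2)); // ≈ 89.29
```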

Some conferences factor in money for sponsorship to make the event happen. I prefer to have sponsors literally sponsoring additions to the conference. In the case of Patterns Day, the coffee and pastries were sponsored by Deliveroo, and the videos were sponsored by Amazon. But sponsorship didn’t affect the pricing formula.

The Duke Of York’s has around 280 seats. I factored in about 30 seats for speakers, Clearlefties, and other staff. That left 250 seats available for attendees. But that’s not the number I plugged into the pricing formula. Instead, I chose to put 210 tickets on sale and figured out the ticket price accordingly.

What happened to the remaining 40 seats? The majority of them went to Codebar students and organisers. So if you bought a ticket for Patterns Day, you directly subsidised the opportunity for people under-represented in technology to attend. Thank you.

Speaking personally, I found that having the Codebar crew in attendance really made my day. They’re my heroes, and it meant the world to me that they were able to be there.

Photos from Patterns Day: Zara, Alice, and Amber; Anwen, Zara, Alice, Dot, and Amber; Eden, Zara, Alice, and Chloe.

Container queries

Every single browser maker has the same stance when it comes to features—they want to hear from developers at the coalface.

“Tell us what you want! We’re listening. We want to know which features to prioritise based on real-world feedback from developers like you.”

“How about container quer—”

“Not that.”

I don’t think it’s an exaggeration to say that literally every web developer I know would love to have container queries. If you’ve worked on any responsive project of any size, you’re bound to have bumped up against the problem of only being able to respond to viewport size, rather than the size of the containing element. Without container queries, our design systems can never be truly modular.

But there’s a divide growing between what our responsive designs need to do, and the tools CSS gives us to meet those needs. We’re making design decisions at smaller and smaller levels, but our code asks us to bind those decisions to a larger, often-irrelevant abstraction of a “page.”

But the message from browser makers has consistently been “it’s simply too hard.”

At the Frontend United conference in Athens a little while back, Jonathan gave a whole talk on the need for container queries. At the same event, Serg gave a talk on Houdini.

Now, as I understand it, Houdini is the CSS arm of the extensible web. Just as web components will allow us to create powerful new HTML without lobbying browser makers, Houdini will allow us to create powerful new CSS features without going cap-in-hand to standards bodies.

At this year’s CSS Day there were two Houdini talks. Tab gave a deep dive, and Philip talked specifically about Houdini as a breakthrough for polyfilling.

During the talks, you could send questions over Twitter that the speaker could be quizzed on afterwards. As Philip was talking, I began to tap out a question: “Could this be used to polyfill container queries?” My thumb was hovering over the tweet button at the very moment that Philip said in his talk, “This could be used to polyfill container queries.”

For that to happen, browsers need to implement the layout API for Houdini. But I’m betting that browser makers will be far more receptive to calls to implement the layout API than calls for container queries directly.

Once we have that, there are two possible outcomes:

  1. We try to polyfill container queries and find out that the browser makers were right—it’s simply too hard. This certainty is itself a useful outcome.
  2. We successfully polyfill container queries, and then instead of asking browser makers to figure out implementation, we can hand it to them for standardisation.

But, as Eric Portis points out in his talk on container queries, Houdini is still a ways off (by the way, browser makers, that’s two different conference talks I’ve mentioned about container queries, just in case you were keeping track of how much developers want this).

Still, there are some CSS features that are Houdini-like in their extensibility. Custom properties feel like they could be wrangled to help with the container query problem. While it’s easy to think of custom properties as being like Sass variables, they’re much more powerful than that—the fact that they can act as a real-time bridge between JavaScript and CSS makes them scriptable. Alas, custom properties can’t be used in media queries, but maybe some clever person can figure out a way to get the effect of container queries without a query-like syntax.
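To show the kind of wrangling I mean, here’s a hypothetical sketch in TypeScript. It watches an element’s width with a ResizeObserver, writes it into a custom property, and mirrors it into a data attribute that a stylesheet can select on. The class name, property name, and breakpoints are all made up for illustration:

```typescript
// Fake per-component breakpoints: these apply to the element's own
// width, not the viewport's. The values are arbitrary examples.
const BREAKPOINTS = [400, 600];

function watchContainer(el: HTMLElement): void {
  new ResizeObserver((entries) => {
    for (const entry of entries) {
      const width = entry.contentRect.width;
      // Expose the live width to CSS as a custom property…
      el.style.setProperty('--container-width', `${width}px`);
      // …and mirror it into a data attribute, because custom
      // properties can't be used in media (or any other) queries.
      el.dataset.size = String(BREAKPOINTS.filter((bp) => width >= bp).length);
    }
  }).observe(el);
}

document.querySelectorAll<HTMLElement>('.component').forEach(watchContainer);
```

A stylesheet could then target `.component[data-size="2"]` differently from `.component[data-size="0"]`—the effect, if not the elegance, of a real container query.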

However it happens, I’d just love to see some movement on container queries. I’m not alone.

I know container queries would revolutionize my design practice, and better prepare responsive design for mobile, desktop, tablet—and whatever’s coming next.