Thursday, June 11th, 2020

CSS custom properties and the cascade

When I wrote about programming CSS to perform Sass colour functions I said this about the brilliant Lea Verou:

As so often happens when I’m reading something written by Lea—or seeing her give a talk—light bulbs started popping over my head (my usual response to Lea’s knowledge bombs is either “I didn’t know you could do that!” or “I never thought of doing that!”).

Well, it happened again. This time I was reading her post about hybrid positioning with CSS variables and max(). But the main topic of the post wasn’t the part that made me go “Huh! I never knew that!”. Towards the end of her article she explained something about the way that browsers evaluate CSS custom properties:

The browser doesn’t know if your property value is valid until the variable is resolved, and by then it has already processed the cascade and has thrown away any potential fallbacks.

I’m used to being able to rely on the cascade. Let’s say I’m going to set a background colour on paragraphs:

p {
  background-color: red;
  background-color: color(display-p3 1 0 0);
}

First I’ve set a background colour using a good ol’ fashioned keyword, supported in browsers since day one. Then I declare the background colour using the new-fangled color() function which is supported in very few browsers. That’s okay though. I can confidently rely on the cascade to fall back to the earlier declaration. Paragraphs will still have a red background colour.

But if I store the background colour in a custom property, I can no longer rely on the cascade.

:root {
  --myvariable: color(display-p3 1 0 0);
}
p {
  background-color: red;
  background-color: var(--myvariable);
}

All I’ve done is swapped out the hard-coded color() value for a custom property but now the browser behaves differently. Instead of getting a red background colour, I get the browser default value. As Lea explains:

…it will make the property invalid at computed value time.

The spec says:

When this happens, the computed value of the property is either the property’s inherited value or its initial value depending on whether the property is inherited or not, respectively, as if the property’s value had been specified as the unset keyword.

So if a browser doesn’t understand the color() function, it’s as if I’ve said:

background-color: unset;

This took me by surprise. I’m so used to being able to rely on the cascade in CSS—it’s one of the most powerful and most useful features in this programming language. Could it be, I wondered, that the powers-that-be have violated the principle of least surprise in specifying this behaviour?

But a note in the spec explains further:

Note: The invalid at computed-value time concept exists because variables can’t “fail early” like other syntax errors can, so by the time the user agent realizes a property value is invalid, it’s already thrown away the other cascaded values.

Ah, right! So first of all browsers figure out the cascade and then they evaluate custom properties. If a custom property evaluates to gobbledygook, it’s too late to figure out what the cascade would’ve fallen back to.

Thinking about it, this makes total sense. Remember that CSS custom properties aren’t like Sass variables. They aren’t evaluated once and then set in stone. They’re more like let than const. They can be updated in real time. You can update them from JavaScript too. It’s entirely possible to update CSS custom properties rapidly in response to events like, say, the user scrolling or moving their mouse. If the browser had to recalculate the cascade every time a custom property didn’t evaluate correctly, I imagine it would be an enormous performance bottleneck.

So even though this behaviour surprised me at first, it makes sense on reflection.
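
If you do want to use the fancy colour only where it’s supported, one way around the problem—a sketch of my own, not something from Lea’s post—is to make the decision with feature detection instead of relying on the cascade’s fallback behaviour:

:root {
  /* Fallback value that every browser understands */
  --myvariable: red;
}
@supports (background-color: color(display-p3 1 0 0)) {
  :root {
    /* Only applied if the browser can parse color() */
    --myvariable: color(display-p3 1 0 0);
  }
}
p {
  background-color: var(--myvariable);
}

That way the custom property only ever holds a value the browser understands, so there’s nothing to become invalid at computed-value time.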

I’ve probably done a terrible job explaining the behaviour here, so I’ve made a Codepen. Although that may also do an equally terrible job.

(Thanks to Amber for talking through this with me and encouraging me to blog about it. And thanks to Lea for expanding my mind. Again.)

Saturday, April 4th, 2020

A bit of Blarney

I don’t talk that much on here about my life’s work. Contrary to appearances, my life’s work is not banging on about semantic markup, progressive enhancement, and service workers.

No, my life’s work is connected to Irish traditional music. Not as a musician, I hasten to clarify—while I derive enormous pleasure from playing tunes on my mandolin, that’s more of a release than a vocation.

My real legacy, it turns out, is being the creator and caretaker of The Session, an online community and archive dedicated to Irish traditional music. I might occasionally mention it here, but only when it’s related to performance, accessibility, or some other front-end aspect. I’ve never really talked about the history, meaning, and purpose of The Session.

Well, if you’re at all interested in that side of my life, you can now listen to me blather on about it for over an hour, thanks to the Blarney Pilgrims podcast.

I’ve been huffduffing episodes of this podcast for quite a while now. It’s really quite excellent. If you’re at all interested in Irish traditional music, the interviews with the likes of Kevin Burke, John Carty, Liz Carroll and Catherine McEvoy are hard to beat.

So imagine my surprise when they contacted me to ask me to chat and play some tunes! It really was an honour.

I was also a bit of a guinea pig. Normally they’d record these kinds of intimate interviews face to face, but what with The Situation and all, my chat was the first remotely recorded episode.

I’ve been on my fair share of podcasts—most recently the Design Systems Podcast—but this one was quite different. Instead of talking about my work on the web, this focussed on what I was doing before the web came along. So if you don’t want to hear me talking about my childhood, give this a miss.

But if you’re interested in hearing me reminisce and discuss the origin and evolution of The Session, have a listen. The chat is interspersed with some badly-played tunes from me on the mandolin, but don’t let that put you off.

Wednesday, December 11th, 2019

Checked in at Plough & Stars. 🎶

Thursday, November 21st, 2019

Rams

I’ve made a few trips to Germany recently. I was in Berlin last week for the always-excellent Beyond Tellerrand. Marc did a terrific job of curating an entertaining and thought-provoking line-up of speakers. He also made sure that those speakers—myself included—were very well taken care of.

I was also in Frankfurt last month. It was for an event, but for once, it wasn’t an event that involved me in any way. Jessica was there for the Frankfurt Book Fair. I was tagging along for the ride.

While Jessica was out at the sprawling exhibition hall on the edge of town, I was exploring downtown Frankfurt. One lunch time, I found myself wandering around the town’s charming indoor market hall.

While I was perusing the sausages on display, I noticed an older gentleman also inspecting the meat wares. He looked familiar. That’s when the part of my brain responsible for facial recognition said “That’s Dieter Rams.” A more rational part of my brain said “It can’t be!”, but it seemed that my pattern matching was indeed correct.

As he began to walk away, the more impulsive part of my brain shouted “Say something!”, and before my calmer nature could intervene, I was opening my mouth to speak.

I think I would’ve been tongue-tied enough introducing myself to someone of Dieter Rams’s legendary stature, but it was compounded by having to do it in a second language.

“Entschuldigen Sie!”, I said (“Excuse me”). “Sind Sie Dieter Rams?” (“Are you Dieter Rams?”)

“Ja, bin ich”, he said (“Yes, I am”).

At this point, my brain realised that it had nothing further planned and it left me to my own devices. I stumbled through a sentence saying something about what a pleasure it was to see him. I might have even said something stupid along the lines of “I’m a web designer!”

Anyway, he smiled politely as I made an idiot of myself, and then I said goodbye, reiterating that it was a real treat for me to meet him.

After I walked outside, I began questioning reality. Did that really just happen? It felt utterly surreal.

Of course afterwards I thought of all the things I could’ve said. L’esprit de l’escalier. Or as the Germans put it, Treppenwitz.

I could’ve told him that I collect design principles, of which his are probably the most well-known.

I could’ve told him about the time that Clearleft went on a field trip to the Design Museum in London to see an exhibition of his work, and how annoyed I was by the signs saying “Do Not Touch” …in front of household objects that were literally designed to be touched!

I could’ve told him how much I enjoyed the documentary that Gary Hustwit made about him.

But I didn’t say any of those things. I just spouted some inanity, like the starstruck fanboy I am.

There’ll be a lunchtime showing of the Rams documentary at An Event Apart in San Francisco, where I’ll be speaking in a few weeks. Now I wonder if rewatching it is just going to make me cringe as I’m reminded of my encounter in Frankfurt.

But I’m still glad I said something.

Monday, November 11th, 2019

Cat encounters

The latest episode of Ariel’s excellent Offworld video series (and podcast) is all about Close Encounters Of The Third Kind.

I have such fondness for this film. It’s one of those films that I love to watch on a Sunday afternoon (though that’s true of so many Spielberg films—Jaws, Raiders Of The Lost Ark, E.T.). I remember seeing it in the cinema—this would’ve been the special edition re-release—and feeling the seat under me quake with the rumbling of the musical exchange during the film’s climax.

Ariel invited Rose Eveleth and Laura Welcher on to discuss the film. They spent a lot of time discussing the depiction of first contact communication—Arrival being the other landmark film on this topic.

This is a timely discussion. There’s a new book by Daniel Oberhaus published by MIT Press called Extraterrestrial Languages:

If we send a message into space, will extraterrestrial beings receive it? Will they understand?

You can read an article by the author on The Guardian, where he mentions some of the wilder ideas about transmitting signals to aliens:

Minsky, widely regarded as the father of AI, suggested it would be best to send a cat as our extraterrestrial delegate.

Don’t worry. Marvin Minsky wasn’t talking about sending a real live cat. Rather, we transmit instructions for building a computer and then we can transmit information as software. Software about, say, cats.

It’s not that far removed from what happened with the Voyager golden record, although that relied on analogue technology—the phonograph—and sent the message pre-compiled on hardware; a much slower transmission rate than radio.

But it’s interesting to me that Minsky specifically mentioned cats. There’s another long-term communication puzzle that has a cat connection.

The Yucca Mountain nuclear waste repository is supposed to store nuclear waste for 10,000 years. How do we warn our descendants to stay away? We can’t use language. We probably can’t even use symbols; they’re too culturally specific. A think tank called the Human Interference Task Force was convened to agree on the message to be conveyed:

This place is a message… and part of a system of messages… pay attention to it! Sending this message was important to us. We considered ourselves to be a powerful culture.

This place is not a place of honor…no highly esteemed deed is commemorated here… nothing valued is here.

What is here is dangerous and repulsive to us. This message is a warning about danger.

A series of thorn-like threatening earthworks was deemed the most feasible solution. But there was another proposal that took a two-pronged approach with genetics and folklore:

  1. Breed cats that change colour in the presence of radioactive material.
  2. Teach children nursery rhymes about staying away from cats that change colour.

This is the raycat solution.

Tuesday, August 20th, 2019

Checked in at American Museum of Natural History. Getting a behind-the-scenes tour — with Jessica

Saturday, August 10th, 2019

Server Timing

Harry wrote a really good article all about the performance measurement Time To First Byte, called Time To First Byte: What It Is and Why It Matters:

While a good TTFB doesn’t necessarily mean you will have a fast website, a bad TTFB almost certainly guarantees a slow one.

Time To First Byte has been the chink in my armour over at thesession.org, especially on the home page. Every time I ran Lighthouse, or some other performance testing tool, I’d get a high score …with some points deducted for taking too long to get that first byte from the server.

Harry’s proposed solution is to set up some Server Timing headers:

With a little bit of extra work spent implementing the Server Timing API, we can begin to measure and surface intricate timings to the front-end, allowing web developers to identify and debug potential bottlenecks previously obscured from view.

I remembered that Drew wrote an excellent article on Smashing Magazine last year called Measuring Performance With Server Timing:

The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.

He even provides some PHP code, which I was able to take wholesale and drop into the codebase for thesession.org. Then I was able to put start/stop points in my code for measuring how long some operations were taking. Then I could output the results of these measurements into Server Timing headers that I could inspect in the “Network” tab of a browser’s dev tools (Chrome is particularly good for displaying Server Timing, so I used that while I was conducting this experiment).
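
For illustration, here’s roughly the shape of the thing—a minimal sketch of my own rather than Drew’s code or the actual code from thesession.org, with a made-up metric name:

<?php
// Sketch only: time an operation, then expose the measurement in a
// Server-Timing response header (sent before any output).
$start = microtime(true);

// …run the database queries you want to measure here…

$duration = (microtime(true) - $start) * 1000; // milliseconds

// Header format: metric-name;desc="Readable label";dur=duration-in-ms
header(sprintf('Server-Timing: db;desc="Database";dur=%.1f', $duration));

Multiple measurements can be combined in a single header, separated by commas, and they show up alongside the other timings for the request in the browser’s dev tools.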

I started with overall database requests. Sure enough, that was where most of the time in time-to-first-byte was being spent.

Then I got more granular. I put start/stop points around specific database calls. By doing this, I was able to zero in on which operations were particularly costly. Once I had done that, I had to figure out how to make the database calls go faster.

Spoiler: I did it by adding an extra index on one particular table. It’s almost always indexes, in my experience, that make the biggest difference to database performance.

I don’t know why it took me so long to get around to messing with Server Timing headers. It has paid off in spades. I wish I had done it sooner.

And now thesession.org is positively zipping along!

Monday, July 8th, 2019

Checked in at Armada Hotel. 🎶

Checked in at Hillery’s. 🎵 — with Jessica

Saturday, June 29th, 2019

Checked in at Royal Pavilion Gardens. with Jessica

Tuesday, May 21st, 2019

Checked in at Community Base. Cooking masterclass with Jamie from Cin Cin — with Jessica

Wednesday, May 15th, 2019

Checked in at Curry. Post-A11yClub currywurst. — with aaronpk, Joschi, Tantek

Saturday, May 11th, 2019

Checked in at sipgate GmbH. Indie Web Camp — with Tantek, aaronpk, Joschi, Rosemary

Friday, February 8th, 2019

Checked in at Dino’s Tomato Pie. Pizza pie! 🍕

Sunday, January 20th, 2019

Dim sum. 🥟

Monday, October 8th, 2018

Checked in at Live Arts. Fireside chat with Margot Lee Shetterly.

Wednesday, July 4th, 2018

Checked in at Jericho Tavern. Getting ready to speak at Oxford Geek Night

Tuesday, June 26th, 2018

Building More Expressive Products by Val Head

It’s day two of An Event Apart in Boston and Val is giving a new talk about building expressive products:

The products we design today must connect with customers across different screen sizes, contexts, and even voice or chat interfaces. As such, we create emotional expressiveness in our products not only through visual design and language choices, but also through design details such as how interface elements move, or the way they sound. By using every tool at our disposal, including audio and animation, we can create more expressive products that feel cohesive across all of today’s diverse media and social contexts. In this session, Val will show how to harness the design details from different media to build overarching themes—themes that persist across all screen sizes and user and interface contexts, creating a bigger emotional impact and connection with your audience.

I’m going to attempt to live blog her talk. Here goes…

This is about products that intentionally express personality. When you know what your product’s personality is, you can line up your design choices to express that personality intentionally (as opposed to leaving it to chance).

TunnelBear has a theme around a giant bear that will protect you from all the bad things on the internet. It makes a technical product very friendly—very different from most VPN companies.

Mailchimp have been doing this for years, but with a monkey (ape, actually, Val), not a bear—Freddie. They’ve evolved and changed it over time, but it always has personality.

But you don’t need a cute animal to express personality. Authentic Weather is a sarcastic weather app. It’s quite sweary and that stands out. They use copy, bold colours, and giant type.

Personality can be more subtle, like with Stripe. They use slick animations and clear, concise design.

Being expressive means conveying personality through design. Type, colour, copy, layout, motion, and sound can all express personality. Val is going to focus on the last two: motion and sound.

Expressing personality with motion

Animation can be used to tell your story. We can do that through:

  • Easing choices (ease-in, ease-out, bounce, etc.),
  • Duration values and offsets,
  • The properties we animate.

Here are four personality types…

Calm, soft, reassuring

You can use opacity, soft blurs, small movements, and easing curves with gradual changes. You can use:

  • fade,
  • scale + fade,
  • blur + fade,
  • blur + scale + fade.

Pro tip for blurs: the end of blurs always looks weird. Fade out with opacity before your blur gets weird.

You can use Penner easing equations to do your easings. See them in action on easings.net, which shows them as motion graphs plotting change against time. The flatter the curve, the more linear the motion. They have a lot more range than the defaults you get with CSS keyword values.

For calm, soft, and reassuring, you could use easeInQuad, easeOutQuad, or easeInOutQuad. But that’s like saying “you could use dark blue.” These will get you close, but you need to work on the detail.
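
As a rough sketch of what that looks like in practice (the class names here are hypothetical, and the cubic-bezier values are the commonly quoted approximation of easeOutQuad from easings.net):

.notification {
  opacity: 0;
  transform: scale(0.96);
  /* easeOutQuad-style curve: a soft, gradual deceleration */
  transition:
    opacity 300ms cubic-bezier(0.25, 0.46, 0.45, 0.94),
    transform 300ms cubic-bezier(0.25, 0.46, 0.45, 0.94);
}

.notification.is-visible {
  opacity: 1;
  transform: scale(1);
}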

Confident, stable, strong

You can use direct movements, straight lines, symmetrical ease-in-outs. You should avoid blurs, bounces, and overshoots. You can use:

  • quick fade,
  • scale + fade,
  • direct start and stops.

You can use Penner equations like easeInCubic, easeOutCubic and easeInOutCubic.
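
Sketching that out (again with a hypothetical class name; the cubic-bezier value is the easings.net approximation of easeInOutCubic):

.drawer {
  transform: translateX(-100%);
  /* easeInOutCubic: symmetrical and direct, no bounce or overshoot */
  transition: transform 250ms cubic-bezier(0.645, 0.045, 0.355, 1);
}

.drawer.is-open {
  transform: translateX(0);
}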

Lively, energetic, friendly

You can use overshoots, anticipation, and “snappy” easing curves. You can use:

  • overshoot,
  • overshoot + scale,
  • anticipation,
  • anticipation + overshoot.

To get the sense of overshoots and anticipations you can use easing curves like easeInBack, easeOutBack, and easeInOutBack. Those aren’t the only ones though. Anything that sticks out the bottom of the graph will give you anticipation. Anything that sticks out the top of the graph will give you overshoot.

If cubic bezier curves don’t get you quite what you’re going for, you can add keyframes to your animation. You could have keyframes for: 0%, 90%, and 100% where the 90% point is past the 100% point.
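
A minimal sketch of that technique (the keyframe name, distances, and class name are arbitrary):

/* The 90% keyframe travels past the resting point, creating overshoot */
@keyframes nudge-in {
  0%   { transform: translateX(-200px); }
  90%  { transform: translateX(12px); }
  100% { transform: translateX(0); }
}

.badge {
  animation: nudge-in 400ms ease-out both;
}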

Stripe uses a touch of overshoot on their charts and diagrams; nice and subtle. Slack uses a bit of overshoot to create a sense of friendliness in their loader.

Playful, fun, lighthearted

You can use bounces, shape morphs, squashes and stretches. This is probably not the personality for a bank. But it could be for a game, or some other playful product. You can use:

  • bounce,
  • elastic,
  • morph,
  • squash and stretch (springs).

You can use easing equations for the first two, but for the others, they’re really hard to pull off with just CSS. You probably need JavaScript.

The easing curve for elastic movement is a more complicated Penner equation that can’t be done in CSS. GreenSock will help you visualise your elastic easings. For springs, you probably need a dedicated library for spring motion.

Expressing personality with sound

We don’t talk about sound much in web design. There are old angry blog posts about it. And not every website should use sound. But why don’t we even consider it on the web?

We were burnt by those terrible Flash sites with sound on every single button mouseover. And yet the Facebook native app does that today …but in a much more subtle way. The volume is mixed lower, and the sound is flatter; more like a haptic feel. And there’s more variation in the sounds. Just because we did sound badly in the past doesn’t mean we can’t do it well today.

People say they don’t want their computers making sound in an office environment. But isn’t responsive design all about how we don’t just use websites on our desktop computers?

Amber Case has a terrific book about designing products with sound, and she’s all about calm technology. She points out that the larger the display, the less important auditory and tactile feedback becomes. But on smaller screens, the need increases. Maybe that’s why we’re fine with mobile apps making sound but not with our desktop computers doing it?

People say that sound is annoying. That’s like saying siblings are annoying. Sound is annoying when it’s:

  • not appropriate for the situation,
  • played at the wrong time,
  • too loud,
  • not under the user’s control.

But all of those are design decisions that we can control.

So what can we do with sound?

Sound can enhance what we perceive from animation. The “breathe” mode in the Calm meditation app has some lovely animation, and some great sound to go with it. The animation is just a circle getting smaller and bigger—if you took the sound away, it wouldn’t be very impressive.

Sound can also set a mood. Sirin Labs has an extreme example for the Solarin device with futuristic sounds. It’s quite reminiscent of the Flash days, but now it’s all done with browser technologies.

Sound is a powerful brand differentiator. Val now plays sounds (without visuals) from:

  • Slack,
  • Outlook Calendar.

They have strong associations for us. These are earcons: icons for the ears. They can be designed to provoke specific emotions. There was a great explanation on the Blackberry website, of all places (they had a whole design system around their earcons).

Here are some uses of sounds…

Alerts and notifications

You have a new message. You have new email. Your timer is up. You might not be looking at the screen, waiting for those events.

Navigating space

Apple TV has layers of menus. You go “in” and “out” of the layers. As you travel “in” and “out”, the animation is reinforced with sound—an “in” sound and an “out” sound.

Confirming actions

When you buy with Apple Pay, you get auditory feedback. Twitter uses sound for the “pull to refresh” action. It gives you confirmation in a tactile way.

Marking positive moments

This is a great way of making a positive impact in your user’s minds—celebrate the accomplishments. Clear—by Realmac software—gives lovely rising auditory feedback as you tick things off your to-do list. Compare that to hardware products that only make sounds when something goes wrong—they don’t celebrate your accomplishments.

Here are some best practices for user interface sounds:

  • UI sounds should be short, less than 400ms.
  • End on an ascending interval for positive feedback or beginnings.
  • End on a descending interval for negative feedback, ending, or closing.
  • Give the user controls to stop or customise the sound.

When it comes to being expressive with sounds, different intervals can evoke different emotions:

  • Consonant intervals feel pleasant and positive.
  • Dissonant intervals feel strong, active, or negative.
  • Large intervals feel powerful.
  • Octaves convey lightheartedness.

People have made sounds for you if you don’t want to design your own. Octave is a free library of UI sounds. You can buy sounds from motionsound.io, targeted specifically at sounds to go with motions.

Let’s wrap up by exploring where to find your product’s personality:

  • What is it trying to help users accomplish?
  • What is it like? (its mood and disposition)

You can run workshops to answer these questions. You can also do research with your users. You might have one idea about your product’s personality that’s different to your customers’. You need to project a believable personality. Talk to your customers.

Designing for Emotion has some great exercises for finding personality. Conversational Design also has some great exercises in it. Once you have the words to describe your personality, it gets easier to design for it.

So have a think about using motion and sound to express your product’s personality. Be intentional about it. It will also make the web a more interesting place.