
Thursday, October 15th, 2020

Ambient Reassurance – disambiguity

Ambient reassurance is the experience of small, unplanned moments of interaction with colleagues that provide reassurance that you’re on the right track. They provide encouragement and they help us to maintain self belief in those moments where we are liable to lapse into unproductive self doubt or imposter syndrome.

In hindsight, I realise these moments flowed naturally in an office environment.

Thursday, October 8th, 2020

The Widening Responsibility for Front-End Developers | CSS-Tricks

Chris shares his thoughts on the ever-widening skillset required of a so-called front-end developer.

Interestingly, the skillset he mentions halfway through (which is what front-end devs used to need to know) really appeals to me: accessibility, performance, responsiveness, progressive enhancement. But the list that covers modern front-end dev sounds more like a different mindset entirely: APIs, Content Management Systems, business logic …the back of the front end.

And Chris doesn’t even touch on the build processes that front-end devs are expected to be familiar with: version control, build pipelines, package management, and all that crap.

I wish we could return to this:

The bigger picture is that as long as the job is building websites, front-enders are focused on the browser.

Thursday, October 1st, 2020

Uniting the team with Jamstack | Trys Mudford

This is a superb twenty minute presentation by Trys! It’s got everything: a great narrative, technical know-how, and a slick presentation style.

Conference organisers: you should get Trys to speak at your event!

Tuesday, September 29th, 2020

Unobtrusive feedback

Ten years ago I gave a talk at An Event Apart all about interaction design. It was called Paranormal Interactivity. You can watch the video, listen to the audio or read the transcript if you like.

I think it holds up pretty well. There’s one interaction pattern in particular that I think has stood the test of time. In the talk, I introduce this pattern as something you can see in action on Huffduffer:

I was thinking about how to tell the user that something’s happened without distracting them from their task, and I thought beyond the web. I thought about places that provide feedback mechanisms on screens, and I thought of video games.

So we all know Super Mario, right? And if you think about when you’re collecting coins in Super Mario, it doesn’t stop the game and pop up an alert dialogue and say, “You have just collected ten points, OK, Cancel”, right? It just does it. It does it in the background, but it does provide you with a feedback mechanism.

The feedback you get in Super Mario is about the number of points you’ve just gained. When you collect an item that gives you more points, the number of points you’ve gained appears where the item was …and then drifts upwards as it disappears. It’s unobtrusive enough that it won’t distract you from the gameplay you’re concentrating on but it gives you the reassurance that, yes, you have just gained points.

I think this is a neat little feedback mechanism that we can borrow for subtle Ajax interactions on the web. These are actions that don’t change much of the content. The user needs to be able to potentially do lots of these actions on a single page without waiting for feedback every time.

On Huffduffer, for example, you might be looking at a listing of people that you can choose to follow or unfollow. The mechanism for doing that is a button per person. You might potentially be clicking lots of those buttons in quick succession. You want to know that each action has taken effect but you don’t want to be interrupted from your following/unfollowing spree.

You get some feedback in any case: the button changes. Maybe the text updates from “follow” to “unfollow” accompanied by a change in colour (this is what you’ll see on Twitter). The Super Mario style feedback is in addition to that, rather than instead of.

I’ve made a Codepen so you can see a reduced test case of the Super Mario feedback in action.

Here’s the code available as a gist.

It’s a function that takes two arguments: the element that the feedback originates from (pass in a DOM node reference for this), and the contents of the feedback (this can be a string of text or it can be HTML …or SVG). When you call the function with those two arguments, this is what happens (there’s a code sketch after this list):

  1. The JavaScript generates a span element and puts the feedback contents inside it.
  2. Then it positions that element right over the element that the feedback originates from.
  3. Then there’s a CSS transform. The feedback gets a translateY applied so it drifts upward. At the same time it gets its opacity reduced from 1 to 0 so it’s fading away.
  4. Finally there’s a transitionend event that fires when the animation is over. Once that event fires, the generated span is destroyed.
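Putting those steps together, here’s a rough sketch of how such a function might look in vanilla JavaScript. To be clear, this is my illustration of the pattern rather than the exact code from the gist; the function name and the transition values are placeholders.

function unobtrusiveFeedback(origin, contents) {
  // 1. Generate a span element with the feedback contents inside it.
  const feedback = document.createElement('span');
  feedback.innerHTML = contents; // a string of text, HTML …or SVG
  // 2. Position it right over the element the feedback originates from.
  const position = origin.getBoundingClientRect();
  feedback.style.position = 'absolute';
  feedback.style.left = (window.scrollX + position.left) + 'px';
  feedback.style.top = (window.scrollY + position.top) + 'px';
  feedback.style.transition = 'transform 1s ease-out, opacity 1s ease-out';
  document.body.appendChild(feedback);
  // Force a style calculation so the transition has a starting point.
  feedback.getBoundingClientRect();
  // 3. Drift upward with translateY while the opacity goes from 1 to 0.
  feedback.style.transform = 'translateY(-2em)';
  feedback.style.opacity = '0';
  // 4. Destroy the generated span once the transition has finished.
  feedback.addEventListener('transitionend', () => feedback.remove());
}

Calling unobtrusiveFeedback(followButton, '+1') when a follow button is clicked would float a little “+1” up from the button and then clean up after itself.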

When I first used this pattern on Huffduffer, I’m pretty sure I was using jQuery. A few years later I rewrote it in vanilla JavaScript. That was four years ago so I wonder if the code could be improved. Have a go if you fancy it.

Still, even if the code could benefit from an update, I’m pleased that the underlying pattern still holds true. I used it recently on The Session and it’s working a treat for a new Ajax interaction there (bookmarking or unbookmarking an item).

If you end up using this unobtrusive feedback pattern anywhere, please let me know—I’d love to see more examples of it in the wild.

Thursday, August 13th, 2020

BetrayURL

Back in February, I wrote about an excellent proposal by Jake for how browsers could display URLs in a safer way. Crucially, this involved highlighting the important part of the URL, but didn’t involve hiding any part. It’s a really elegant solution.

Turns out it was a Trojan horse. Chrome are now running an experiment where they will do the exact opposite: they will hide parts of the URL instead of highlighting the important part.

You can change this behaviour if you’re in the less than 1% of people who ever change default settings in browsers.

I’m really disappointed to see that Jake’s proposal isn’t going to be implemented. It was a much, much better solution.

No doubt I will hear rejoinders that the “solution” that Chrome is experimenting with is pretty similar to what Jake proposed. Nothing could be further from the truth. Jake’s solution empowered users with knowledge without taking anything away. What Chrome will be doing is the opposite of that, infantilising users and making decisions for them “for their own good.”

Seeing a complete URL is going to become a power-user feature, like View Source or user style sheets.

I’m really sad about that because, as Jake’s proposal demonstrates, it doesn’t have to be that way.

Friday, July 31st, 2020

Pinboard is Eleven (Pinboard Blog)

I probably need to upgrade the Huffduffer server but Maciej nails why that’s an intimidating prospect:

Doing this on a live system is like performing kidney transplants on a playing mariachi band. The best case is that no one notices a change in the music; you chloroform the players one at a time and try to keep a steady hand while the band plays on. The worst case scenario is that the music stops and there is no way to unfix what you broke, just an angry mob. It is very scary.

Thursday, July 23rd, 2020

4 Design Patterns That Violate “Back” Button Expectations – 59% of Sites Get It Wrong - Articles - Baymard Institute

Some interesting research in here around user expectations with the back button:

Generally, we’ve observed that if a new view is sufficiently different visually, or if a new view conceptually feels like a new page, it will be perceived as one — regardless of whether it technically is a new page or not. This has consequences for how a site should handle common product-finding and -exploration elements like overlays, filtering, and sorting. For example, if users click a link and 70% of the view changes to something new, most will perceive this to be a new page, even if it’s technically still the same page, just with a new view loaded in.
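The implied remedy (a sketch of my own, not code from the article) is to give those new-page-feeling views real history entries, so the back button does what users expect:

function openFilterOverlay(overlay) {
  overlay.hidden = false;
  // Give the overlay a history entry so that “back” closes it…
  history.pushState({ overlay: true }, '', '#filters');
}

window.addEventListener('popstate', () => {
  // …and pressing back dismisses the overlay instead of leaving the page.
  const overlay = document.querySelector('.overlay');
  if (overlay) overlay.hidden = true;
});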

Friday, June 19th, 2020

Quotebacks and hypertexts (Interconnected)

What I love about the web is that it’s a hypertext. (Though in recent years it has mostly been used as a janky app delivery platform.)

I am very much enjoying Matt’s thoughts on linking, quoting, transclusion, and associative trails.

My blog is my laboratory workbench where I go through the ideas and paragraphs I’ve picked up along my way, and I twist them and turn them and I see if they fit together. I do that by narrating my way between them. And if they do fit, I try to add another piece, and then another. Writing a post is a process of experimental construction.

And then I follow the trail, and see where it takes me.

Saturday, June 13th, 2020

CSS Custom Properties Fail Without Fallback · Matthias Ott – User Experience Designer

Matthias has a good solution for dealing with the behaviour of CSS custom properties I wrote about: first set your custom properties with the fallback and then use feature queries (@supports) to override those values.
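In other words, something like this (a sketch of the approach described, not Matthias’s exact code; the variable name is a placeholder):

:root {
  --mycolor: red;
}
@supports (background-color: color(display-p3 1 0 0)) {
  :root {
    --mycolor: color(display-p3 1 0 0);
  }
}
p {
  background-color: var(--mycolor);
}

Because the custom property always holds a value the browser understands, it never becomes invalid at computed-value time.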

Quotebacks

This looks like a nifty tool for blogs:

Quotebacks is a tool that makes it easy to grab snippets of text from around the web and convert them into embeddable blockquote web components.

Thursday, June 11th, 2020

CSS custom properties and the cascade

When I wrote about programming CSS to perform Sass colour functions I said this about the brilliant Lea Verou:

As so often happens when I’m reading something written by Lea—or seeing her give a talk—light bulbs started popping over my head (my usual response to Lea’s knowledge bombs is either “I didn’t know you could do that!” or “I never thought of doing that!”).

Well, it happened again. This time I was reading her post about hybrid positioning with CSS variables and max(). But the main topic of the post wasn’t the part that made me go “Huh! I never knew that!”. Towards the end of her article she explained something about the way that browsers evaluate CSS custom properties:

The browser doesn’t know if your property value is valid until the variable is resolved, and by then it has already processed the cascade and has thrown away any potential fallbacks.

I’m used to being able to rely on the cascade. Let’s say I’m going to set a background colour on paragraphs:

p {
  background-color: red;
  background-color: color(display-p3 1 0 0);
}

First I’ve set a background colour using a good ol’ fashioned keyword, supported in browsers since day one. Then I declare the background colour using the new-fangled color() function which is supported in very few browsers. That’s okay though. I can confidently rely on the cascade to fall back to the earlier declaration. Paragraphs will still have a red background colour.

But if I store the background colour in a custom property, I can no longer rely on the cascade.

:root {
  --myvariable: color(display-p3 1 0 0);
}
p {
  background-color: red;
  background-color: var(--myvariable);
}

All I’ve done is swapped out the hard-coded color() value for a custom property but now the browser behaves differently. Instead of getting a red background colour, I get the browser default value. As Lea explains:

…it will make the property invalid at computed value time.

The spec says:

When this happens, the computed value of the property is either the property’s inherited value or its initial value depending on whether the property is inherited or not, respectively, as if the property’s value had been specified as the unset keyword.

So if a browser doesn’t understand the color() function, it’s as if I’ve said:

background-color: unset;

This took me by surprise. I’m so used to being able to rely on the cascade in CSS—it’s one of the most powerful and most useful features in this programming language. Could it be, I wondered, that the powers-that-be have violated the principle of least surprise in specifying this behaviour?

But a note in the spec explains further:

Note: The invalid at computed-value time concept exists because variables can’t “fail early” like other syntax errors can, so by the time the user agent realizes a property value is invalid, it’s already thrown away the other cascaded values.

Ah, right! So first of all browsers figure out the cascade and then they evaluate custom properties. If a custom property evaluates to gobbledygook, it’s too late to figure out what the cascade would’ve fallen back to.

Thinking about it, this makes total sense. Remember that CSS custom properties aren’t like Sass variables. They aren’t evaluated once and then set in stone. They’re more like let than const. They can be updated in real time. You can update them from JavaScript too. It’s entirely possible to update CSS custom properties rapidly in response to events like, say, the user scrolling or moving their mouse. If the browser had to recalculate the cascade every time a custom property didn’t evaluate correctly, I imagine it would be an enormous performance bottleneck.
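Here’s a hypothetical sketch of that kind of rapid updating, feeding the mouse position into custom properties in real time:

document.addEventListener('mousemove', (event) => {
  const root = document.documentElement;
  root.style.setProperty('--pointer-x', event.clientX + 'px');
  root.style.setProperty('--pointer-y', event.clientY + 'px');
});

Any rule that uses var(--pointer-x) gets a new value on every single mouse move. If each of those updates could invalidate a declaration and force the cascade to be recalculated, you can see where the performance bottleneck would be.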

So even though this behaviour surprised me at first, it makes sense on reflection.

I’ve probably done a terrible job explaining the behaviour here, so I’ve made a Codepen. Although that may also do an equally terrible job.

(Thanks to Amber for talking through this with me and encouraging me to blog about it. And thanks to Lea for expanding my mind. Again.)

Saturday, June 6th, 2020

Hybrid positioning with CSS variables and max() – Lea Verou

Yet another clever technique from Lea. But I’m also bookmarking this one because of something she points out about custom properties:

The browser doesn’t know if your property value is valid until the variable is resolved, and by then it has already processed the cascade and has thrown away any potential fallbacks.

That explains an issue I was seeing recently! I couldn’t understand why an older browser wasn’t getting the fallback I had declared earlier in the CSS. Turns out that custom properties mess with that expectation.

Friday, May 8th, 2020

SofaConf 2020 - a technical write-up | Trys Mudford

Trys describes the backend architecture of the excellent Sofa Conf website. In short, it’s a Jamstack dream: all of the convenience and familiarity of using a database-driven CMS (Craft), combined with all the speed and resilience of using a static site generator (Eleventy).

I love the fact that anyone on the Clearleft events team can push to production with a Slack message.

I also love that the site is Lighthousetastically fast.

Monday, April 6th, 2020

Local-first software: You own your data, in spite of the cloud

The cloud gives us collaboration, but old-fashioned apps give us ownership. Can’t we have the best of both worlds?

We would like both the convenient cross-device access and real-time collaboration provided by cloud apps, and also the personal ownership of your own data embodied by “old-fashioned” software.

This is a very in-depth look at the mindset and the challenges involved in building truly local-first software—something that Tantek has also been thinking about.

Monday, March 23rd, 2020

Get Static – Eric’s Archived Thoughts

Performance matters …especially when the chips are down:

If you are in charge of a web site that provides even slightly important information, or important services, it’s time to get static. I’m thinking here of sites for places like health departments (and pretty much all government services), hospitals and clinics, utility services, food delivery and ordering, and I’m sure there are more that haven’t occurred to me. As much as you possibly can, get it down to static HTML and CSS and maybe a tiny bit of enhancing JS, and pare away every byte you can.

Friday, March 20th, 2020

What Does `playsinline` Mean in Web Video? | CSS-Tricks

I have to admit, I don’t think I even knew of the existence of the playsinline attribute on the video element. Here, Chris runs through all the attributes you can put in there.

Sunday, March 8th, 2020

The 3 Laws of Serverless - Burke Holland

“Serverless” is a buzzword. We can’t seem to agree on what it actually means, so it ends up meaning nothing at all. Much like “cloud” or “dynamic” or “synergy”. You just wait for the right time in a meeting to drop it, walk to the board and draw a Venn Diagram, and then just sit back and wait for your well-deserved promotion.

That’s very true, and I do not like the term “serverless” for the rather obvious reason that it’s all about servers (someone else’s servers, that is). But these three principles are handy for figuring out if you’re building in a serverlessy kind of way:

  1. You have no knowledge of the underlying system where your code runs.
  2. Scaling is an intrinsic attribute of the technology; so much so that it just happens automatically.
  3. You only pay for what you use.

Abstraction; scale; consumption.

Monday, February 3rd, 2020

N26 and lack of JavaScript | Hugo Giraudel

JavaScript is fickle. It can fail to load. It can be disabled. It can be blocked. It can fail to run. It probably is fine most of the time, but when it fails, everything tends to go bad. And having such a hard point of failure is not ideal.

This is a very important point:

It’s important not to try making the no-JS experience work like the full one. The interface has to be revisited. Some features might even have to be removed, or dramatically reduced in scope. That’s also okay. As long as the main features are there and things work nicely, it should be fine that the experience is not as polished.

Tuesday, November 19th, 2019

Mental models

I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

Tuesday, October 29th, 2019

Periodic background sync

Yesterday I wrote about how much I’d like to see silent push for the web:

I’d really like silent push for the web—the ability to update a cache with fresh content as soon as it’s published; that would be nifty! At the same time, I understand the concerns. It feels more powerful than other permission-based APIs like notifications.

Today, John Holt Ripley responded on Twitter:

hi there, just read your blog post about Silent Push for the web, and wondering if Periodic Background Sync would cover a few of those use cases?

Periodic background sync looks very interesting indeed!

It’s not the same as silent push. As the name suggests, this is about your service worker waking up periodically and potentially fetching (and caching) fresh content from the network. So the service worker is polling rather than receiving a push. But I’ll take it! It’s definitely close enough for the kind of use-cases I’ve been thinking about.
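For what it’s worth, here’s a rough sketch of how the proposed API shapes up. It’s experimental at the time of writing, so the details may well change; the tag name and URL here are placeholders.

// In the page: register a periodic sync, guarded with feature detection.
async function registerFreshContentSync() {
  const registration = await navigator.serviceWorker.ready;
  if ('periodicSync' in registration) {
    await registration.periodicSync.register('fresh-content', {
      minInterval: 24 * 60 * 60 * 1000 // at most once a day
    });
  }
}

// In the service worker: wake up, fetch fresh content, and cache it.
self.addEventListener('periodicsync', (event) => {
  if (event.tag === 'fresh-content') {
    event.waitUntil(
      caches.open('pages').then((cache) => cache.add('/latest'))
    );
  }
});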

Interestingly, periodic background sync also ties into the other part of what I was writing about: permissions. I mentioned that adding a site to the home screen could be interpreted as a signal to potentially allow more permissions (or at least allow prompts for more permissions).

Well, Chromium has a document outlining metrics for attempting to gauge site engagement. There’s some good thinking in there.