Move Fast and Don’t Break Things by Scott Jehl

Scott Jehl is speaking at An Event Apart in Seattle—yay! His talk is called Move Fast and Don’t Break Things:

Performance is a high priority for any site of scale today, but it can be easier to make a site fast than to keep it that way. As a site’s features and design evolves, its performance is often threatened for a number of reasons, making it hard to ensure fast, resilient access to services. In this session, Scott will draw from real-world examples where business goals and other priorities have conflicted with page performance, and share some strategies and practices that have helped major sites overcome those challenges to defend their speed without compromises.

The title is a riff on the “move fast and break things” motto, which comes from a more naive time on the web. But Scott finds part of it relatable. Things break. We want to move fast without breaking things.

This is a performance talk, which is another kind of moving fast. Scott starts with a brief history of not breaking websites. He’s been chipping away at websites for 20 years now. Remember Positioning Is Everything? How about Quirksmode? That one's still around.

In the early days, building a website that was "not broken" was difficult, but it was difficult for different reasons. We were focused on consistency. We had to deal with differences between browsers. There were two ways of dealing with browsers: browser detection and feature detection.

The feature-based approach was more sustainable but harder. It fits nicely with the practice of progressive enhancement. It’s a good mindset for dealing with the explosion of devices that kicked off later. Touch screens made us rethink our mouse- and hover-centric assumptions. That made us realise how much keyboard-driven access mattered all along.

Browsers exploded too. And our data networks changed. With this explosion of considerations, it was clear that our early ideas of “not broken” didn’t work. Our notion of what constituted “not broken” was itself broken. Consistency just doesn’t cut it.

But there was a comforting part to this too. It turned out that progressive enhancement was there to help …even though we didn’t know what new devices were going to appear. This is a recurring theme throughout Scott’s career. So given all these benefits of progressive enhancement, it shouldn’t be surprising that it turns out to be really good for performance too. If you practice progressive enhancement, you’re kind of a performance expert already.

People started talking about new performance metrics that we should care about. We’ve got new tools, like Page Speed Insights. It gives tangible advice on how to test things. Web Page Test is another great tool. Once you prove you’re a human, Web Page Test will give you loads of details on how a page loaded. And you get this great visual timeline.

This is where we can start to discuss the metrics we want to focus on. Traditionally, we focused on file size, which still matters. But for goal-setting, we want to focus on user-perceived metrics.

First Meaningful Content. It’s about how soon the page appears to be useful to a user. Progressive enhancement is a perfect match for this! When you first make a request to a website, it’s usually for a web page. But to render that page, the browser might need to request more files like CSS or JavaScript. All of this adds up. From a user perspective, if the HTML is downloaded but the browser can’t render it, that’s broken.

The average time for this on the web right now is around six seconds. That’s broken. The render blockers are the problem here.

Consider assets like scripts. Can you get the browser to load them without holding up the rendering of the page? If you can add async or defer to a script element in the head, you should do that. Sometimes that’s not an option though.
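
For example, a script that doesn’t need to block rendering can be flagged right in the markup (the filenames here are placeholders):

<script src="analytics.js" async></script>
<script src="enhancements.js" defer></script>

async runs the script as soon as it arrives; defer waits until the document has been parsed, and preserves the order of deferred scripts.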

For CSS, it’s tricky. We’ve delivered the HTML that we need but we’ve got to wait for the CSS before rendering it. So what can you bundle into that initial payload?

You can use server push. This is a new technology that comes with HTTP/2. H2, as it’s called, is very performance-focused. Just turning on H2 will probably make your site faster. Server push allows the server to send files to the browser before the browser has even asked for them. You can do this with directives in Apache, for example. You could push CSS whenever an HTML file is requested. But we need to be careful not to go too far. You don’t want to send too much.
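
With Apache’s mod_http2, a push directive might look something like this (the paths are placeholders; the exact configuration depends on your setup):

<Location /index.html>
    H2PushResource add /css/site.css
</Location>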

Server push is great in moderation. But it is new, and it may not even be supported by your server.

Another option is to inline CSS (well, actually Scott, this is technically embedding CSS). It’s great for first render, but isn’t it wasteful for caching? Scott has a clever pattern that uses the Cache API to grab the contents of the inlined CSS and put a copy of its contents into the cache. Then it’s ready to be served up by a service worker.
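
A minimal sketch of the idea, assuming the inlined styles live in a style element with a known ID and the service worker later serves CSS from a cache named css:

const styles = document.querySelector('style#critical').textContent;
if ('caches' in window) {
  caches.open('css')
    .then(cache => cache.put('/css/site.css', new Response(styles, {
      headers: {'Content-Type': 'text/css'}
    })));
}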

By the way, this isn’t just for CSS. You could grab the contents of inlined SVGs and create cached versions for later use.

So inlining CSS is good, but again, in moderation. You don’t want to embed anything bigger than 15 or 20 kilobytes. You might want to separate out the critical CSS and only embed that on first render. You don’t need to go through your CSS by hand to figure out what’s critical—there are tools to do this that integrate with your build process. Embed that critical CSS into the head of your document, and also start preloading the full CSS. Here’s a clever technique that turns a preload link into a stylesheet link:

<link rel="preload" href="site.css" as="style" onload="this.rel='stylesheet'">

Also include this:

<noscript><link rel="stylesheet" href="site.css"></noscript>

You can also optimise for return visits. It’s all about the cache.

In the past, we might’ve used a cookie to distinguish a returning visitor from a first-time visitor. But cookies kind of suck. Here’s something that Scott has been thinking about: service workers can intercept outgoing requests. A service worker could send a header that matches the current build of CSS. On the server, we can check for this header. If it’s not the latest CSS, we can server push the latest version, or inline it.

The neat thing about service workers is that they have to install before they take over. Scott makes use of this install event to put important assets into a cache. Only once that is done do we start adding that extra header to outgoing requests.
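
A rough sketch of the idea (the header name and cached file are hypothetical, not from Scott’s actual implementation):

self.addEventListener('install', event => {
  // Prime the cache with the current CSS before taking over
  event.waitUntil(
    caches.open('css').then(cache => cache.addAll(['/css/site.css']))
  );
});

self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    // Tell the server which CSS build we already have cached
    const headers = new Headers(event.request.headers);
    headers.set('X-CSS-Version', 'abc123');
    event.respondWith(fetch(new Request(event.request.url, {headers})));
  }
});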

Watch out for an article on the Filament Group blog on this technique!

With performance, more weight doesn’t have to mean more wait. You can have a heavy page that still appears to load quickly by altering the prioritisation of what loads first.

Web pages are very heavy now. There’s a real cost to every byte. Tim’s WhatDoesMySiteCost.com shows that the CNN home page costs almost fifty cents to load for someone in America!

Time to interactive. This is the time before a user can use what’s on the screen. The issue is almost always with JavaScript. The page looks usable, but you can’t use it yet.

Addy Osmani suggests we should get to interactive in under five seconds on a 3G network on a median mobile device. Your iPhone is not a median mobile device. A typical phone takes six seconds to process a megabyte of JavaScript after it has downloaded. So even if the network is fast, the time to interactive can still be very long.

This all comes down to our industry’s increasing reliance on JavaScript just to render content. The pendulum seems to swing back and forth between client-side and server-side rendering. It’s been great to see libraries like Vue and Ember embrace server-side rendering.

But even with server-side rendering, there’s still usually a rehydration step where all the JavaScript gets parsed and that really affects time to interaction.

Code splitting can help. Webpack can do this. That helps with first-party JavaScript, but what about third-party JavaScript?
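
With webpack, a dynamic import() is enough to split a chunk out of the main bundle—a minimal sketch (the module name is made up):

const button = document.querySelector('button');
button.addEventListener('click', () => {
  // webpack turns this into a separate chunk, fetched on demand
  import('./heavy-widget.js')
    .then(module => module.init())
    .catch(error => console.error('Chunk failed to load', error));
});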

Scott believes it’s easier to make a fast website than to keep a website fast. And that’s down to all the third-party scripts that people throw in: analytics, ads, tracking. They can wreak havoc on all your hard work.

These scripts apparently contribute to the business model, so it can be hard for us to make the case for removing them. Tools like SpeedCurve can help people stay informed on the impact of these scripts. It allows you to set up performance budgets and it shows you when pages go over budget. When that happens, we have leverage to step in and push back.

Assuming you lose that battle, what else can we do?

These days, lots of A/B testing and personalisation happens on the client side. The tooling is easy to use. But they are costly!

A typical problematic pattern is this: the server sends one version of the page, and once the page is loaded, the whole page gets replaced with a different layout targeted at the user. This leads to a terrifying new metric that Scott calls Second Meaningful Content.

Assuming we can’t remove the madness, what can we do? We could at least not do this for first-time visits. We could load the scripts asynchronously. We can preload the scripts at the top of the page. But ideally we want to move these things to the server. Server-side A/B testing and personalisation have existed for a while now.

Scott has been experimenting with a middleware solution. There’s this idea of server workers that Cloudflare is offering. You can manipulate the page that gets sent from the server to the browser—all the things you would do for an A/B test. Scott is doing this by using comments in the HTML to demarcate which portions of the page should be filtered for testing. The server worker then deletes a block for some users, and deletes a different block for other users. Scott has written about this approach.
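
A sketch of how that filtering might look in a Cloudflare Worker (the comment markers and the bucketing logic here are illustrative, not Scott’s actual code):

addEventListener('fetch', event => {
  event.respondWith(abTest(event.request));
});

async function abTest(request) {
  const response = await fetch(request);
  const html = await response.text();
  // In real life the variant would be sticky (e.g. stored in a cookie)
  const losingVariant = Math.random() < 0.5 ? 'b' : 'a';
  // Delete the block demarcated for the other variant
  const pattern = new RegExp(
    `<!-- variant-${losingVariant} -->[\\s\\S]*?<!-- /variant-${losingVariant} -->`, 'g'
  );
  return new Response(html.replace(pattern, ''), response);
}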

The point here isn’t about using Cloudflare. The broader point is that it’s much faster to do these things on the server. We need to defend our user’s time.

Another issue, other than third-party scripts, is the page weight on home pages and landing pages. Marketing teams love to fill these things with enticing rich imagery and carousels. They’re really difficult to keep performant because they change all the time. Sometimes we’re not even in control of the source code of these pages.

We can advocate for new best practices like responsive images. The srcset attribute on the img element; the picture element for when you need more control. These are great tools. What’s not so great is writing the markup. It’s confusing! Ideally we’d have a CMS drive this, but a lot of the time, landing pages fall outside of the purview of the CMS.
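
The confusion is understandable—even a fairly simple responsive image ends up looking something like this (filenames and breakpoints are placeholders):

<img src="hero-small.jpg"
     srcset="hero-small.jpg 600w,
             hero-medium.jpg 1200w,
             hero-large.jpg 2000w"
     sizes="(min-width: 40em) 50vw, 100vw"
     alt="…">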

Scott has been using Vue.js to make a responsive image builder—a form that people can paste their URLs into, which spits out the markup to use. Anything we can do by creating tools like these really helps to defend the performance of a site.

Another thing we can do is lazy loading. Focus on the assets. The BBC homepage uses some lazy loading for images—they blink into view as you scroll down the page. They use LazySizes, which you can find on GitHub. You use data- attributes to list your image sources. Scott realises that LazySizes is not progressive enhancement. He wouldn’t recommend using it on all images, just some images further down the page.
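
The markup looks something like this—a minimal example, with the real sources in data- attributes so the browser doesn’t fetch them up front:

<img data-src="photo.jpg"
     data-srcset="photo.jpg 1x, photo@2x.jpg 2x"
     class="lazyload"
     alt="…">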

But thankfully, we won’t need these workarounds soon. Soon we’ll have lazy loading in browsers. There’s a lazyload attribute that we’ll be able to set on img and iframe elements:

<img src=".." alt="..." lazyload="on">

It’s not implemented yet, but it’s coming in Chrome. It might be that this behaviour even becomes the default way of loading images in browsers.

If you dig under the hood of the implementation coming in Chrome, it actually requests all the images, but the ones being lazyloaded are initially fetched only partially, with a 206 (Partial Content) response. That gives the browser enough information to lay out the page without loading each whole image up front.

To wrap up, Scott takes comfort from the fact that there are resilient patterns out there to help us. And remember, it is our job to defend the user’s experience.

How to Think Like a Front-End Developer by Chris Coyier

Alright! It’s day two of An Event Apart in Seattle. The first speaker of the day is Chris Coyier. His talk is called How to Think Like a Front-End Developer. From the website:

The job title “front-end developer” is very real: job boards around the world confirm that. But what is that job, exactly? What do you need to know to do it? You might think those answers are pretty cut and dried, but they’re anything but; front-end development is going through something of an identity crisis. In this engaging talk, Chris will explore this identity through the lens of someone who has self-identified as a front-end developer for a few decades, but more interestingly, through many conversations he’s had with other successful front-end developers. You’ll see just how differently this job can be done and how differently people and companies can think of this role—not just for the sake of doing so, but because you’ll learn to be better at your own jobs by understanding how other people are good at theirs.

I’m going to see if I can keep up with Chris’s frenetic pace…

Chris has his own thoughts about what front-end dev is but he wants to share other ideas too. First of all, some grammar:

I work as a front-end developer.

I work on the front end.

Those are correct. These are not:

I work as a front end developer.

I work on the front-end.

And this is just not a word:

Frontend.

Lots of people are hiring front-end developers. So it’s definitely a job and a common job title. But what does it mean? Chris and Dave talked to eight different people on their Shop Talk Show podcast. Some highlights:

Eric feels that the term “front-end developer” is newer than the CSS Zen Garden. Everyone was a webmaster, or as we’d say now, a full-stack developer. But if someone back then used the term “front-end developer”, he’d know what it meant.

Mina says it deals with things you can see. If it’s a user-facing interface, that’s front-end development.

Trent says that he thinks of himself as a web designer and web builder. He doesn’t feel he has the deep expertise of a developer, and yet he spends all of his time in the browser.

So our job is in the browser. You deal with the browser (moreso than other roles). And by the way, there are a lot of browsers out there.

Maybe the user is what differentiates front-end work. Monica says that a back-end developer is allowed not to care about the user if their job is putting a database together. It’s totally fine not to call yourself a front-end developer, but if you do, you need to care about the user.

There are tons of different devices and browsers. It’s overwhelming. So we just gave up.

So, a front-end developer:

  • Is a job and a job title.
  • It deals with browsers, devices, and users.
  • But what skills does it involve?

It’s taken for granted that you can use a computer. There’s also the soft skills of interacting with co-workers. Then there are the language-specific core skills. Finally, there are the bonus skills—all the stuff that makes you you.

Core skills

The languages you need to understand well enough to read, write, and maintain them.

HTML and CSS. Definitely. You don’t come across front-end developers who don’t do those languages. But what about JavaScript?

Eric says it’s fine if you know lots of JavaScript but it’s also fine if you don’t write everything from scratch. But you can’t be oblivious to it. You need to understand what it can do.

So let’s put JavaScript into the bucket of core skills too.

Peggy believes that as a front-end developer you need to have a basic proficiency in accessibility too. This is, after all, about user-facing interfaces.

Bonus skills

The Figma team have a somewhat over-engineered graphic of all the skillsets that people might have, between “baseline” and “supplementary”.

Perhaps we all share a common trunk of skills, and then we branch in different directions.

Right now though, it feels like front-end development is having an identity crisis. It’s all about JavaScript, which is eating the planet.

JavaScript

JavaScript is crazy popular now. It’s unignorable. Yes, it’s the language in the browser, but now it’s also the language in loads of other places too.

Steven Davis says maybe we need to fork the term front-end development. Maybe we need to have UX engineers and JavaScript engineers. Can one person be great at both? Maybe the trunk of skills forks in two very different directions.

Vernon Joyce called this an identity crisis. The concepts in JavaScript frameworks are very alien to people with a background in HTML, CSS, and basic interactive JavaScript.

You could imagine two people called front-end developers meeting, and having nothing in common to talk about. Maybe sports.

Brad says he doesn’t want to be configuring build tools. He thinks of himself as being at the front of front-end development, whereas other people are at the back of front-end development.

This divide is super frustrating to people right now.

Hiring

Michael Scharnagl brings up the point about how it’s affecting hiring. Back-end developers are being replaced with JavaScript engineers. Lots of things that used to be back-end tasks are now happening on the client side. Component-driven design, site-level architecture, routing, getting data from the back end, mutating data, talking to APIs, and managing state—all of those things are now largely a front-end concern.

Let’s look at CodePen. There’s a little heart icon on each pen. It’s an icon component. And the combination of the heart and the overall count is also a component. And the bar of items altogether—that’s also a component. And the pen it sits under is a component. And the page it’s in is a component. And the URL for that page is a component. Now the whole site is a front-end developer’s concern.

In the past, a front-end developer would ask a back-end developer for an API endpoint. Now with GraphQL, the front-end developer can craft a query to get exactly what they need. Sure, the GraphQL stuff had to be set up in the first place, but that’s a one-time task. Once it’s set up, the front-end developer has everything they need.
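
For instance, instead of requesting a bespoke endpoint, the front-end developer might send a query like this (the endpoint and field names are invented for illustration):

const query = `{
  pen(id: "abc123") {
    title
    lovesCount
  }
}`;

fetch('/graphql', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({query})
})
.then(response => response.json())
.then(({data}) => console.log(data.pen));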

All the old work hasn’t gone away either. Semantics, accessibility, styling—that’s still the work of a front-end developer as well as all of the new stuff listed above.

Hiring is a big part of this. Lara Schenk talks about going for an interview where she met 90% of the skills listed. Then in an interview, she was asked to do a fizzbuzz test. That’s not the way that Lara thinks. She would’ve been great for that job, but this single task derailed her. She wrote about it, and got snarky comments from people who thought she should’ve been able to do the task. But Lara’s main point was the mismatch between what was advertised and what was actually being hired for.

You see a job posting for front-end developer. Who is that for? Is it for someone into React, webpack, and GraphQL? Or is it for someone into SVG, interaction design, and accessibility? They’re both front-end developers. And remember, they can learn one another’s skills, but when it comes to hiring, it has to be about the skills people have right now.

Peggy talks about how specialised your work can be. You can specialise in SVG. You can specialise in APIs and data.

We’re probably not going to solve this right now. The hiring part is definitely the worst part right now. One solution is to use plain language in job posts. Make it clear what you’re looking for right now and explain what background you’re coming from. Use words instead of a laundry list of requirements.

Heydon Pickering talks about full-stack developers. In his experience, their core skills are hardcore computer science skills, with the front-end work bolted on.

Brad Frost concurs: a full-stack developer usually turns out to be a back-end developer who also does some front end—it tends not to be the other way around. The output tends to be the badly-sketched front of the horse.

Even if there is a divide, that doesn’t absolve any of us from doing a good job. That’s true whether it’s computer science tasks or markup and CSS.

Despite the divide, performance, accessibility, and user experience are all our jobs.

Maybe this term “front-end developer” needs rethinking.

The brain game

Let’s peek into the minds of very different front-end developers. Chris and Dave went to Dribbble, pulled up a bunch of designs and put them in front of their guests on the Shop Talk Show.

Here’s a design of a page.

  • Brad looks at the design and sees a lot of components of different sizes and complexity.
  • Mina sees a bunch of media objects.
  • Eric sees HTML structures. That’s a heading. That’s a list. Over there is an unordered list.
  • Sam sees a lot of typography. She sees a type system.
  • Trent immediately starts thinking about how the design is supposed to work in different screen sizes.

Here’s a different, more image-heavy design.

  • Mina would love to tackle the animations.
  • Trent sees interesting textures and noise. He wonders how he could achieve those effects without exporting giant image files.
  • Brad, unsurprisingly, sees components, even in a seemingly bespoke layout.
  • Eric immediately sees a lot of SVG.
  • Sam needs to know what the HTML is.

Here’s a more geometric design.

  • Sam is drawn to the typography.
  • Mina sees an opportunity to use writing modes.
  • Trent sees a design that would reflow and reshape itself well.
  • Eric sees something with writing mode, grid, and custom fonts.

Here’s a financial mobile UI.

  • Trent wants to run it through a colour-contrast analyser, and he wants to know if the font size is too small.

Here’s a crazy festival website.

  • Mina wonders if it needs a background video, but worries about the performance.

Here’s an on-trend mobile design.

  • Monica sees something that looks like every other website.
  • Ben wonders whether it will work in other parts of the world. How will the interactions work? Separate pages or transitions? How will it feel?

Here’s an image-heavy design.

  • Monica wonders about the priority of which images to load first.

Here’s an extreme navigation with big images.

  • Ben worries about the performance on slow connections.
  • Monica gets stressed out about how much happens when you just click on a link.
  • Peggy sees something static and imagines using Gatsby for it.

Here’s a design that’s map-based.

  • Ben worries about the size of the touch targets.
  • Monica sees an opportunity to use SVGs.

Here’s a card UI.

  • Ben wonders what the browser support is. Can we use CSS grid or do we have to use something older?
  • Monica worries that this needs drag’n’drop. Now you’ve got a nightmare scenario.

Chris has been thinking about and writing about this topic of what makes someone a front-end developer, and what makes someone a good front-end developer. The debate will continue…

Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archaeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally, one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see the explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.

Et voilà!

I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book which was on our desk at all times

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

An nth-letter selector in CSS

Variable fonts are a very exciting and powerful new addition to the toolbox of web design. They were very much at the centre of discussion at this year’s Ampersand conference.

A lot of the demonstrations of the power of variable fonts are showing how it can be used to make letter-by-letter adjustments. The Ampersand website itself does this with the logo. See also: the brilliant demos by Mandy. It’s getting to the point where logotypes can be sculpted and adjusted just-so using CSS and raw text—no images required.

I find this to be thrilling, but there’s a fly in the ointment. In order to style something in CSS, you need a selector to target it. If you’re going to style individual letters, you need to wrap each one in an HTML element so that you can then select it in CSS.

For the Ampersand logo, we had to wrap each letter in a span (and then, because that might cause each letter to be read out individually instead of all of them as a single word, we applied some ARIA shenanigans to the containing element). There’s even a JavaScript library—Splitting.js—that will do this for you.

But if the whole point of using HTML is that the content is accessible, copyable, and pastable, isn’t it a bit of a shame that we then compromise the markup—and the accessibility—by wrapping individual letters in presentational tags?

What if there were an ::nth-letter selector in CSS?
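
Something like this, perhaps—purely hypothetical syntax, since no such selector exists:

h1::nth-letter(2) {
  font-variation-settings: "wght" 700, "wdth" 125;
}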

There’s some prior art here. We’ve already got ::first-letter (and now the initial-letter property or whatever it ends up being called). If we can target the first letter in a piece of text, why not the second, or third, or nth?

It raises some questions. What constitutes a letter? Would it be better if we talked about ::first-character, ::initial-character, ::nth-character, and so on?

Even then, there are some tricksy things to figure out. What’s the third character in this piece of markup?

<p>AB<span>CD</span>EF</p>

Is it “C”, because that’s the third character regardless of nesting? Or is it “E”, because technically that’s the third character token that’s a direct child of the parent element?

I imagine that implementing ::nth-letter (or ::nth-character) would be quite complex so there would probably be very little appetite for it from browser makers. But it doesn’t seem as problematic as some selectors we’ve already got.

Take ::first-line, for example. That runs into one of the biggest objections to adding new CSS selectors: it’s a selector that depends on layout.

Think about it. The browser has to first calculate how many characters are in the first line of an element (like, say, a paragraph). Having figured that out, the browser can then apply the styles declared in the ::first-line selector. But those styles may involve font sizing updates that change the number of characters in the first line. Paradox!

(Technically, only a subset of CSS properties can be applied to ::first-line, but that subset includes font-size so the paradox remains.)

I checked to see if ::first-line was included in one of my favourite documents: Incomplete List of Mistakes in the Design of CSS. It isn’t.

So compared to the logic-bending paradoxes of ::first-line, an ::nth-letter selector would be relatively straightforward. But that in itself isn’t a good enough reason for it to exist. As the CSS Working Group FAQs say:

The fact that we’ve made one mistake isn’t an argument for repeating the mistake.

A new selector needs to solve specific use cases. I would argue that all the letter-by-letter uses of variable fonts that we’re seeing demonstrate the use cases, and the number of these examples is only going to increase. The very fact that JavaScript libraries exist to solve this problem shows that there’s a need here (and we’ve seen the pattern of common JavaScript use-cases ending up in CSS before—rollovers, animation, etc.).

Now, I know that browser makers would like us to figure out how proposed CSS features should work by polyfilling a solution with Houdini. But would that work for a selector? I don’t know much about Houdini so I asked Una. She pointed me to a proposal by Greg and Tab for a full-on parser in Houdini. But that’s a loooong way off. Until then, we must petition our case to the browser gods.

This is not a new suggestion.

Anne van Kesteren proposed ::nth-letter way back in 2003:

While I’m talking about CSS, I would also like to have ::nth-line(n), ::nth-letter(n) and ::nth-word(n), any thoughts?

Trent called for ::nth-letter in January 2011:

I think this would be the ideal solution from a web designer’s perspective. No javascript would be required, and 100% of the styling would be handled right where it should be—in the CSS.

Chris repeated the call in October of 2011:

Of all of these “new” selectors, ::nth-letter is likely the most useful.

In 2012, Bram linked to a blog post (now unavailable) from Adobe indicating that they were working on ::nth-letter for Webkit. That was the last anyone’s seen of this elusive pseudo-element.

In 2013, Chris (again) included ::nth-letter in his wishlist for CSS. So say we all.

Declaration

I like the robustness that comes with declarative languages. I also like the power that comes with imperative languages. Best of all, I like having the choice.

Take the video and audio elements, for example. If you want, you can embed a video or audio file into a web page using a straightforward declaration in HTML.

<audio src="..." controls><!-- fallback goes here --></audio>

Straightaway, that covers 80%-90% of use cases. But if you need to do more—like, provide your own custom controls—there’s a corresponding API that’s exposed in JavaScript. Using that API, you can do everything that you can do with the HTML element, and a whole lot more besides.

It’s a similar story with animation. CSS provides plenty of animation power, but it’s limited in the events that can trigger the animations. That’s okay. There’s a corresponding JavaScript API that gives you more power. Again, the CSS declarations cover 80%-90% of use cases, but for anyone in that 10%-20%, the web animation API is there to help.
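
As a quick sketch of what that imperative counterpart looks like (the element and timing values are arbitrary):

const heading = document.querySelector('h1');
heading.animate(
  [{opacity: 0}, {opacity: 1}],
  {duration: 500, easing: 'ease-out', fill: 'forwards'}
);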

Client-side form validation is another good example. For most of us, the HTML attributes—required, type, etc.—are probably enough most of the time.

<input type="email" required />

When we need more fine-grained control, there’s a validation API available in JavaScript (yes, yes, I know that the API itself is problematic, but you get the point).
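
A small sketch of that imperative counterpart, using the constraint validation API (the message wording is mine):

const email = document.querySelector('input[type="email"]');
email.addEventListener('input', () => {
  if (email.validity.typeMismatch) {
    email.setCustomValidity('Please enter a valid email address.');
  } else {
    email.setCustomValidity(''); // an empty string clears the error
  }
});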

I really like this design pattern. Cover 80% of the use cases with a declarative solution in HTML, but also provide an imperative alternative in JavaScript that gives more power. HTML5 has plenty of examples of this pattern. But I feel like the history of web standards has a few missed opportunities too.

Geolocation is a good example of an unbalanced feature. If you want to use it, you must use JavaScript. There is no declarative alternative. This doesn’t exist:

<input type="geolocation" />

That’s a shame. Anyone writing a form that asks for the user’s location—in order to submit that information to a server for processing—must write some JavaScript. That’s okay, I guess, but it’s always going to be that bit more fragile and error-prone compared to markup.

(And just in case you’re thinking of the fallback—which would be for the input element to be rendered as though its type value were text—and you think it’s ludicrous to expect users with non-supporting browsers to enter latitude and longitude coordinates by hand, I direct your attention to input type="color": in non-supporting browsers, it’s rendered as input type="text" and users are expected to enter colour values by hand.)

Geolocation is an interesting use case because it only works on HTTPS. There are quite a few JavaScript APIs that quite rightly require a secure context—like service workers—but I can’t think of a single HTML element or attribute that requires HTTPS (although that will soon change if we don’t act to stop plans to create a two-tier web). But that can’t have been the thinking behind geolocation being JavaScript only; when geolocation first shipped, it was available over HTTP connections too.

Anyway, that’s just one example. Like I said, it’s not that I’m in favour of declarative solutions instead of imperative ones; I strongly favour the choice offered by providing declarative solutions as well as imperative ones.

In recent years there’s been a push to expose low-level browser features to developers. They’re inevitably exposed as JavaScript APIs. In most cases, that makes total sense. I can’t really imagine a declarative way of accessing the fetch or cache APIs, for example. But I think we should be careful that it doesn’t become the only way of exposing new browser features. I think that, wherever possible, the design pattern of exposing new features declaratively and imperatively offers the best of both worlds—ease of use for the simple use cases, and power for the more complex use cases.

Previously, it was up to browser makers to think about these things. But now, with the advent of web components, we developers are gaining that same level of power and responsibility. So if you’re making a web component that you’re hoping other people will also use, maybe it’s worth keeping this design pattern in mind: allow authors to configure the functionality of the component using HTML attributes and JavaScript methods.
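
For example, a hypothetical component might expose the same option both ways—a declarative attribute and an imperative method that stay in sync:

class ToggleSection extends HTMLElement {
  static get observedAttributes() { return ['open']; }
  // Declarative: <toggle-section open>
  attributeChangedCallback(name, oldValue, newValue) {
    if (name === 'open') {
      this.hidden = (newValue === null);
    }
  }
  // Imperative: element.toggle(true)
  toggle(isOpen) {
    if (isOpen) {
      this.setAttribute('open', '');
    } else {
      this.removeAttribute('open');
    }
  }
}
customElements.define('toggle-section', ToggleSection);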

Robustness and least power

There’s a great article by Steven Garrity over on A List Apart called Design with Difficult Data. It runs through the advantages of using unusual content to stress-test interfaces, referencing Postel’s Law, AKA the robustness principle:

Be conservative in what you send, be liberal in what you accept.

Even though the robustness principle was formulated for packet-switching, I see it at work in all sorts of disciplines, including design. A good example is in best practices for designing forms:

Every field you ask users to fill out requires some effort. The more effort is needed to fill out a form, the less likely users will complete the form. That’s why the foundational rule of form design is shorter is better — get rid of all inessential fields.

In other words, be conservative in the number of form fields you send to users. But then, when it comes to users filling in those fields:

It’s very common for a few variations of an answer to a question to be possible; for example, when a form asks users to provide information about their state, and a user responds by typing their state’s abbreviation instead of the full name (for example, CA instead of California). The form should accept both formats, and it’s the developer job to convert the data into a consistent format.

In other words, be liberal in what you accept from users.

I find the robustness principle to be an immensely powerful way of figuring out how to approach many design problems. When it comes to figuring out what specific tools or technologies to use, there’s an equally useful principle: the rule of least power:

Choose the least powerful language suitable for a given purpose.

On the face of it, this sounds counter-intuitive; why forego a powerful technology in favour of something less powerful?

Well, power comes with a price. Powerful technologies tend to be more complex, which means they can be trickier to use and trickier to swap out later.

Take the front-end stack, for example: HTML, CSS, and JavaScript. HTML and CSS are declarative, so you don’t get as much precise control as you get with an imperative language like JavaScript. But JavaScript comes with a steeper learning curve and a stricter error-handling model than HTML or CSS.

As a general rule, it’s always worth asking if you can accomplish something with a less powerful technology:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

  • Instead of using JavaScript to do animation, see if you can do it in CSS instead.
  • Instead of using JavaScript to do simple client-side form validation, try to use HTML input types and attributes like required.
  • Instead of using ARIA to give a certain role value to a div or span, try to use a more suitable HTML element instead.
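
To make the first of those suggestions concrete, here’s a minimal example of moving an animation lower in the stack:

/* No JavaScript required: the browser handles the animation */
button {
  transition: transform 0.2s ease-out;
}
button:hover,
button:focus {
  transform: scale(1.05);
}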

It sounds a lot like the KISS principle: Keep It Simple, Stupid. But whereas the KISS principle can be applied within a specific technology—like keeping your CSS manageable—the rule of least power is all about evaluating technology; choosing the most appropriate technology for the task at hand.

There are some associated principles, like YAGNI: You Ain’t Gonna Need It. That helps you avoid picking a technology that’s too powerful for your current needs, but which might be suitable in the future: premature optimisation. Or, as Rachel put it, stop solving problems you don’t yet have:

So make sure every bit of code added to your project is there for a reason you can explain, not just because it is part of some standard toolkit or boilerplate.

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

After all, if they’re good enough for Tim Berners-Lee…

Components and concerns

We tend to like false dichotomies in the world of web design and web development. I’ve noticed one recently that keeps coming up in the realm of design systems and components.

It’s about separation of concerns. The web has a long history of separating structure, presentation, and behaviour through HTML, CSS, and JavaScript. It has served us very well. If you build in that order, ensuring that something works (to some extent) before adding the next layer, the result will be robust and resilient.

But in this age of components, many people are pointing out that it makes sense to separate things according to their function. Here’s Diana Mounter in her excellent article about design systems at GitHub:

Rather than separating concerns by languages (such as HTML, CSS, and JavaScript), we’re working towards a model of separating concerns at the component level.

This echoes a point made previously in a slidedeck by Cristiano Rastelli.

Separating interfaces according to the purpose of each component makes total sense …but that doesn’t mean we have to stop separating structure, presentation, and behaviour! Why not do both?

There’s nothing in the “traditional” separation of concerns on the web (HTML/CSS/JavaScript) that restricts it only to pages. In fact, I would say it works best when it’s applied on smaller scales.

In her article, Pattern Library First: An Approach For Managing CSS, Rachel advises starting every component with good markup:

Your starting point should always be well-structured markup.

This ensures that your content is accessible at a very basic level, but it also means you can take advantage of normal flow.

That’s basically an application of the rule of least power.

In chapter 6 of Resilient Web Design, I outline the three-step process I use to build on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

That chapter is filled with examples of applying those steps at the level of an entire site or product, but it doesn’t need to end there:

We can apply the three‐step process at the scale of individual components within a page. “What is the core functionality of this component? How can I make that functionality available using the simplest possible technology? Now how can I enhance it?”

There’s another shared benefit to separating concerns when building pages and building components. In the case of pages, asking “what is the core functionality?” will help you come up with a good URL. With components, asking “what is the core functionality?” will help you come up with a good name …something that’s at the heart of a good design system. In her brilliant Design Systems book, Alla advocates asking “what is its purpose?” in order to get a good shared language for components.

My point is this:

  • Separating structure, presentation, and behaviour is a good idea.
  • Separating an interface into components is a good idea.

Those two good ideas are not in conflict. Presenting them as though they were binary choices is like saying “I used to eat Italian food, but now I drink Italian wine.” They work best when they’re done in combination.

Registering service workers

In chapter two of Going Offline, I talk about registering your service worker wrapped up in some feature detection:

<script>
if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>

But I also make reference to a declarative way of doing this that isn’t very widely supported:

<link rel="serviceworker" href="/serviceworker.js">

No need for feature detection there. Thanks to the liberal error-handling model of HTML (and CSS), browsers will just ignore what they don’t understand, which isn’t the case with JavaScript.

Alas, it looks like that nice declarative alternative isn’t going to be making its way into browsers anytime soon. It has been removed from the HTML spec. That’s a shame. I have a preference for declarative solutions where possible—they’re certainly easier to teach. But in this case, the JavaScript alternative isn’t too onerous.

So if you’re reading Going Offline, when you get to the bit about someday using the rel value, you can cast a wistful gaze into the distance, or shed a tiny tear for what might have been …and then put it out of your mind and carry on reading.

Timing

Apple Inc. is my accidental marketing department.

On April 29th, 2010, Steve Jobs published his infamous Thoughts on Flash. It thrust the thitherto geek phrase “HTML5” into the mainstream press:

HTML5, the new web standard that has been adopted by Apple, Google and many others, lets web developers create advanced graphics, typography, animations and transitions without relying on third party browser plug-ins (like Flash). HTML5 is completely open and controlled by a standards committee, of which Apple is a member.

Five days later, I announced the first title from A Book Apart: HTML5 For Web Designers. The timing was purely coincidental, but it definitely didn’t hurt that book’s circulation.

Fast forward eight years…

On March 29th, 2018, Apple released the latest version of iOS. Unmentioned in the press release, this update added service worker support to Mobile Safari.

Five days later, I announced the 26th title from A Book Apart: Going Offline.

For a while now, quite a few people have cited Apple’s lack of support as a reason why they weren’t investigating service workers. That excuse no longer holds water.

Once again, the timing is purely coincidental. But it can’t hurt.

Ubiquity and consistency

I keep thinking about this post from Baldur Bjarnason, Over-engineering is under-engineering. It took me a while to get my head around what he was saying, but now that (I think) I understand it, I find it to be very astute.

Let’s take a single interface element, say, a dropdown menu. This is the example Laura uses in her article for 24 Ways called Accessibility Through Semantic HTML. You’ve got two choices, broadly speaking:

  1. Use the HTML select element.
  2. Create your own dropdown widget using JavaScript (working with divs and spans).

The advantage of the first choice is that it’s lightweight, it works everywhere, and the browser does all the hard work for you.

But…

You don’t get complete control. Because the browser is doing the heavy lifting, you can’t craft the details of the dropdown to look identical on different browser/OS combinations.

That’s where the second option comes in. By scripting your own dropdown, you get complete control over the appearance and behaviour of the widget. The disadvantage is that, because you’re now doing the browser’s job, all the work falls to you—that means lots of JavaScript, thinking about edge cases, and making the whole thing accessible.

This is the point that Baldur makes: no matter how much you over-engineer your own custom solution, there’ll always be something that falls between the cracks. So, ironically, the over-engineered solution—when compared to the simple under-engineered native browser solution—ends up being under-engineered.

Is it worth it? Rian Rietveld asks:

It is impossible to style select option. But is that really necessary? Is it worth abandoning the native browser behavior for a complete rewrite in JavaScript of the functionality?

The answer, as ever, is it depends. It depends on your priorities. If your priority is having consistent control over the details, then foregoing native browser functionality in favour of scripting everything yourself aligns with your goals.

But I’m reminded of something that Eric often says:

The web does not value consistency. The web values ubiquity.

Ubiquity; universality; accessibility—however you want to label it, it’s what lies at the heart of the World Wide Web. It’s the idea that anyone should be able to access a resource, regardless of technical or personal constraints. It’s an admirable goal, and what’s even more admirable is that the web succeeds in this goal! But sometimes something’s gotta give, and that something is control. Rian again:

The days that a website must be pixel perfect and must look the same in every browser are over. There are so many devices these days, that an identical design for all is not doable. Or we must take a huge effort for custom form elements design.

So far I’ve only been looking at the micro scale of a single interface element, but this tension between ubiquity and consistency plays out at larger scales too. Take page navigations. That’s literally what browsers do. Click on a link, and the browser fetches that URL, displaying progress as it goes. The alternative, as exemplified by single page apps, is to do all of that for yourself using JavaScript: figure out the routing, show some kind of progress, load some JSON, parse it, convert it into HTML, and update the DOM.

Personally, I tend to go for the first option. Partly that’s because I like to apply the rule of least power, but mostly it’s because I’m very lazy (I also have qualms about sending a whole lotta JavaScript down the wire just so the end user gets to do something that their browser would do for them anyway). But I get it. I understand why others might wish for greater control, even if it comes with a price tag of fragility.

I think Jake’s navigation transitions proposal is fascinating. What if there were a browser-native way to get more control over how page navigations happen? I reckon that would cover the justification of 90% of single page apps.

That’s a great way of examining these kinds of decisions and questioning how this tension could be resolved. If people are frustrated by the lack of control in browser-native navigations, let’s figure out a way to give them more control. If people are frustrated by the lack of styling for select elements, maybe we should figure out a way of giving them more control over styling.

Hang on though. I feel like I’ve painted a divisive picture, like you have to make a choice between ubiquity or consistency. But the rather wonderful truth is that, on the web, you can have your cake and eat it. That’s what I was getting at with the three-step approach I describe in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Like, say…

  1. The user needs to select an item from a list of options.
  2. Use a select element.
  3. Use JavaScript to replace that native element with a widget of your own devising.

Or…

  1. The user needs to navigate to another page.
  2. Use an a element with an href attribute.
  3. Use JavaScript to intercept that click, add a nice transition, and pull in the content using Ajax.
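
A bare-bones sketch of that third step (the link selector and the main element are assumptions about the page in question):

document.querySelectorAll('a[href^="/"]').forEach(link => {
  link.addEventListener('click', event => {
    event.preventDefault();
    fetch(link.href)
      .then(response => response.text())
      .then(html => {
        const doc = new DOMParser().parseFromString(html, 'text/html');
        document.querySelector('main').replaceWith(doc.querySelector('main'));
        history.pushState(null, '', link.href);
        // A nice transition could be layered on here
      });
  });
});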

The pushback I get from people in the control/consistency camp is that this sounds like more work. It kinda is. But honestly, in my experience, it’s not that much more work. Also, and I realise I’m contradicting the part where I said I’m lazy, but that’s why it’s called work. This is our job. It’s not about what we prefer; it’s about serving the needs of the people who use what we build.

Anyway, if I were to rephrase my three-step process in terms of under-engineering and over-engineering, it might look something like this:

  1. Start with user needs.
  2. Build an under-engineered solution—one that might not offer you much control, but that works for everyone.
  3. Layer on a more over-engineered solution—one that might not work for everyone, but that offers you more control.

Ubiquity, then consistency.

Problem space

Adam Wathan wrote an article recently called CSS Utility Classes and “Separation of Concerns”. In it, he documents his journey through different ways of thinking about CSS. A lot of it is really familiar.

Phase 1: “Semantic” CSS

Ah, yes! If you’ve been in the game for a while then this will be familiar to you. The days when we used to strive to keep our class names to a minimum and use names that described the content. But, as Adam points out:

My markup wasn’t concerned with styling decisions, but my CSS was very concerned with my markup structure.

Phase 2: Decoupling styles from structure

This is the work pioneered by Nicole with OOCSS, and followed later by methodologies like BEM and SMACSS.

This felt like a huge improvement to me. My markup was still “semantic” and didn’t contain any styling decisions, and now my CSS felt decoupled from my markup structure, with the added bonus of avoiding unnecessary selector specificity.

Amen!

But then Adam talks about the issues when you have two visually similar components that are semantically very different. He shows a few possible solutions and asks this excellent question:

For the project you’re working on, what would be more valuable: restyleable HTML, or reusable CSS?

For many projects reusable CSS is the goal. But not all projects. On the Code For America project, the HTML needed to be as clean as possible, even if that meant more brittle CSS.

Phase 3: Content-agnostic CSS components

Naming things is hard:

The more a component does, or the more specific a component is, the harder it is to reuse.

Adam offers some good advice on naming things for maximum reusability. It’s all good stuff, and this would be the point at which I would stop. At this point there’s a nice balance between reusability, readability, and semantic meaning.

But Adam goes further…

Phase 4: Content-agnostic components + utility classes

Okay. The occasional utility class (for alignment and clearing) can be very handy. This is definitely the point to stop though, right?

Phase 5: Utility-first CSS

Oh God, no!

Once this clicked for me, it wasn’t long before I had built out a whole suite of utility classes for common visual tweaks I needed, things like:

  • Text sizes, colors, and weights
  • Border colors, widths, and positions
  • Background colors
  • Flexbox utilities
  • Padding and margin helpers

If one drink feels good, then ten drinks must be better, right?

At this point there is no benefit to even having an external stylesheet. You may as well use inline styles. Ah, but Adam has anticipated this and counters with this difference between inline styles and having utility classes for everything:

You can’t just pick any value you want; you have to choose from a curated list.

Right. But that isn’t a technical solution, it’s a cultural one. You could just as easily have a curated list of allowed inline style properties and values. If you are in an environment where people won’t simply create a new utility class every time they want to style something, then you are also in an environment where people won’t create new inline style combinations every time they want to style something.
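
To illustrate (with invented class names and values), these two lines encode exactly the same styling decisions; whether those values come from a curated list is a matter of team discipline in both cases:

<!-- utility classes drawn from a curated list -->
<p class="text-sm text-grey mt-2">…</p>

<!-- the same decisions as inline styles, drawn from a curated list of allowed values -->
<p style="font-size: 0.875rem; color: grey; margin-top: 0.5rem">…</p>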

I think Adam has hit on something important here, but it’s not about utility classes. His suggestion of “utility-first CSS” will only work if the vocabulary is strictly adhered to. For that to work, everyone touching the code needs to understand the system and respect the boundaries of it. That understanding and respect is far, far more important than any particular way of structuring HTML and CSS. No technical solution can replace that sort of agreement …not even slapping !important on every declaration to make them immutable.

I very much appreciate the efforts that people have put into coming up with great naming systems and methodologies, even the ones I don’t necessarily agree with. They’re all aiming to make that overlap of HTML and CSS less painful. But the really hard problem is where people overlap.

Teaching in Porto, day one

Today was the first day of the week long “masterclass” I’m leading here at The New Digital School in Porto.

When I was putting together my stab-in-the-dark attempt to provide an outline for the week, I labelled day one as “How the web works” and gave this synopsis:

The internet and the web; how browsers work; a history of visual design on the web; the evolution of HTML and CSS.

There ended up being less about the history of visual design and CSS (we’ll cover that tomorrow) and more about the infrastructure that the web sits upon. Before diving into the way the web works, I thought it would be good to talk about how the internet works, which led me back to the history of communication networks in general. So the day started from cave drawings and smoke signals, leading to trade networks, then the postal system, before getting to the telegraph, and then telephone networks, the ARPANET, and eventually the internet. By lunch time we had just arrived at the birth of the World Wide Web at CERN.

It wasn’t all talk though. To demonstrate a hub-and-spoke network architecture I had everyone write down someone else’s name on a post-it note, then stand in a circle around me, and pass me (the hub) those messages to relay to their intended receiver. Later we repeated this exercise but with a packet-switching model: everyone could pass a note to their left or to their right. The hub-and-spoke system took almost a minute to relay all six messages; the packet-switching version took less than 10 seconds.

Over the course of the day, three different laws came up that were relevant to the history of the internet and the web:

Metcalfe’s Law
The value of a network is proportional to the square of the number of users.
Postel’s Law
Be conservative in what you send, be liberal in what you accept.
Sturgeon’s Law
Ninety percent of everything is crap.

There were also references to the giants of hypertext: Ted Nelson, Vannevar Bush, and Douglas Engelbart—for a while, I had the mother of all demos playing silently in the background.

After a most-excellent lunch in a nearby local restaurant (where I can highly recommend the tripe), we started on the building blocks of the web: HTTP, URLs, and HTML. I pulled up the first ever web page so that we could examine its markup and dive into the wonder of the A element. That led us to the first version of HTML, which gave us enough vocabulary to start marking up documents: p, h1-h6, ol, ul, li, and a few others. We went around the room looking at posters and other documents pinned to the wall, and started marking them up by slapping on post-it notes with opening and closing tags on them.

At this point we had covered the anatomy of an HTML element (opening tags, closing tags, attribute names and attribute values) as well as some of the history of HTML’s expanding vocabulary, including elements added in HTML5 like section, article, and nav. But so far everything was to do with marking up static content in a document. Stepping back a bit, we returned to HTTP, and talked about the difference between GET and POST requests. That led into ways of sending data to a server, which led to form fields and the many types of inputs at our disposal: text, password, radio, checkbox, email, url, tel, datetime, color, range, and more.
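
By way of illustration, here’s a sketch of the kind of markup the day built up to: the early document vocabulary plus a form using a few of those input types (the URLs and field names are made up):

<h1>Sign up</h1>
<p>Read <a href="/about">about the event</a> before registering.</p>
<form action="/register" method="post">
 <label for="name">Name</label>
 <input type="text" id="name" name="name">
 <label for="email">Email</label>
 <input type="email" id="email" name="email">
 <label for="colour">Favourite colour</label>
 <input type="color" id="colour" name="colour">
 <button type="submit">Register</button>
</form>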

With that, the day drew to a close. I feel pretty good about what we covered. There was a lot of groundwork, and plenty of history, but also plenty of practical information about how browsers interpret HTML.

With the structural building blocks of the web in place, tomorrow is going to focus more on the design side of things.

Starting out

I had a really enjoyable time at Codebar Brighton last week, not least because Morty came along.

I particularly enjoy teaching people who have zero previous experience of making a web page. There’s something about explaining HTML and CSS from first principles that appeals to me. I especially love it when people ask lots of questions. “What does this element do?”, “Why do some elements have closing tags and others don’t?”, “Why is it textarea and not input type="textarea"?” The answer usually involves me going down a rabbit-hole of web archeology, so I’m in my happy place.

But there’s only so much time at Codebar each week, so it’s nice to be able to point people to other resources that they can peruse at their leisure. It turns out that it’s actually kind of tricky to find resources at that level. There are lots of great articles and tutorials out there for professional web developers—Smashing Magazine, A List Apart, CSS Tricks, etc.—but not so much for complete beginners.

Here are some of the resources I’ve found:

  • MarkSheet by Jeremy Thomas is a free HTML and CSS tutorial. It starts with an explanation of the internet, then the World Wide Web, and then web browsers, before diving into HTML syntax. Jeremy is the same guy who recently made CSS Reference.
  • Learn to Code HTML & CSS by Shay Howe is another free online book. You can buy a paper copy too. It’s filled with good, clear explanations.
  • Zero to Hero Coding by Vera Deák is an ongoing series. She’s starting out on her career as a front-end developer, so her perspective is particularly valuable.

If I find any more handy resources, I’ll link to them and tag them with “learning”.

Adoption

Tom wrote a post on Ev’s blog a while back called JavaScript Frameworks: Distribution Channels for Good Ideas (I’ve been hoping he’d publish it on his own site so I’d have a more permanent URL to point to, but so far, no joy). It’s well worth a read.

I don’t really have much of an opinion on his central point that browser makers should work more closely with framework makers. I’m not so sure I agree with the central premise that frameworks are going to be around for the long haul. I think good frameworks—like jQuery—should aim to make themselves redundant.

But anyway, along the way, Tom makes this observation:

Google has an institutional tendency to go it alone.

JavaScript not good enough? Let’s create Dart to replace it. HTML not good enough? Let’s create AMP to replace it. I’m just waiting for them to announce Google Style Sheets.

I don’t really mind these inventions. We’re not forced to adopt them, and generally, we don’t. Tom again:

They poured enormous time and money into Dart, even building an entire IDE, without much to show for it. Contrast Dart’s adoption with the adoption of TypeScript and Flow, which layer improvements on top of JavaScript instead of trying to replace it.

See, that’s a really, really good point. It’s so much easier to get people to adjust their behaviour than to change it completely.

Sass is a really good example of this. You can take any .css file, save it as a .scss file, and now you’re using Sass. Then you can start using features (or not) as needed. Very smart.
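
A quick sketch of that gentle on-ramp; the first rule is plain CSS (which is also valid SCSS), and the second opts in to Sass features, a variable and nesting, only where they’re wanted:

/* styles.scss: any existing CSS is already valid SCSS */
a { color: #b4d455; }

/* …and Sass features can be layered on when you’re ready */
$brand: #b4d455;
nav {
 a { color: $brand; }
}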

Incidentally, I’m very curious to know how many people use the scss syntax (which is the same as CSS) compared to how many people use the sass indented syntax (the one with significant whitespace). In his brilliant Sass for Web Designers book, I don’t think Dan even mentioned the indented syntax.

Or compare the adoption of Sass to the adoption of HAML. Now, admittedly, the disparity there might be because Sass adds new features, whereas HAML is a purely stylistic choice. But I think the more fundamental difference is that Sass—with its scss syntax—only requires you to slightly adjust your behaviour, whereas something like HAML requires you to go all in right from the start.

This is something that has been on my mind lately while I’ve been preparing my new talk on evaluating technology (the talk went down very well at An Event Apart San Francisco, by the way—that’s a relief). In the talk, I made a reference to one of Grace Hopper’s famous quotes:

Humans are allergic to change.

Now, Grace Hopper subsequently says:

I try to fight that.

I contrast that with the approach that Tim Berners-Lee and Robert Cailliau took with their World Wide Web project. The individual pieces were built on what people were already familiar with. URLs use slashes so they’d feel similar to UNIX file paths. And the first fledgling version of HTML took its vocabulary almost wholesale from a version of SGML already in use at CERN. In fact, you could pretty much take an existing CERN SGML file and open it as an HTML file in a web browser.

Oh, and that browser would ignore any tags it didn’t understand—behaviour that, in my opinion, would prove crucial to the growth and success of HTML. Because of its familiarity, its simplicity, and its forgiving error handling, HTML turned out to be more successful than Tim Berners-Lee expected, as he wrote in his book Weaving The Web:

I expected HTML to be the basic waft and weft of the Web but documents of all types: video, computer aided design, sound, animation and executable programs to be the colored threads that would contain much of the content. It would turn out that HTML would become amazingly popular for the content as well.

HTML and SGML; Sass and CSS; TypeScript and JavaScript. The new technology builds on top of the existing technology instead of wiping the slate clean and starting from scratch.

Humans are allergic to change. And that’s okay.

Marking up help text in forms

Zoe asked a question on Twitter recently about how to mark up help text in forms.

‘Sfunny—I had been pondering this exact question. In fact, I threw a CodePen together a couple of weeks ago.

Visually, both examples look the same; there’s a label, then a form field, then some extra text (in this case, a validation message).

The first example puts the validation message in an em element inside the label text itself, so I know it won’t be missed by a screen reader—I think I first learned this technique from Derek many years ago.

<div class="first error example">
 <label for="firstemail">Email
  <em class="message">must include the @ symbol</em>
 </label>
 <input type="email" id="firstemail" placeholder="e.g. you@example.com">
</div>

The second example puts the validation message after the form field, but uses aria-describedby to explicitly associate that message with the form field—this means the message should be read after the form field.

<div class="second error example">
 <label for="secondemail">Email</label>
 <input type="email" id="secondemail" placeholder="e.g. you@example.com" aria-describedby="seconderror">
 <em class="message" id="seconderror">must include the @ symbol</em>
</div>

In both cases, the validation message won’t be missed by screen readers, although there’s a slight difference in the order in which things get read out. In the first example we get:

  1. Label text,
  2. Validation message,
  3. Form field.

And in the second example we get:

  1. Label text,
  2. Form field,
  3. Validation message.

In this particular example, the ordering in the second example more closely matches the visual representation, although I’m not sure how much of a factor that should be in choosing between the options.

Anyway, I was wondering whether one of these two options is “better” or “worse” than the other. I suspect that there isn’t a hard and fast answer.

Extensible web components

Adam Onishi has written up his thoughts on web components and progressive enhancement, following on from a discussion we were having on Slack. He shares a lot of the same frustrations as I do.

Two years ago, I said:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

I still feel that way. In theory, web components are very exciting. In practice, web components are very worrying. The worrying aspect comes from the treatment of backwards compatibility.

It all comes down to the way custom elements work. When you make up a custom element, it’s basically a span.

<fancy-select></fancy-select>

Then, using JavaScript with shadow DOM, templates, and the other specs that together make up the web components ecosystem, you turn that inert span-like element into something all-singing, all-dancing. That’s great if the browser supports those technologies, and the JavaScript executes successfully. But if either of those conditions aren’t met, what you’re left with is basically a span.

One of the proposed ways around this was to allow custom elements to extend existing elements (not just spans). The proposed syntax for this was an is attribute.

<select is="fancy-select">...</select>

Browser makers responded to this by saying “Nah, that’s too hard.”

To be honest, I had pretty much given up on the is functionality ever seeing the light of day, but Monica has rekindled my hope:

Still, I’m not holding my breath for this kind of declarative extensibility landing in browsers any time soon. Instead, a JavaScript-based way of extending existing elements is currently the only way of piggybacking on all the accessible behavioural goodies you get with native elements.

class FancySelect extends HTMLSelectElement
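
Fleshing that out, here’s a minimal sketch of how such a class gets defined and registered as a customized built-in element (the enhancement itself is a placeholder, and browser support for this part of the spec varies):

class FancySelect extends HTMLSelectElement {
  connectedCallback() {
    // enhance the native select; its built-in behaviour and
    // accessibility remain intact underneath
    this.classList.add('fancy');
  }
}
customElements.define('fancy-select', FancySelect, { extends: 'select' });

Under the current spec, that definition is applied in markup as <select is="fancy-select">, so even the imperative approach ends up leaning on the is attribute.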

But this imperative approach fails completely if custom elements aren’t supported, or if the JavaScript fails to execute. Now you’re back to having spans.

The presentation on web components at the Progressive Web Apps Dev Summit referred to this JavaScript-based extensibility as “progressively enhancing what’s already available”, which is a bit of a stretch, given how completely it falls apart in older browsers. It was kind of a weird talk, to be honest. After fifteen minutes of talking about creating elements entirely from scratch, there was a minute or two devoted to the is attribute and extending existing elements …before carrying on as though those two minutes never happened.

But even without any means of extending existing elements, it should still be possible to define custom elements that have some kind of fallback in non-supporting browsers:

<fancy-select>
 <select>...</select>
</fancy-select>

In that situation, you at least get a regular ol’ select element in older browsers (or in modern browsers before the JavaScript kicks in and uplifts the custom element).
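
Here’s a sketch of how that uplift might work; the element name comes from the example above, but the class name and the enhancement itself are placeholders:

class FancySelectEnhancer extends HTMLElement {
  connectedCallback() {
    const select = this.querySelector('select');
    if (!select) return;
    // swap in or augment the fancy widget here;
    // if this code never runs, the native select still works
    select.classList.add('enhanced');
  }
}
if ('customElements' in window) {
  customElements.define('fancy-select', FancySelectEnhancer);
}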

Adam has a great example of this in his post:

I’ve been thinking of a gallery component lately, where you’d have a custom element, say <o-gallery> for want of a better example, and simply populate it with images you want to display, with custom elements and shadow DOM you can add all the rest, controls/layout etc. Markup would be something like:

<o-gallery>
 <img src="">
 <img src="">
 <img src="">
</o-gallery>

If none of the extra stuff loads, what do we get? Well you get 3 images on the page. You still get the content, but just none of the fancy interactivity.

Yes! This, in my opinion, is how we should be approaching the design of web components. This is what gets me excited about web components.

Then I look at pretty much all the examples of web components out there and my nervousness kicks in. Hardly any of them spare a thought for backwards-compatibility. Take a look, for example, at the entire contents of the body element for the Polymer Shop demo site:

<shop-app unresolved="">SHOP</shop-app>

This seems really odd to me, because I don’t think it’s a good way to “sell” a technology.

Compare service workers to web components.

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Service workers work well and fail well. If a browser supports service workers, the user gets all the benefits. If a browser doesn’t support service workers, the user gets the same experience they would have always had.

Web components (will) work well, but fail badly. If a browser supports web components, the user gets the experience that the developer has crafted using these new technologies. If a browser doesn’t support web components, the user gets …probably nothing. It depends on how the web components have been designed.

It’s so much easier to get excited about implementing service workers. You’ve literally got nothing to lose and everything to gain. That’s not the case with web components. Or at least not with the way they are currently being sold.

See, this is why I think it’s so important to put some effort into designing web components that have some kind of fallback. Those web components will work well and fail well.

Look at the way new elements are designed for HTML. Think of complex additions like canvas, audio, video, and picture. Each one has been designed with backwards-compatibility in mind—there’s always a way to provide fallback content.

Web components give us developers the same power that, up until now, only belonged to browser makers. Web components also give us developers the same responsibilities as browser makers. We should take that responsibility seriously.

Web components are supposed to be the poster child for The Extensible Web Manifesto. I’m all for an extensible web. But the way that web components are currently being built looks more like an endorsement of The Replaceable Web Manifesto. I’m not okay with a replaceable web.

Here’s hoping that my concerns won’t be dismissed as “piffle and tosh” again by the very people who should be thinking about these issues.

Unlabelled search fields

Adam Silver is writing a book on forms—you may be familiar with his previous book on maintainable CSS. In a recent article (that for some reason isn’t on his blog), he looks at markup patterns for search forms and advocates that we should always use a label. I agree. But for some reason, we keep getting handed designs that show unlabelled search forms. And no, a placeholder is not a label.

I had a discussion with Mark about this the other day. The form he was marking up didn’t have a label, but it did have a button with some text that would work as a label:

<input type="search" placeholder="…">
<button type="submit">
Search
</button>

He was wondering if there was a way of using the button’s text as the label. I think there is. Using aria-labelledby like this, the button’s text should be read out before the input field:

<input aria-labelledby="searchtext" type="search" placeholder="…">
<button type="submit" id="searchtext">
Search
</button>

Notice that I say “think” and “should.” It’s one thing to figure out a theoretical solution, but only testing will show whether it actually works.

The W3C’s WAI tutorial on labelling content gives an example that uses aria-label instead:

<input type="text" name="search" aria-label="Search">
<button type="submit">Search</button>

It seems a bit of a shame to me that the label text is duplicated in the button and in the aria-label attribute (and being squirrelled away in an attribute, it runs the risk of metacrap rot). But they know what they’re talking about so there may well be very good reasons to prefer duplicating the value with aria-label rather than pointing to the value with aria-labelledby.

I thought it would be interesting to see how other sites are approaching this pattern—unlabelled search forms are all too common. All the markup examples here have been simplified a bit, removing class attributes and the like…

The BBC’s search form does actually have a label:

<label for="orb-search-q">
Search the BBC
</label>
<input id="orb-search-q" placeholder="Search" type="text">
<button>Search the BBC</button>

But that label is then hidden using CSS:

position: absolute;
height: 1px;
width: 1px;
overflow: hidden;
clip: rect(1px, 1px, 1px, 1px);

That CSS—as pioneered by Snook—ensures that the label is visually hidden but remains accessible to assistive technology. Using something like display: none would hide the label for everyone.

Medium wraps the input (and icon) in a label and then gives the label a title attribute. Like aria-label, a title attribute should be read out by screen readers, but it has the added advantage of also being visible as a tooltip on hover:

<label title="Search Medium">
  <span class="svgIcon"><svg></svg></span>
  <input type="search">
</label>

This is also what Google does on what must be the most visited search form on the web. But the W3C’s WAI tutorial warns against using the title attribute like this:

This approach is generally less reliable and not recommended because some screen readers and assistive technologies do not interpret the title attribute as a replacement for the label element, possibly because the title attribute is often used to provide non-essential information.

Twitter follows the BBC’s pattern of having a label but visually hiding it. They also have some descriptive text for the icon, and that text gets visually hidden too:

<label class="visuallyhidden" for="search-query">Search query</label>
<input id="search-query" placeholder="Search Twitter" type="text">
<span class="search-icon">
  <button type="submit" class="Icon" tabindex="-1">
    <span class="visuallyhidden">Search Twitter</span>
  </button>
</span>

Here’s their CSS for hiding those bits of text—it’s very similar to the BBC’s:

.visuallyhidden {
  border: 0;
  clip: rect(0 0 0 0);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  width: 1px;
}

That’s exactly the CSS recommended in the W3C’s WAI tutorial.

Flickr have gone with the aria-label pattern as recommended in that W3C WAI tutorial:

<input placeholder="Photos, people, or groups" aria-label="Search" type="text">
<input type="submit" value="Search">

Interestingly, neither Twitter nor Flickr is using type="search" on the input elements. I’m guessing this is probably because of frustrations with trying to undo the default styles that some browsers apply to input type="search" fields. Seems a shame though.

Instagram also doesn’t use type="search" and makes no attempt to expose any kind of accessible label:

<input type="text" placeholder="Search">
<span class="coreSpriteSearchIcon"></span>

Same with Tumblr:

<input tabindex="1" type="text" name="q" id="search_query" placeholder="Search Tumblr" autocomplete="off" required="required">

…although the search form itself does have role="search" applied to it. Perhaps that helps to mitigate the lack of a clear label?
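
For reference, that attribute sits on the form element itself (simplified here):

<form role="search">
 …
</form>

The role exposes the form as a search landmark to assistive technology, although it doesn’t label the input itself.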

After that whistle-stop tour of a few of the web’s unlabelled search forms, it looks like the options are:

  • a visually-hidden label element,
  • an aria-label attribute,
  • a title attribute, or
  • associate some text using aria-labelledby.

But that last one needs some testing.

Update: Emil did some testing. Looks like all screen-reader/browser combinations will read the associated text.

Talking about hypertext

#CSSday starts off with a great history lesson of our industry by @adactio

I’ve just published a transcript of the talk I gave at the HTML Special that preceded CSS Day a couple of weeks back. I’ve also recorded an audio version for your huffduffing pleasure.

It’s not like the usual talks I give. The subject matter was assigned to me, Mission Impossible style. PPK wanted each speaker to give an entire talk on just one HTML element. He offered me the best element of them all: the A element.

There were a few different directions I could’ve taken it. I could’ve tried to make it practical, but I quickly dismissed that idea. Instead I went in the completely opposite direction, making it as pretentious as possible. I figured a talk about hypertext could afford to be winding and circuitous, building on some of the ideas I wrote about in my piece for The Manual a few years back. It’s quite self-indulgent of me, but I used it as an opportunity to geek out about some of my favourite things; from Borges, Babbage, and Bletchley to Leibniz, Lovelace, and Licklider.

I wouldn’t usually write out an entire talk word-for-word in advance, but somehow it felt right for this one. In fact, my talk preparation this time ‘round was very similar to the process Charlotte recently wrote about:

  1. Get everything out of my head and onto a mind map.
  2. Write chunks of content in short bursts—this was when I was buddying up with Paul.
  3. Put together a slide deck of visuals to support the narrative.
  4. Practice delivering the talk so I don’t look like I’m just reading off a screen.

It takes me a long time to prepare talks. As the deadline for this one approached, I was getting quite panicked. It was touch and go there for a while, but I managed to get it done in time.

I’m pleased with how it turned out. On the day, I had fun delivering it. People seemed to like it too, which was gratifying.

Although with this kind of talk, it was inevitable that I wouldn’t be able to please everyone.

I guess this talk was a one-off affair. That said, if you’re putting on an event and you think this subject matter would be appropriate, let me know. I’d be more than happy to deliver it again.

Amsterdam Brighton Amsterdam

I’m about to have a crazy few days that will see me bouncing between Brighton and Amsterdam.

It starts tomorrow. I’m flying to Amsterdam in the morning and speaking at this Icons event in the afternoon about digital preservation and long-term thinking.

Then, the next morning, I’ll be opening up the inaugural HTML Special, which is a new addition to the CSS Day conference. Each talk on Thursday will cover one HTML element. I am honoured to be speaking about the A element. Here’s the talk description:

The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Enquire within upon everything.

I’ve been working all out to get this talk done and I finally wrapped it up today. Right now, I feel pretty happy with it, but I bet I’ll change that opinion in the next 48 hours. I’m pretty sure that this will be one of those talks that people will either love or hate, kind of like my 2008 dConstruct talk, The System Of The World.

After CSS Day, I’ll be heading back to Brighton on Saturday, June 18th to play a Salter Cane gig in The Greys pub. If you’re around, you should definitely come along—not only is it free, but there will be some excellent support courtesy of Jon London, and Lucas and King.

Then, the next morning, I’ll be speaking at DrupalCamp Brighton, opening up day two of the event. I won’t be able to stick around long afterwards though, because I need to skedaddle to the airport to go back to Amsterdam!

Google are having their Progressive Web App Dev Summit there on Monday and Tuesday. I’ll be moderating a panel on the second day, so I’ll need to pay close attention to all the talks. I’ll be grilling representatives from Google, Samsung, Opera, Microsoft, and Mozilla. Considering my recent rants about some very bad decisions on the part of Google’s Chrome team, it’s very brave of them to ask me to be there, much less moderate a panel in public.

You can still register for the event, by the way. Like the Salter Cane gig, it’s free. But if you can’t make it along, I’d still like to know what you think I should be asking the panelists about.

Got a burning question for browser/device makers? Write it down, post it somewhere on the web with a link back to this post, and then send me a web mention (there’s a form for you to paste in the URL at the bottom of this post).

Conversational interfaces

Psst… Jeremy! Right now you’re getting notified every time something is posted to Slack. That’s great at first, but now that activity is increasing you’ll probably prefer dialing that down.

Slackbot, 2015

What’s happening?

Twitter, 2009

Why does everyone always look at me? I know I’m a chalkboard and that’s my job, I just wish people would ask before staring at me. Sometimes I don’t have anything to say.

Existentialist chalkboard, 2007

I’m Little MOO - the bit of software that will be managing your order with us. It will shortly be sent to Big MOO, our print machine who will print it for you in the next few days. I’ll let you know when it’s done and on it’s way to you.

Little MOO, 2006

It looks like you’re writing a letter.

Clippy, 1997

Your quest is to find the Warlock’s treasure, hidden deep within a dungeon populated with a multitude of terrifying monsters. You will need courage, determination and a fair amount of luck if you are to survive all the traps and battles, and reach your goal — the innermost chambers of the Warlock’s domain.

The Warlock Of Firetop Mountain, 1982

Welcome to Adventure!! Would you like instructions?

Colossal Cave, 1976

I am a lead pencil—the ordinary wooden pencil familiar to all boys and girls and adults who can read and write.

I, Pencil, 1958

ÆLFRED MECH HET GEWYRCAN
Ælfred ordered me to be made

Ashmolean Museum, Oxford

The Ælfred Jewel, ~880

Technical note

I have marked up the protagonist of each conversation using the cite element. There is a long-running dispute over the use of this element. In HTML 4.01 it was perfectly fine to use cite to mark up a person being quoted. In the HTML Living Standard, usage has been narrowed:

The cite element represents the title of a work (e.g. a book, a paper, an essay, a poem, a score, a song, a script, a film, a TV show, a game, a sculpture, a painting, a theatre production, a play, an opera, a musical, an exhibition, a legal case report, a computer program, etc). This can be a work that is being quoted or referenced in detail (i.e. a citation), or it can just be a work that is mentioned in passing.

A person’s name is not the title of a work — even if people call that person a piece of work — and the element must therefore not be used to mark up people’s names.

I disagree.
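
To make that concrete, here’s a sketch of the kind of markup I’m using above, with Clippy’s line as the example:

<blockquote>
 <p>It looks like you’re writing a letter.</p>
</blockquote>
<p><cite>Clippy</cite>, 1997</p>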

In the examples above, it’s pretty clear that I, Pencil and The Warlock Of Firetop Mountain are valid use cases for the cite element according to the HTML5 definition; they are titles of works. But what about Clippy or Little MOO or Slackbot? They’re not people …but they’re not exactly titles of works either.

If I were to mark up a dialogue between Eliza and a human being, should I only mark up Eliza’s remarks with cite? In text transcripts of conversations with Alexa, Siri, or Cortana, should only their side of the conversation get attributed as a source? Or should they also be written without the cite element because it must not be used to mark up people’s names …even though they are not people, according to conventional definition.

It’s downright botist.