Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
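
In essence, the service worker code for that last step could look something like this. It’s a minimal sketch, assuming the push message payload is JSON with a url property (the cache name is a placeholder):

self.addEventListener('push', function (event) {
  // Assumption: the payload is JSON like {"url": "/articles/new-thing/"}
  var data = event.data.json();
  event.waitUntil(
    caches.open('pushed-pages').then(function (cache) {
      // Fetch the new URL and put it in the cache. No notification shown.
      return cache.add(data.url);
    })
  );
});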

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.
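
That bundling is baked right into the Push API: when a page subscribes, it has to promise that every push message will result in something user-visible. Chrome, for instance, rejects subscriptions that say otherwise. A sketch (publicKey is a placeholder for your server’s VAPID key):

navigator.serviceWorker.ready.then(function (registration) {
  return registration.pushManager.subscribe({
    userVisibleOnly: true, // required: setting this to false gets rejected
    applicationServerKey: publicKey
  });
});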

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with unwanted content, eating up the user’s storage. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.
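
For what it’s worth, the proposed API looks something like this. Bear in mind that this is a sketch of a proposal, so the details may well change (the tag name, cache name, and URL are all placeholders):

// In the page: ask for a periodic sync, at most once a day
navigator.serviceWorker.ready.then(function (registration) {
  return registration.periodicSync.register('get-latest', {
    minInterval: 24 * 60 * 60 * 1000
  });
});

// In the service worker: silently cache the new content
self.addEventListener('periodicsync', function (event) {
  if (event.tag === 'get-latest') {
    event.waitUntil(
      caches.open('fresh-content').then(function (cache) {
        return cache.add('/latest/');
      })
    );
  }
});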

An nth-letter selector in CSS

Variable fonts are a very exciting and powerful new addition to the toolbox of web design. They were very much at the centre of discussion at this year’s Ampersand conference.

A lot of the demonstrations of the power of variable fonts are showing how it can be used to make letter-by-letter adjustments. The Ampersand website itself does this with the logo. See also: the brilliant demos by Mandy. It’s getting to the point where logotypes can be sculpted and adjusted just-so using CSS and raw text—no images required.

I find this to be thrilling, but there’s a fly in the ointment. In order to style something in CSS, you need a selector to target it. If you’re going to style individual letters, you need to wrap each one in an HTML element so that you can then select it in CSS.

For the Ampersand logo, we had to wrap each letter in a span (and then, because that might cause each letter to be read out individually instead of all of them as a single word, we applied some ARIA shenanigans to the containing element). There’s even a JavaScript library—Splitting.js—that will do this for you.
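
The resulting markup looks something like this. It’s one possible version of those ARIA shenanigans: the word is exposed on the containing element while the individual letters are hidden from assistive technology:

<h1 aria-label="Amp">
  <span aria-hidden="true">A</span>
  <span aria-hidden="true">m</span>
  <span aria-hidden="true">p</span>
</h1>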

But if the whole point of using HTML is that the content is accessible, copyable, and pastable, isn’t it a bit of a shame that we then compromise the markup—and the accessibility—by wrapping individual letters in presentational tags?

What if there were an ::nth-letter selector in CSS?

There’s some prior art here. We’ve already got ::first-letter (and now the initial-letter property or whatever it ends up being called). If we can target the first letter in a piece of text, why not the second, or third, or nth?
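
The syntax is easy enough to imagine. This is entirely hypothetical, of course:

h1::nth-letter(2) {
  font-variation-settings: 'wght' 350;
}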

It raises some questions. What constitutes a letter? Would it be better if we talked about ::first-character, ::initial-character, ::nth-character, and so on?

Even then, there are some tricksy things to figure out. What’s the third character in this piece of markup?

<p>AB<span>CD</span>EF</p>

Is it “C”, because that’s the third character regardless of nesting? Or is it “E”, because technically that’s the third character token that’s a direct child of the parent element?

I imagine that implementing ::nth-letter (or ::nth-character) would be quite complex so there would probably be very little appetite for it from browser makers. But it doesn’t seem as problematic as some selectors we’ve already got.

Take ::first-line, for example. It falls foul of one of the biggest problems with adding new CSS selectors: it’s a selector that depends on layout.

Think about it. The browser has to first calculate how many characters are in the first line of an element (like, say, a paragraph). Having figured that out, the browser can then apply the styles declared in the ::first-line selector. But those styles may involve font sizing updates that change the number of characters in the first line. Paradox!

(Technically, only a subset of CSS properties can be applied to ::first-line, but that subset includes font-size, so the paradox remains.)

I checked to see if ::first-line was included in one of my favourite documents: Incomplete List of Mistakes in the Design of CSS. It isn’t.

So compared to the logic-bending paradoxes of ::first-line, an ::nth-letter selector would be relatively straightforward. But that in itself isn’t a good enough reason for it to exist. As the CSS Working Group FAQs say:

The fact that we’ve made one mistake isn’t an argument for repeating the mistake.

A new selector needs to solve specific use cases. I would argue that all the letter-by-letter uses of variable fonts that we’re seeing demonstrate the use cases, and the number of these examples is only going to increase. The very fact that JavaScript libraries exist to solve this problem shows that there’s a need here (and we’ve seen the pattern of common JavaScript use-cases ending up in CSS before—rollovers, animation, etc.).

Now, I know that browser makers would like us to figure out how proposed CSS features should work by polyfilling a solution with Houdini. But would that work for a selector? I don’t know much about Houdini so I asked Una. She pointed me to a proposal by Greg and Tab for a full-on parser in Houdini. But that’s a loooong way off. Until then, we must plead our case to the browser gods.

This is not a new suggestion.

Anne van Kesteren proposed ::nth-letter way back in 2003:

While I’m talking about CSS, I would also like to have ::nth-line(n), ::nth-letter(n) and ::nth-word(n), any thoughts?

Trent called for ::nth-letter in January 2011:

I think this would be the ideal solution from a web designer’s perspective. No javascript would be required, and 100% of the styling would be handled right where it should be—in the CSS.

Chris repeated the call in October of 2011:

Of all of these “new” selectors, ::nth-letter is likely the most useful.

In 2012, Bram linked to a blog post (now unavailable) from Adobe indicating that they were working on ::nth-letter for Webkit. That was the last anyone’s seen of this elusive pseudo-element.

In 2013, Chris (again) included ::nth-letter in his wishlist for CSS. So say we all.

Declaration

I like the robustness that comes with declarative languages. I also like the power that comes with imperative languages. Best of all, I like having the choice.

Take the video and audio elements, for example. If you want, you can embed a video or audio file into a web page using a straightforward declaration in HTML.

<audio src="..." controls><!-- fallback goes here --></audio>

Straightaway, that covers 80%-90% of use cases. But if you need to do more—like, provide your own custom controls—there’s a corresponding API that’s exposed in JavaScript. Using that API, you can do everything that you can do with the HTML element, and a whole lot more besides.
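
For instance, wiring up a custom play/pause button only takes a few lines (the selectors here are placeholders):

var audio = document.querySelector('audio');
var button = document.querySelector('.play-pause');
button.addEventListener('click', function () {
  if (audio.paused) {
    audio.play();
  } else {
    audio.pause();
  }
});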

It’s a similar story with animation. CSS provides plenty of animation power, but it’s limited in the events that can trigger the animations. That’s okay. There’s a corresponding JavaScript API that gives you more power. Again, the CSS declarations cover 80%-90% of use cases, but for anyone in that 10%-20%, the Web Animations API is there to help.

Client-side form validation is another good example. For most of us, the HTML attributes—required, type, etc.—are probably enough most of the time.

<input type="email" required />

When we need more fine-grained control, there’s a validation API available in JavaScript (yes, yes, I know that the API itself is problematic, but you get the point).
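
Here’s the kind of fine-grained control I mean: a short sketch using the constraint validation API to swap in a friendlier error message:

var input = document.querySelector('input[type="email"]');
input.addEventListener('input', function () {
  if (input.validity.typeMismatch) {
    input.setCustomValidity('Please enter a valid email address.');
  } else {
    input.setCustomValidity(''); // an empty string means "valid"
  }
});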

I really like this design pattern. Cover 80% of the use cases with a declarative solution in HTML, but also provide an imperative alternative in JavaScript that gives more power. HTML5 has plenty of examples of this pattern. But I feel like the history of web standards has a few missed opportunities too.

Geolocation is a good example of an unbalanced feature. If you want to use it, you must use JavaScript. There is no declarative alternative. This doesn’t exist:

<input type="geolocation" />

That’s a shame. Anyone writing a form that asks for the user’s location—in order to submit that information to a server for processing—must write some JavaScript. That’s okay, I guess, but it’s always going to be that bit more fragile and error-prone compared to markup.
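
Something like this, say. It’s a sketch that assumes the form has latitude and longitude fields for the values to go into:

navigator.geolocation.getCurrentPosition(function (position) {
  document.querySelector('[name="latitude"]').value = position.coords.latitude;
  document.querySelector('[name="longitude"]').value = position.coords.longitude;
}, function (error) {
  // If permission is denied, the fields stay empty for manual entry
  console.error(error.message);
});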

(And just in case you’re thinking of the fallback—which would be for the input element to be rendered as though its type value were text—and you think it’s ludicrous to expect users with non-supporting browsers to enter latitude and longitude coordinates by hand, I direct your attention to input type="color": in non-supporting browsers, it’s rendered as input type="text" and users are expected to enter colour values by hand.)

Geolocation is an interesting use case because it only works on HTTPS. There are quite a few JavaScript APIs that quite rightly require a secure context—like service workers—but I can’t think of a single HTML element or attribute that requires HTTPS (although that will soon change if we don’t act to stop plans to create a two-tier web). But that can’t have been the thinking behind geolocation being JavaScript only; when geolocation first shipped, it was available over HTTP connections too.

Anyway, that’s just one example. Like I said, it’s not that I’m in favour of declarative solutions instead of imperative ones; I strongly favour the choice offered by providing declarative solutions as well as imperative ones.

In recent years there’s been a push to expose low-level browser features to developers. They’re inevitably exposed as JavaScript APIs. In most cases, that makes total sense. I can’t really imagine a declarative way of accessing the fetch or cache APIs, for example. But I think we should be careful that it doesn’t become the only way of exposing new browser features. I think that, wherever possible, the design pattern of exposing new features declaratively and imperatively offers the best of both worlds—ease of use for the simple use cases, and power for the more complex use cases.

Previously, it was up to browser makers to think about these things. But now, with the advent of web components, we developers are gaining that same level of power and responsibility. So if you’re making a web component that you’re hoping other people will also use, maybe it’s worth keeping this design pattern in mind: allow authors to configure the functionality of the component using HTML attributes and JavaScript methods.
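
As a sketch, a hypothetical component could honour a declarative attribute and expose equivalent imperative methods:

class ToggleSection extends HTMLElement {
  connectedCallback() {
    // Declarative: <toggle-section open>…</toggle-section>
    if (!this.hasAttribute('open')) {
      this.hide();
    }
  }
  // Imperative: document.querySelector('toggle-section').show();
  show() {
    this.hidden = false;
  }
  hide() {
    this.hidden = true;
  }
}
customElements.define('toggle-section', ToggleSection);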

CSS grid in Internet Explorer 11

When I was in Boston, speaking on a lunchtime panel with Rachel at An Event Apart, we took some questions from the audience about CSS grid. Inevitably, a question about browser support came up—specifically about support in Internet Explorer 11.

(Technically, you can use CSS grid in IE11—in fact it was the first browser to ship a version of grid—but the prefixed syntax is different to the standard and certain features are missing.)

Rachel gave a great balanced response, saying that you need to look at your site’s stats to determine whether it’s worth the investment of your time trying to make a grid work in IE11.

My response was blunter. I said I just don’t consider IE11 as a browser that supports grid.

Now, that might sound harsh, but what I meant was: you’re already dividing your visitors into browsers that support grid, and browsers that don’t …and you’re giving something to those browsers that don’t support grid. So I’m suggesting that IE11 falls into that category and should receive the layout you’re giving to browsers that don’t support grid …because really, IE11 doesn’t support grid: that’s the whole reason why the syntax is namespaced by -ms.
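
Feature queries give you exactly that division. Because IE11 doesn’t understand @supports at all, it automatically gets the fallback layout along with the other non-grid browsers. A sketch:

.layout {
  /* Fallback layout for browsers without grid, including IE11 */
  display: flex;
  flex-wrap: wrap;
}

@supports (display: grid) {
  .layout {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
  }
}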

You could jump through hoops to try to get your grid layout working in IE11, as detailed in a three-part series on CSS Tricks, but at that point, the amount of effort you’re putting in negates the time-saving benefits of using CSS grid in the first place.

Frankly, the whole point of prefixed CSS is that it stops being used after a reasonable amount of time (originally, the idea was that it would not be used in production, but that didn’t last long). As we’ve moved away from prefixes to flags in browsers, I’m seeing the number of prefixed properties dropping, and that’s very, very good. I’ve stopped using autoprefixer on new projects, and I’ve been able to remove it from some existing ones—please consider doing the same.

And when it comes to IE11, I’ll continue to categorise it as a browser that doesn’t support CSS grid. That doesn’t mean I’m abandoning users of IE11—far from it. It means I’m giving them the layout that’s appropriate for the browser they’re using.

Remember, websites do not need to look exactly the same in every browser.

Registering service workers

In chapter two of Going Offline, I talk about registering your service worker wrapped up in some feature detection:

<script>
if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}
}
</script>

But I also make reference to a declarative way of doing this that isn’t very widely supported:

<link rel="serviceworker" href="/serviceworker.js">

No need for feature detection there. Thanks to the liberal error-handling model of HTML (and CSS), browsers will just ignore what they don’t understand, which isn’t the case with JavaScript.

Alas, it looks like that nice declarative alternative isn’t going to be making its way into browsers anytime soon. It has been removed from the HTML spec. That’s a shame. I have a preference for declarative solutions where possible—they’re certainly easier to teach. But in this case, the JavaScript alternative isn’t too onerous.

So if you’re reading Going Offline, when you get to the bit about someday using the rel value, you can cast a wistful gaze into the distance, or shed a tiny tear for what might have been …and then put it out of your mind and carry on reading.

Expectations

I noticed something interesting recently about how I browse the web.

It used to be that I would notice if a site were responsive. Or, before responsive web design was a thing, I would notice if a site was built with a fluid layout. It was worthy of remark, because it was exceptional—the default was fixed-width layouts.

But now, that has flipped completely around. Now I notice if a site isn’t responsive. It feels …broken. It’s like coming across an embedded map that isn’t a slippy map. My expectations have reversed.

That’s kind of amazing. If you had told me ten years ago that liquid layouts and media queries would become standard practice on the web, I would’ve found it very hard to believe. I spent the first decade of this century ranting in the wilderness about how the web was a flexible medium, but I felt like the laughable guy on the street corner with an apocalyptic sandwich board. Well, who’s laughing now?

Anyway, I think it’s worth stepping back every now and then and taking stock of how far we’ve come. Mind you, in terms of web performance, the trend has unfortunately been in the wrong direction—big, bloated websites have become the norm. We need to change that.

Now, maybe it’s because I’ve been somewhat obsessed with service workers lately, but I’ve started to notice my expectations around offline behaviour changing recently too. It’s not that I’m surprised when I can’t revisit an article without an internet connection, but I do feel disappointed—like an opportunity has been missed.

I really notice it when I come across little self-contained browser-based games.

Those games are great! I particularly love Battleship Solitaire—it has a zen-like addictive quality to it. If I load it up in a browser tab, I can then safely go offline because the whole game is delivered in the initial download. But if I try to navigate to the game while I’m offline, I’m out of luck. That’s a shame. These snack-sized casual games feel like the perfect use-case for working offline (or, even if there is an internet connection, they could still be speedily served up from a cache).

I know that my expectations about offline behaviour aren’t shared by most people. The idea of visiting a site even when there’s no internet connection doesn’t feel normal …yet.

But perhaps that expectation will change. It’s happened before.

(And if you want to be ready when those expectations change, I’ve written Going Offline for you.)

Timing

Apple Inc. is my accidental marketing department.

On April 29th, 2010, Steve Jobs published his infamous Thoughts on Flash. It thrust the thitherto geek phrase “HTML5” into the mainstream press:

HTML5, the new web standard that has been adopted by Apple, Google and many others, lets web developers create advanced graphics, typography, animations and transitions without relying on third party browser plug-ins (like Flash). HTML5 is completely open and controlled by a standards committee, of which Apple is a member.

Five days later, I announced the first title from A Book Apart: HTML5 For Web Designers. The timing was purely coincidental, but it definitely didn’t hurt that book’s circulation.

Fast forward eight years…

On March 29th, 2018, Apple released the latest version of iOS. Unmentioned in the press release, this update added service worker support to Mobile Safari.

Five days later, I announced the 26th title from A Book Apart: Going Offline.

For a while now, quite a few people have cited Apple’s lack of support as a reason why they weren’t investigating service workers. That excuse no longer holds water.

Once again, the timing is purely coincidental. But it can’t hurt.

Ends and means

The latest edition of the excellent History Of The Web newsletter is called The Day(s) The Web Fought Back. It recounts the first time that websites stood up against bad legislation in the form of the Communications Decency Act (CDA), and goes on to recount the even more effective use of blackout protests against SOPA and PIPA.

I remember feeling very heartened to see Wikipedia, Google and others take a stand on January 18th, 2012. But I also remember feeling uneasy. In this particular case, companies were lobbying for a cause I agreed with. But what if they were lobbying for a cause I didn’t agree with? Large corporations using their power to influence politics seems like a very bad idea. Isn’t it still a bad idea, even if I happen to agree with the cause?

Cloudflare quite rightly kicked The Daily Stormer off their roster of customers. Then the CEO of Cloudflare quite rightly wrote this in a company-wide memo:

Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

There’s an uncomfortable tension here. When do the ends justify the means? Isn’t the whole point of having principles that they hold true even in the direst circumstances? Why even claim that corporations shouldn’t influence politics if you’re going to make an exception for net neutrality? Why even claim that free speech is sacrosanct if you make an exception for nazi scum?

Those two examples are pretty extreme and I can easily justify the exceptions to myself. Net neutrality is too important. Stopping fascism is too important. But where do I draw the line? At what point does something become “too important?”

There are more subtle examples of corporations wielding their power. Google are constantly using their monopoly position in search and browser marketshare to exert influence over website-builders. In theory, that’s bad. But in practice, I find myself agreeing with specific instances. Prioritising mobile-friendly sites? Sounds good to me. Penalising intrusive ads? Again, that seems okey-dokey to me. But surely that’s not the point. So what if I happen to agree with the ends being pursued? The fact that a company the size and power of Google is using their monopoly for any influence is worrying, regardless of whether I agree with the specific instances. But I kept my mouth shut.

Now I see Google abusing their monopoly again, this time with AMP. They may call the preferential treatment of Google-hosted AMP-formatted pages a “carrot”, but let’s be honest, it’s an abuse of power, plain and simple.

By the way, I have no doubt that the engineers working on AMP have the best of intentions. We are all pursuing the same ends. We all want a faster web. But we disagree on the means. If Google search results gave preferential treatment to any fast web pages, that would be fine. But by only giving preferential treatment to pages written in a format that they created, and hosted on their own servers, they are effectively forcing everyone to use AMP. I know for a fact that there are plenty of publications who are producing AMP content, not because they are sold on the benefits of the technology, but because they feel strong-armed into doing it in order to compete.

If the ends justify the means, then it’s easy to write off Google’s abuse of power. Those well-intentioned AMP engineers honestly think that they have the best interests of the web at heart:

We were worried about the web not existing anymore due to native apps and walled gardens killing it off. We wanted to make the web competitive. We saw a sense of urgency and thus we decided to build on the extensible web to build AMP instead of waiting for standards and browsers and websites to catch up. I stand behind this process. I’m a practical guy.

There’s real hubris and audacity in thinking that one company should be able to tackle fixing the whole web. I think the AMP team are genuinely upset and hurt that people aren’t cheering them on. Perhaps they will dismiss the criticisms as outpourings of “Why wasn’t I consulted?” But that would be a mistake. The many thoughtful people who are extremely critical of AMP are on the same side as the AMP team when it comes to the end-goal of better, faster websites. But burning the web to save it? No thanks.

Ben Thompson goes into more detail on the tension between the ends and the means in The Aggregator Paradox:

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sounds pretty appealing!

From that perspective, there’s a moral argument to be made for wielding monopoly power like Google is doing. No doubt the AMP team feel it would be morally wrong for Google not to use its influence in search to give preferential treatment to AMP pages.

Going back to the opening examples of online blackouts, was it morally wrong for companies to use their power to influence politics? Or would it have been morally wrong for them not to have used their influence?

When do the ends justify the means?

Here’s a more subtle example than Google AMP, but one which has me just as worried for the future of the web. Mozilla announced that any new web features they add to their browser will require HTTPS.

The end-goal here is one I agree with: HTTPS everywhere. On the face of it, the means of reaching that goal seem reasonable. After all, we already require HTTPS for sensitive JavaScript APIs like geolocation or service workers. But the devil is in the details:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

Emphasis mine.

This is a step too far. Again, I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement. In some ways, I think it might actually be worse.

There’s an assumption in this decision that websites are being made by professionals who will know how to switch to HTTPS. But the web is for everyone. Not just for everyone to use. It’s for everyone to build.

One of my greatest fears for the web is that building it becomes the domain of a professional priesthood. Anything that raises the bar to writing some HTML or CSS makes me very worried. Usually it’s toolchains that make things more complex, but in this case the barrier to entry is being brought right into the browser itself.

I’m trying to imagine future Codebar evenings, helping people to make their first websites, but now having to tell them that some CSS will be off-limits until they meet the entry requirements of HTTPS …even though CSS and HTTPS have literally nothing to do with one another. (And yes, there will be an exception for localhost and I really hope there’ll be an exception for file: as well, but that’s simply postponing the disappointment.)

No doubt Mozilla (and the W3C Technical Architecture Group) believe that they are doing the right thing. Perhaps they think it would be morally wrong if browsers didn’t enforce HTTPS even for unrelated features like new CSS properties. They believe that, in this particular case, the ends justify the means.

I strongly disagree. If you also disagree, I encourage you to make your voice heard. Remember, this isn’t about whether you think that we should all switch to HTTPS—we’re all in agreement on that. This is about whether it’s okay to create collateral damage by deliberately denying people access to web features in order to further a completely separate agenda.

This isn’t about you or me. This is about all those people who could potentially become makers of the web. We should be welcoming them, not creating barriers for them to overcome.

Container queries

Every single browser maker has the same stance when it comes to features—they want to hear from developers at the coalface.

“Tell us what you want! We’re listening. We want to know which features to prioritise based on real-world feedback from developers like you.”

“How about container quer—”

“Not that.”

I don’t think it’s an exaggeration to say that literally every web developer I know would love to have container queries. If you’ve worked on any responsive project of any size, you’re bound to have bumped up against the problem of only being able to respond to viewport size, rather than the size of the containing element. Without container queries, our design systems can never be truly modular.

But there’s a divide growing between what our responsive designs need to do, and the tools CSS gives us to meet those needs. We’re making design decisions at smaller and smaller levels, but our code asks us to bind those decisions to a larger, often-irrelevant abstraction of a “page.”

But the message from browser makers has consistently been “it’s simply too hard.”

At the Frontend United conference in Athens a little while back, Jonathan gave a whole talk on the need for container queries. At the same event, Serg gave a talk on Houdini.

Now, as I understand it, Houdini is the CSS arm of the extensible web. Just as web components will allow us to create powerful new HTML without lobbying browser makers, Houdini will allow us to create powerful new CSS features without going cap-in-hand to standards bodies.

At this year’s CSS Day there were two Houdini talks. Tab gave a deep dive, and Philip talked specifically about Houdini as a breakthrough for polyfilling.

During the talks, you could send questions over Twitter that the speaker could be quizzed on afterwards. As Philip was talking, I began to tap out a question: “Could this be used to polyfill container queries?” My thumb was hovering over the tweet button at the very moment that Philip said in his talk, “This could be used to polyfill container queries.”

For that to happen, browsers need to implement the layout API for Houdini. But I’m betting that browser makers will be far more receptive to calls to implement the layout API than calls for container queries directly.

Once we have that, there are two possible outcomes:

  1. We try to polyfill container queries and find out that the browser makers were right—it’s simply too hard. This certainty is itself a useful outcome.
  2. We successfully polyfill container queries, and then instead of asking browser makers to figure out implementation, we can hand it to them for standardisation.

But, as Eric Portis points out in his talk on container queries, Houdini is still a ways off (by the way, browser makers, that’s two different conference talks I’ve mentioned about container queries, just in case you were keeping track of how much developers want this).

Still, there are some CSS features that are Houdini-like in their extensibility. Custom properties feel like they could be wrangled to help with the container query problem. While it’s easy to think of custom properties as being like Sass variables, they’re much more powerful than that—the fact they can be a real-time bridge between JavaScript and CSS makes them scriptable. Alas, custom properties can’t be used in media queries but maybe some clever person can figure out a way to get the effect of container queries without a query-like syntax.
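
Here’s the kind of cleverness I mean. This is a rough sketch, not a robust solution, and it relies on ResizeObserver being supported: the width of each component gets piped into a custom property that the CSS can then respond to:

// Write each component's width into a custom property…
var observer = new ResizeObserver(function (entries) {
  entries.forEach(function (entry) {
    var columns = entry.contentRect.width > 600 ? 3 : 1;
    entry.target.style.setProperty('--columns', columns);
  });
});
document.querySelectorAll('.card').forEach(function (card) {
  observer.observe(card);
});

/* …and let the CSS respond to it */
.card {
  display: grid;
  grid-template-columns: repeat(var(--columns, 1), 1fr);
}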

However it happens, I’d just love to see some movement on container queries. I’m not alone.

I know container queries would revolutionize my design practice, and better prepare responsive design for mobile, desktop, tablet—and whatever’s coming next.

Variable fonts

We have a tradition here at Clearleft of having the occasional lunchtime braindump. They’re somewhat sporadic, but it’s always a good day when there’s a “brown bag” gathering.

When Google’s AMP format came out and I had done some investigating, I led a brown bag playback on that. Recently Mark did one on Fractal so that everyone knew how work on that was progressing.

Today Richard gave us a quick brown bag talk on variable web fonts. He talked us through how these will work on the web and in operating systems. We got a good explanation of how these fonts would get designed—the type designer designs the “extreme” edges of size, weight, or whatever, and then the file format itself can extrapolate all the in-between stages. So, in theory, one single font file can hold hundreds, thousands, or hundreds of thousands of potential variations. It feels like switching from bitmap images to SVG—there’s suddenly much greater flexibility.

A variable font is a single font file that behaves like multiple fonts.

There were a couple of interesting tidbits that Rich pointed out…

While this is a new file format, there isn’t going to be a new file extension. These will be .ttf files, and so by extension, they can be .woff and .woff2 files too.

This isn’t some proposed theoretical standard: an unprecedented amount of co-operation has gone into the creation of this format. Adobe, Apple, Google, and Microsoft have all contributed. Agreement is the hardest part of any standards process. Once that’s taken care of, the technical solution follows quickly. So you can expect this to land very quickly and widely.

This technology is landing in web browsers before it lands in operating systems. It’s already available in the Safari Technology Preview. That means that for a while, the very best on-screen typography will be delivered not in eBook readers, but in web browsers. So if you want to deliver the absolute best reading experience, look to the web.

And here’s the part that I found fascinating…

We can currently use numbers for the font-weight property in CSS. Those number values increment in hundreds: 100, 200, 300, etc. Now with variable fonts, we can start using any number in between: 321, 417, 183, etc. How fortuitous that we have 99 free slots between our current set of values!
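
So, assuming a variable font with a weight axis is loaded, this becomes perfectly reasonable:

h1 {
  font-weight: 623;
}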

Well, that’s no accident. The reason why the numbers were originally specced in increments of 100 back in 1996 was precisely so that some future sci-fi technology could make use of the ranges in between. That’s some future-friendly thinking! And as Håkon wrote:

One of the reasons we chose to use three-digit numbers was to support intermediate values in the future. And the future is now :)

Needless to say, variable fonts will be covered in Richard’s forthcoming book.

Taking an online book offline

Application Cache is—as Jake so infamously described—not a good API. It was specced and shipped before developers had a chance to figure out what they really needed, and so AppCache turned out to be frustrating at best and downright dangerous in some situations. Its over-zealous caching combined with its byzantine cache invalidation ensured it was never going to become a mainstream technology.

There are very few use-cases for AppCache, but I think I hit upon one of them. Six years ago, A Book Apart published HTML5 For Web Designers. A year and a half later, I put the book online. The contents are never going to change. There’s a second edition of the book out now but if you want to read all the extra bits that Rachel added, you’re going to have to buy the book. The website for the original book is static and unchanging. That’s what made it such a good candidate for using AppCache. I could just set it and forget it.

Except that’s no longer true. AppCache is being deprecated and browsers are starting to withdraw support. Chrome is already making sure that AppCache—like geolocation—no longer works on sites that aren’t served over HTTPS. That’s for the best. In retrospect, those APIs should never have been allowed over unsecured HTTP.

I mentioned that I spent the weekend switching all my book websites over to HTTPS, so AppCache should continue to work …for now. It’s only a matter of time before AppCache is removed completely from many of the browsers that currently support it.

Seeing as I’ve got the HTML5 For Web Designers site running on HTTPS now, I might as well go all out and make it a progressive web app. By far the biggest barrier to making a progressive web app is that first step of setting up HTTPS. It’s gotten cheaper—thanks to Let’s Encrypt Certbot—but it still involves mucking around in the command line with root access; I never wanted to become a sysadmin. But once that’s finally all set up, the other technological building blocks—a Service Worker and a manifest file—are relatively easy.

In this case, the Service Worker is using a straightforward bit of logic:

  • On installation, cache absolutely everything: HTML, CSS, images.
  • When anything is requested, grab it from the cache.
  • If it isn’t in the cache, try the network.
  • If the network doesn’t work, show an offline page (or image).
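
Translated into code, that’s only a handful of lines. This is a minimal sketch; the cache name and file list are placeholders:

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('html5forwebdesigners').then(function (cache) {
      // Cache absolutely everything up front
      return cache.addAll(['/', '/css/global.css', '/offline.html']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (response) {
      // Cache first, then network…
      return response || fetch(event.request);
    }).catch(function () {
      // …and an offline page as a last resort
      return caches.match('/offline.html');
    })
  );
});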

Basically I’m reproducing AppCache’s overzealous approach. It works for this site because the content is never going to change. I hope that this time, I really can just set it and forget it. I want the site to be an historical artefact, available at the same URL for at least my lifetime. I don’t want to have to maintain it or revisit it every few years to swap out one API for another.

Which brings me back to the way AppCache is being deprecated…

The Firefox team are very eager to ditch AppCache as soon as possible. On the one hand, that’s commendable. They’re rightly proud of shipping Service Workers and they want to encourage people to use the better technology instead. But it sure stings for the suckers (like me) who actually went and built stuff using AppCache.

In a weird way, I think this rush to deprecate AppCache might actually hurt the adoption of Service Workers. Let me explain…

At last year’s Edge Conference, Nolan Lawson gave a great presentation on storing data in the browser. He enumerated the many ways—past and present—that we could store data locally: WebSQL, Local Storage, IndexedDB …the list goes on. He also posed the question: why aren’t more people using insert-name-of-latest-API-here? To me it seemed obvious why more people weren’t diving into using the latest and greatest option for local data storage. It was because they had been burned before. The developers who rushed into trying previous solutions ended up being mocked for their choice. “Still using that ol’ thing? Pffftt!”

You can see that same attitude on display from Mozilla as they push towards removing AppCache. Like in a comment that refers to developers using AppCache in production as “the angry hordes”. Reminds me of something Tom said:

In that same Mozilla thread, Soledad echoes Tom’s point:

As a member of the devrel team: I think that this should be better addressed in a blog post that someone from the team responsible for switching AppCache off should write, so everyone can understand the reasons and ask questions to those people.

I’d rather warn people beforehand, pointing them to that post and help them with migration paths than apply emergency mitigation strategies when a lot of people find their stuff stopped working in the newer Firefox…

Bravo! That same approach should have also been taken by the Chrome team when it came to their thread about punishing display:browser in manifest files. There was absolutely no communication with developers about this major decision. I only found out about it because Paul happened to mention it to me.

I was genuinely shocked by this:

Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.

I can confirm that smell. When I was making the manifest file for HTML5 For Web Designers, I really wanted to put display: browser because I want people to be able to copy and paste URLs (for the book, for individual chapters, and for sections within chapters). But knowing that if I did that, Android users would never see the “add to home screen” prompt made me question that decision. I felt strong-armed into declaring display: standalone. And no, I’m not mollified by hand-waving reassurances that the Chrome team will figure out some solution for this. Figure out the solution first, then punish the saps like me who want to use display: browser to allow people to share URLs.

Anyway, the website for HTML5 For Web Designers is now using AppCache and Service Workers. The AppCache part will probably be needed for quite a while yet to provide offline support on iOS. Apple are really dragging their heels on Service Worker support, with at least one WebKit engineer actively looking for reasons not to implement it.

There’s a lot of talk about making apps work offline, but I think it’s just as important that we consider making information work offline. Books are a great example of this. To use the tired transport tropes, the website for a book is something you might genuinely want to access when you’re on a plane, or in the underground, or out at sea.

I really, really like progressive web apps. But I also think it’s important that we don’t fall into the trap of just trying to imitate native apps on the web. I love the idea of taking the best of the web—like information being permanently available at a URL—and marrying that up with the best of native—like offline access. I also like the idea of taking the best of books—a tome of thought—and marrying it up with the best of the web—hypertext.

I’d love to see more experimentation around online/offline hypertext/books. For now, you can visit HTML5 For Web Designers, add it to your home screen, and revisit it whenever and wherever you like.

A brief history of the World Wide Web by web developers

The web will be so much better when we have images.

The web will be so much better when we can use more than 216 colours.

The web will be so much better when we have Cascading Style Sheets.

The web will be so much better when we have Cascading Style Sheets that work the same way in different browsers.

The web will be so much better when we have JavaScript.

The web will be so much better when we have JavaScript that works the same way in different browsers.

The web will be so much better when people stop using Netscape Navigator 4.

The web will be so much better when people stop using Internet Explorer 6.

The web will be so much better when we can access it on our mobile phones.

The web will be so much better when we have native video support.

The web will be so much better when we have native video support that works the same way in different browsers.

The web will be so much better when Flash dies.

The web will be so much better when we have more than a handful of fonts.

The web will be so much better when nobody is running Windows XP anymore.

The web will be so much better when nobody is running Android 2 anymore.

The web will be so much better when we have smooth animations.

The web will be so much better when websites can still work offline.

The web will be so much better when we get push notifications.

The web will be so much better when…

Everything is amazing right now and nobody’s happy.

The web on my phone

It’s funny how times have changed. Remember back in the 90s when Microsoft—quite rightly—lost an anti-trust case? They were accused of abusing their monopolistic position in the OS world to get an unfair advantage in the browser world. By bundling a copy of Internet Explorer with every copy of Windows, they were able to crush the competition from Netscape.

Mind you, it was still possible to install a Netscape browser on a Windows machine. Could you imagine if Microsoft had tried to make that impossible? There would’ve been hell to pay! They wouldn’t have had a legal leg to stand on.

Yet here we are two decades later and that’s exactly what an Operating System vendor is doing. The Operating System is iOS. It’s impossible to install a non-Apple browser onto an Apple mobile computer. For some reason, the fact that it’s a mobile device (iPhone, iPad) makes it different from a desktop-bound device running OS X. Very odd considering they’re all computers.

“But”, I hear you say, “What about Chrome for iOS? Firefox for iOS? Opera for iOS?”

Chrome for iOS is not Chrome. Firefox for iOS is not Firefox. Opera for iOS is not Opera. They are all using WebKit. They’re effectively the same as Mobile Safari, just with different skins.

But there won’t be any anti-trust case here.

I think it’s a real shame. Partly, I think it’s a shame because as a developer, I see an Operating System being let down by its browser. But mostly, I think it’s a shame because I use an iPhone and I’m being let down by its browser.

It’s kind of ironic, because when the iPhone first launched, it was all about the web apps. Remember, there was no App Store for the first year of the iPhone’s life. If you wanted to build an app, you had to use web technologies. Apple were ahead of their time. Alas, the web technologies weren’t quite up to the task back in 2007. These days, though, there are web technologies landing in browsers that are truly game-changing.

In case you hadn’t noticed, I’m very excited about Service Workers. It’s doubly exciting to see the efforts the Chrome on Android team are making to make the web a first-class citizen. As Remy put it:

If I add this app to my home screen, it will work when I open it.

I’d like to be able to use Chrome, Firefox, or Opera on my iPhone—real Chrome, real Firefox, or real Opera; not a skinned version of Safari. Right now the only way for me to switch browsers is to switch phones. Switching phones is a pain in the ass, but I’m genuinely considering it.

Whereas I’m all talk, Henrik has taken action. Like me, he doesn’t actually care about the Operating System. He cares about the browser:

Android itself bores me, honestly. There’s nothing all that terribly new or exciting here.

save one very important detail…

IT’S CURRENTLY THE BEST MOBILE WEB APP PLATFORM

That’s true for now. The pole position for which browser is “best” is bound to change over time. The point is that locking me into one particular browser on my phone doesn’t sit right with me. It’s not very …webby.

I’m sure that Apple are not quaking in their boots at the thought of myself or Henrik switching phones. We are minuscule canaries in a very niche mine.

But what should give Apple pause for thought is the user experience they can offer for using the web. If they gain a reputation for providing a sub-par web experience compared to the competition, then maybe they’ll have to make the web a first-class citizen.

If I want to work towards that, switching phones probably won’t help. But what might help is following Alex’s advice in his answer to the question “What do we do about Safari?”:

What we do about Safari is we make websites amazing …and then they can’t not implement.

I’ll be doing that here on adactio, over on The Session (and Huffduffer when I get around to overhauling it), making progressively enhanced, accessible, offline-first, performant websites.

I’ll also be doing it at Clearleft. If you work at an organisation that wants a progressively-enhanced, accessible, offline-first, performant website, we should talk.

AMPed up

Apple has Apple News. Facebook has Instant Articles. Now Google has AMP: Accelerated Mobile Pages.

The big players sure are going to a lot of effort to reinvent RSS.

That may sound like a flippant remark, but it’s not too far from the truth. In the case of Apple News, its current incarnation appears to be quite literally an RSS reader, at least until the unveiling of the forthcoming Apple News Format.

Google’s AMP project looks a little bit different to the offerings from Facebook and Apple. Rather than creating a proprietary format from scratch, it mandates a subset of HTML …with some proprietary elements thrown in (or, to use the more diplomatic parlance of the extensible web, custom elements).

The idea is that alongside the regular HTML version of your document, you provide a corresponding AMP HTML version. Because the AMP HTML version will be leaner and meaner, user agents can then grab the AMP HTML version and present that to the end user for a faster browsing experience.

So if an RSS feed is an alternate representation of a homepage or a listing of articles, then an AMP document is an alternate representation of a single article.

Now, my own personal take on providing alternate representations of documents is “Sure. Why not?” Here on adactio.com I provide RSS feeds. On The Session I provide RSS, JSON, and XML. And on Huffduffer I provide RSS, Atom, JSON, and XSPF, adding:

If you would like to see another format supported, share your idea.

Also, each individual item on Huffduffer has a corresponding oEmbed version (and, in theory, an RDF version)—an alternate representation of that item …in principle, not that different from AMP. The big difference with AMP is that it’s using HTML (of sorts) for its format.

All of this sounds pretty reasonable: provide an alternate representation of your canonical HTML pages so that user-agents (Twitter, Google, browsers) can render a faster-loading version …much like an RSS reader.

So should you start providing AMP versions of your pages? My initial reaction is “Sure. Why not?”

The AMP Project website comes with a list of frequently asked questions which, of course, nobody has asked. My own list of invented frequently asked questions might look a little different.

Will this kill advertising?

We live in hope.

Alas, AMP pages will still be able to carry advertising, but in a restricted form. No more scripts that track your movement across the web …unless the script is from an authorised provider, like say, Google.

But it looks like the worst performance offenders won’t be able to get their grubby little scripts into AMP pages. This is a good thing.

Won’t this kill journalism?

Of all the horrid myths currently in circulation, the two that piss me off the most are:

  1. Journalism requires advertising to survive.
  2. Advertising requires invasive JavaScript.

Put the two together and you get the gist of most of the chicken-littling articles currently in circulation: “Journalism requires invasive JavaScript to survive.”

I could argue against the first claim, but let’s leave that for another day. Let’s suppose for now that, sure, journalism requires advertising to survive. Fine.

It’s that second point that is fundamentally wrong. The idea that the current state of advertising is the only way of advertising is incredibly short-sighted and misguided. Invasive JavaScript is not a requirement for showing me an ad. Setting a cookie is not a requirement for showing me an ad. Knowing where I live, who my friends are, what my income level is, and where I’ve been on the web …none of these are requirements for showing me an ad.

It is entirely possible to advertise to me and treat me with respect at the same time. The Deck already does this.

And you know what? Ad networks had their chance. They had their chance to treat us with respect with the Do Not Track initiative. We asked them to respect our wishes. They told us to get screwed.

Now those same ad providers are crying because we’re installing ad blockers. They can get screwed.

Anyway.

It is entirely possible to advertise within AMP pages …just not using blocking JavaScript.

For a nicely nuanced take on what AMP could mean for journalism, see Joshua Benton’s article on Nieman Lab—Get AMP’d: Here’s what publishers need to know about Google’s new plan to speed up your website.

Why not just make faster web pages?

Excellent question!

For a site like adactio.com, the difference between the regular HTML version of an article and the corresponding AMP version of the same article is pretty small. It’s a shame that I can’t just say “Hey, the current version of the article is the AMP version”, but that would require that I only use a subset of HTML and that I add some required guff to my page (including an unnecessary JavaScript file).

But for most of the news sites out there, the difference between their regular HTML pages and the corresponding AMP versions will be pretty significant. That’s because the regular HTML versions are bloated with third-party scripts, oversized assets, and cruft around the actual content.

Now it is in theory possible for these news sites to get rid of all those things, and I sincerely hope that they will. But that’s a big political struggle. I am rooting for developers—like the good folks at VOX—who have to battle against bosses who honestly think that journalism requires invasive JavaScript. Best of luck.

Along comes Google saying “If you want to play in our sandbox, you’re going to have to abide by our rules.” Those rules include performance best practices (for the most part—I take issue with some of the requirements, and I’ll go into that in more detail in a moment).

Now when the boss says “Slap a three megabyte JavaScript library on it so we can show a carousel”, the developers can only respond with “Google says No.”

When the boss says “Slap a ton of third-party trackers on it so we can monetise those eyeballs”, the developers can only respond with “Google says No.”

Google have used their influence like this before and it has brought them accusations of monopolistic abuse. Some people got very upset when they began labelling (and later ranking) mobile-friendly pages. Personally, I’ve got no issue with that.

In this particular case, Google aren’t mandating what you can and can’t do on your regular HTML pages; only what you can and can’t do on the corresponding AMP page.

Which brings up another question…

Will the AMP web kill the open web?

If we all start creating AMP versions of our pages, and those pages are faster than our regular HTML versions, won’t everyone just see the AMP versions without ever seeing the “full” versions?

Tim articulates a legitimate concern:

This promise of improved distribution for pages using AMP HTML shifts the incentive. AMP isn’t encouraging better performance on the web; AMP is encouraging the use of their specific tool to build a version of a web page. It doesn’t feel like something helping the open web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the web.

That troubles me. Using a very specific tool to build a tailored version of my page in order to “reach everyone” doesn’t fit any definition of the “open web” that I’ve ever heard.

Fair point. But I also remember that a lot of people were upset by RSS. They didn’t like that users could go for months at a time without visiting the actual website, and yet they were reading every article. They were reading every article in non-browser user agents in a format that wasn’t HTML. On paper that sounds like the antithesis of the open web, but in practice there was always something very webby about RSS, and RSS feed readers—it put the power back in the hands of the end users.

Some people chose not to play ball. They only put snippets in their RSS feeds, not the full articles. Maybe some publishers will do the same with the AMP versions of their articles: “To read more, click here…”

But I remember what generally tended to happen to the publishers who refused to put the full content in their RSS feeds. We unsubscribed.

Still, I share the concern that any one company—whether it’s Facebook, Apple, or Google—should wield so much power over how we publish on the web. I don’t think you have to be a conspiracy theorist to view the AMP project as an attempt to replace the existing web with an alternate web, more tightly controlled by Google (albeit a faster, more performant, tightly-controlled web).

My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

I’ve been playing around with the AMP HTML spec. It has some issues. The good news is that it’s open source and the project owners seem receptive to feedback.

JavaScript

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web. I’m happy to leave them at the door (although weirdly, web fonts—another big performance killer—are allowed in).

But then there’s a bit of an about-face. In order to have a valid AMP HTML page, you must include a piece of third-party JavaScript. In this case, the third party is Google and the JavaScript file is what handles the loading of assets.

This seems a bit strange to me; on the one hand claiming that third-party JavaScript is bad for performance and on the other, requiring some third-party JavaScript. As Justin says:

For me this is loading one thing too many… the AMP JS library. Surely the document itself is going to be faster than loading a library to try and make it load faster.

On the plus side, this third-party JavaScript is loaded asynchronously. It seems to mostly be there to handle the rendering of embedded content: images, videos, audio, etc.

Embedded content

If you want audio, video, or images on your page, you must use propriet… custom elements like amp-audio, amp-video, and amp-img. In the case of images, I can see how this is a way of getting around the browser’s lookahead pre-parser (although responsive images also solve this problem). In the case of audio and video, the standard audio and video elements already come with a way of specifying preloading behaviour using the preload attribute. Very odd.
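
For comparison, here’s how much control the existing elements already give you (the file name here is invented):

<!-- preload can be "none", "metadata", or "auto" -->
<video src="movie.mp4" controls preload="metadata"></video>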

Justin again:

I’m not sure if this is solving anything at the moment that we’re not already fixing with something like responsive images.

To use amp-img for images within the flow of a document, you’ll need to specify the dimensions of the image. This makes sense from a rendering point of view—knowing the width and height ahead of time avoids repaints and reflows. Alas, in many of the cases here on adactio.com, I don’t know the dimensions of the images I’m including. So any of my AMP HTML pages that include images will be invalid.
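
For reference, an image in an AMP HTML document looks something like this (file name and dimensions invented):

<amp-img src="photo.jpg" alt="A photo" width="600" height="400" layout="responsive"></amp-img>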

Overall, the way that AMP HTML handles embedded content looks like a whole lot of wheel reinvention. I like the idea of providing custom elements as an option for authors. I hate the idea of making them a requirement.

Metadata

If you want to provide metadata about your document, AMP HTML currently requires the use of Google’s Schema.org vocabulary. This has a big whiff of vendor lock-in to it. I’ve flagged this up as an issue and Aaron is pushing a change so hopefully this will be resolved soon.

Accessibility

In its initial release, the AMP HTML spec came with some nasty surprises for accessibility. The biggest is probably the requirement to include this in your viewport meta element:

maximum-scale=1,user-scalable=no

Yowzers! That’s some slap in the face to decent web developers everywhere. Fortunately this has been flagged up and I’m hoping it will be fixed soon.

If it doesn’t get fixed, it’s quite a non-starter. It beggars belief that Google would mandate that authors make their pages inaccessible to pinch/zoom. I would hope that many developers would rebel against such a draconian injunction. If that happens, it’ll be interesting to see what becomes of those theoretically badly-formed AMP HTML documents. Technically, they will fail validation, but for very good reason. Will those accessible documents be rejected?

Please get involved on this issue if this is important to you (hint: this should be important to you).

There are a few smaller issues. Initially the :focus pseudo-class was disallowed in author CSS, but that’s being fixed.

Currently AMP HTML documents must have this line:

<style>body {opacity: 0}</style><noscript><style>body {opacity: 1}</style></noscript>

shudders

That’s a horrible conflation of JavaScript availability and CSS. It’s being fixed though, and soon all the opacity jiggery-pokery will only happen via JavaScript, which will be a big improvement: it should either all happen in CSS or all happen in JavaScript, but not the current mixture of the two.

Discovery

The AMP HTML version of your page is not the canonical version. You can specify where the real HTML version of your document is by using rel="canonical". Great!

But how do you link from your canonical page out to the AMP HTML version? Currently you’re supposed to use rel="amphtml". No, they haven’t checked the registry. Again. I’ll go in and add it.

In the meantime, I’m also requesting that the amphtml value can be combined with the alternate value, seeing as rel values can be space separated:

rel="alternate amphtml" type="text/html"

See? Not that different to RSS:

rel="alternate" type="application/rss+xml"
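
Putting the two halves of the discovery mechanism together, it would look something like this (URLs invented):

<!-- On the regular HTML page -->
<link rel="amphtml" href="https://example.com/article.amp.html">

<!-- On the AMP HTML page -->
<link rel="canonical" href="https://example.com/article.html">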

POSSE

When I publish something on adactio.com in HTML, it already gets syndicated to different places. This is the Indie Web idea of POSSE: Publish (on your) Own Site, Syndicate Elsewhere. As well as providing RSS feeds, I’ve also got Twitter bots that syndicate to Twitter. An If This, Then That script pushes posts to Facebook. And if I publish a photo, it goes to Flickr. Now that Medium is finally providing a publishing API, I’ll probably start syndicating articles there as well. The more, the merrier.

From that perspective, providing AMP HTML pages feels like just one more syndication option. If it were the only option, and I felt compelled to provide AMP versions of my content, I’d be very concerned. But for now, I’ll give it a whirl and see how it goes.

Here’s a bit of PHP I’m using to convert a regular piece of HTML into AMP HTML—it’s horrible code; it uses regular expressions on HTML which, as we all know, will summon the Elder Gods.
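
To give a flavour of the approach, here’s a simplified sketch (in JavaScript rather than PHP, and very much not the actual script):

// Crude pattern-matching to swap regular elements for AMP equivalents.
function amplify(html) {
    // amp-img requires an explicit closing tag
    return html.replace(/<img(.*?)\/?>/g, '<amp-img$1></amp-img>');
}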

Extending

Contrary to popular belief, web standards aren’t created by a shadowy cabal and then handed down to browser makers to implement. Quite the opposite. Browser makers come together in standards bodies and try to come to an agreement about how to collectively create and implement standards. That keeps them very busy. They don’t tend to get out very often, but when they do, the browser/standards makers have one message for developers: “We want to make your life better, so tell us what you want and that’s what we’ll work on!”

In practice, this turns out not to be the case.

Case in point: responsive images. For years, this was the number one feature that developers were crying out for. And yet, the standards bodies—and, therefore, browser makers—dragged their heels. First they denied that it was even a problem worth solving. Then they said it was simply too hard. Eventually, thanks to the herculean efforts of the Responsive Images Community Group, the browser makers finally began to work on what developers had been begging for.

Now that same community group is representing the majority of developers once again. Element queries—or container queries—have been top of the wish list of working devs for quite a while now. The response from browser makers is the same as it was for responsive images. They say it’s simply too hard.

Here’s a third example: web components. There are many moving parts to web components, but one of the most exciting to developers who care about accessibility and backwards-compatibility is the idea of extending existing elements:

It’s my opinion that, for as long as there is a dependence on JS for custom elements, we should extend existing elements when writing custom elements. It makes sense for developers, because new elements have access to properties and methods that have been defined and tested for many years; and it makes sense for users, as they have fallback in case of JS failure, and baked-in accessibility fundamentals.

So instead of having to create a whole new element from scratch like this:

<taco-button>Click me!</taco-button>

…you could piggy-back on an existing element like this:

<button is="taco-button">Click me!</button>

That way, you get the best of both worlds: the native semantics of button topped with all the enhancements you want to add with your taco-button custom element. Brilliant! Github is using this to extend the time element, for example.
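
For illustration, here’s a minimal sketch of how registering such an extension looks (using the customElements API; the behaviour here is invented):

<script>
class TacoButton extends HTMLButtonElement {
    connectedCallback() {
        // Enhancement only: this is still a real button,
        // so focus and keyboard handling come for free.
        this.addEventListener('click', () => this.classList.add('crunched'));
    }
}
// The third argument is what ties the new name to the existing element.
customElements.define('taco-button', TacoButton, { extends: 'button' });
</script>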

I’m not wedded to the is syntax, but I do think it’s vital that there is some declarative mechanism to extend existing elements instead of creating every custom element from scratch each time.

Now it looks like that’s the bit of web components that isn’t going to make the cut. Why? Because browser makers say it’s simply too hard.

As Bruce points out, this is in direct conflict with the design principles that are supposed to be driving the creation and implementation of web standards.

It probably wouldn’t bother me so much except that browser makers still trot out the party line, “We want to hear what developers want!” Their actions demonstrate that this claim is somewhat hollow.

I don’t hold out much hope that we’ll get the ability to extend existing elements for web components. I think we can still find ways to piggy-back on existing semantics, but it’s going to take more work:

<taco-button><button>Click me!</button></taco-button>

That isn’t very elegant and I can foresee a lot of trickiness trying to sift the fallback content (the button tags) from the actual content (the “Click me!” text).
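
A rough sketch of the kind of sifting involved (purely illustrative):

<script>
class TacoButton extends HTMLElement {
    connectedCallback() {
        // Separate the fallback element from the content we want to keep…
        const fallback = this.querySelector('button');
        const label = fallback ? fallback.textContent : this.textContent;
        // …then rebuild the enhanced interface around that content.
        this.textContent = label;
    }
}
customElements.define('taco-button', TacoButton);
</script>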

But I guess that’s what we’ll be stuck with. The alternative is simply too hard.

Cerf rocks

After I wrote about digital preservation and the need to save everything, not just the so-called “important” stuff, Jason wrote a lovely piece with his own thoughts on the matter:

In order to write a history, you need evidence of what happened. When we talk about preserving the stuff we make on the web, it isn’t because we think a Facebook status update, or those GeoCities sites have such significance now. It’s because we can’t know.

In a timely coincidence, Vint Cerf also spoke about the importance of digital preservation:

When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history.

He warns of the dangers of rapidly-obsoleting file formats:

We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised.

It was a little weird that the Guardian headline refers to Vint Cerf as “Google boss”. On the BBC he’s labelled as “Google’s Vint Cerf”. Considering he’s one of the creators of the internet itself, it’s a bit like referring to Neil Armstrong as a NASA employee.

I have to say, I just love listening to him talk. He’s so smooth. I’m sure that the character of The Architect from The Matrix Reloaded is modelled on him.

Vint Cerf knows a thing or two about long-term thinking when it comes to data formats. He has written many RFCs for the IETF (my favourite being RFC 2468). Back in 1969, he wrote RFC 20, proposing the ASCII format for network interchange. If you’ve ever used the keypress event in JavaScript and wondered why, for example, the number 13 corresponds to a carriage return, this is where all those numbers come from.
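
Here’s that lineage in action (a quick illustrative snippet):

<script>
document.addEventListener('keypress', function (event) {
    // 13 is the ASCII code for carriage return, straight out of RFC 20
    if (event.which === 13) {
        console.log('Enter was pressed');
    }
});
</script>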

Last month, over 45 years after the RFC’s original publication, it became an official standard.

So when Vint Cerf warns about the dangers of digitising into file formats that could become unreadable, I think we should pay attention to him.

Extensibility

I’ve said it before, but I’m going to reiterate my conflicted feelings about Web Components:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

There are broadly two ways that they could potentially be used:

  1. Web Components are used by developers to incrementally add more powerful elements to their websites. This evolutionary approach feels very much in line with the thinking behind the extensible web manifesto. Or:
  2. Web Components are used by developers as a monolithic platform, much like Angular or Ember is used today. The end user either gets everything or they get nothing.

The second scenario is a much more revolutionary approach—sweep aside the web that has come before, and usher in a new golden age of Web Components. Personally, I’m not comfortable with that kind of year-zero thinking. I prefer evolution over revolution:

Revolutions sometimes change the world to the better. Most often, however, it is better to evolve an existing design rather than throwing it away. This way, authors don’t have to learn new models and content will live longer. Specifically, this means that one should prefer to design features so that old content can take advantage of new features without having to make unrelated changes. And implementations should be able to add new features to existing code, rather than having to develop whole separate modes.

The evolutionary model is exemplified by the design of HTML 5.

The revolutionary model is exemplified by the design of XHTML 2.

I really hope that the Web Components model goes down the first route.

Up until recently, my inner Web Components pendulum was swinging towards the hopeful end of my spectrum of anticipation. That was mainly driven by the ability of custom elements to extend existing HTML elements.

So, for example, instead of creating a new element like this:

<taco-button>...</taco-button>

…you can piggyback off the existing semantics of the button element like this:

<button is="taco-button">...</button>

For a real-world example, see Github’s use of <time is="time-ago">.

I wrote about creating responsible Web Components:

That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.

I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”

Peter Gasston also has a great post on best practice for creating custom elements:

It’s my opinion that, for as long as there is a dependence on JS for custom elements, we should extend existing elements when writing custom elements. It makes sense for developers, because new elements have access to properties and methods that have been defined and tested for many years; and it makes sense for users, as they have fallback in case of JS failure, and baked-in accessibility fundamentals.

But now it looks like this superpower of custom elements is being nipped in the bud:

It also does not address subclassing normal elements. Again, while that seems desirable the current ideas are not attractive long term solutions. Punting on it in order to ship a v1 available everywhere seems preferable.

Now, I’m not particularly wedded to the syntax of using the is="" attribute to extend existing elements …but I do think that the ability to extend existing elements declaratively is vital. I’m not alone, although I may very well be in the minority.

Bruce has outlined some use cases and Steve Faulkner has enumerated the benefits of declarative extensibility:

I think being able to extend existing elements has potential value to developers far beyond accessibility (it just so happens that accessibility is helped a lot by re-use of existing HTML features.)

Bruce concurs:

Like Steve, I’ve no particular affection (or enmity) towards the <input type="radio" is="luscious-radio"> syntax. But I’d like to know, if it’s dropped, how progressive enhancement can be achieved so we don’t lock out users of browsers that don’t have web components capabilities, JavaScript disabled or proxy browsers. If there is a concrete plan, please point me to it. If there isn’t, it’s irresponsible to drop a method that we can see working in the example above with nothing else to replace it.

He adds:

I also have a niggling worry that this may affect the uptake of web components.

I think he’s absolutely right. I think there are many developers out there in a similar position to me, uncertain exactly what to make of this new technology. I was looking forward to getting really stuck into Web Components and figuring out ways of creating powerful little extensions that I could start using now. But if Web Components turn out to be an all-or-nothing technology—a “platform”, if you will—then I will not only not be using them, I’ll be actively arguing against their use.

I really hope that doesn’t happen, but I must admit I’m not hopeful—my inner pendulum has swung firmly back towards the nervous end of my anticipation spectrum. That’s because I’m getting the distinct impression that the priorities being considered for Web Components are those of JavaScript framework creators, rather than web developers looking to add incremental improvements while maintaining backward compatibility.

If that’s the case, then Web Components will be made in the image of existing monolithic MVC frameworks that require JavaScript to do anything, even rendering content. To me, that’s a dystopian vision, one I can’t get behind.

Responsible Web Components

Bruce has written a great article called On the accessibility of web components. Again. In it, he takes issue with the tone of a recent presentation on web components, wherein Dimitri Glazkov declares:

Custom elements is really neat. It basically says, “HTML it’s been a pleasure”.

Bruce paraphrases this as:

Bye-bye HTML; you weren’t useful enough. Hello, brave new world of custom elements.

Like Bruce, I’m worried about this year-zero thinking. First of all, I think it’s self-defeating. In my experience, the web technologies that succeed are the ones that build upon what already exists, rather than sweeping everything aside. Evolution, not revolution.

Secondly, web components—or more specifically, custom elements—already allow us to extend existing HTML elements. That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.

But, as Bruce asks:

Snarking aside, why do so few people talk about extending existing HTML elements with web components? Why’s all the talk about brand new custom elements? I don’t know.

Patrick leaves a comment with his answer:

The issue of not extending existing HTML elements is exactly the same that we’ve seen all this time, even before web components: developers who are tip-top JavaScripters, who already plan on doing all the visual feedback/interactions (for mouse users like themselves) in script anyway themselves, so they just opt for the most neutral starting point…a div or a span. Web components now simply gives the option of then sweeping all that non-semantic junk under a nice, self-contained rug.

That’s a depressing thought. But it might very well be true.

Stuart also comments:

Why aren’t web components required to be created with is="some-component" on an existing HTML element? This seems like an obvious approach; sure, someone who wants to make something meaningless will just do <div is=my-thing> or (worse) <body is=my-thing> but it would provide a pretty heavy hint that you’re supposed to be doing things The Right Way, and you’d get basic accessibility stuff thrown in for free.

That’s a good question. After all, writing <new-shiny></new-shiny> is basically the same as <span is="new-shiny"></span>. It might not make much of a difference in the case of a span or div, but it could make an enormous difference in the case of, say, form elements.
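
To see the difference, compare what each starting point gives you for free (the element name is invented):

<!-- Starts from nothing: no focus, no keyboard support, no form submission -->
<fancy-checkbox></fancy-checkbox>

<!-- Starts from a real checkbox: all of the above comes baked in -->
<input type="checkbox" is="fancy-checkbox">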

Take a look at IBM’s library of web components. They’re well-written and they look good, but time and time again, they create new custom elements instead of extending existing HTML.

Although, as Bruce points out:

Of course, not every new element you’ll want to make can extend an existing HTML element.

But I still think that the majority of web components could, and should, extend existing elements. Addy Osmani has put together some design principles for web components and Steve Faulkner has created a handy punch-list for web components, but I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”

Rather than just complain about this kind of thing, I figured I’d try my hand at putting it into practice…

Dave recently made a really nice web component for playing back podcast audio files. I could imagine using something like this on Huffduffer. It’s called podcast-player and you use it thusly:

<podcast-player src="my.mp3"></podcast-player>

One option for providing fallback content would be to include it within the custom element tags:

<podcast-player src="my.mp3">
    <a href="my.mp3">Listen</a>
</podcast-player>

That would require minimal changes to Dave’s code. I’d just need to make sure that the fallback content within podcast-player elements is removed in supporting browsers.

I forked Dave’s code to try out another idea. I figured that if the starting point was a regular link to the audio file, that would also be a way of providing fallback for browsers that don’t cut the web component mustard:

<a href="my.mp3" is="podcast-player">Listen</a>

It required surprisingly few changes to the code. I needed to remove the fallback content (that “Listen” text), and I needed to prevent the default behaviour (following the href), but it was fairly straightforward.
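
The heart of the change, roughly sketched (not the actual diff):

<script>
class PodcastPlayer extends HTMLAnchorElement {
    connectedCallback() {
        // Remove the fallback content…
        this.textContent = '';
        // …and prevent the default behaviour of following the href.
        this.addEventListener('click', function (event) {
            event.preventDefault();
            // Build the player interface here, using this.href as the source.
        });
    }
}
customElements.define('podcast-player', PodcastPlayer, { extends: 'a' });
</script>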

However, I’m sure it could be improved in one of two ways:

  1. I should probably supply an ARIA role to the extended link. I’m not sure what would be the right one, though …menu or menubar perhaps?
  2. Perhaps a link isn’t the right element to extend. Really I should be extending an audio element (which itself allows for fallback content). When I tried that, I found it too hard to overcome the default browser rules for hiding anything between the opening and closing tags. But you’re smarter than me, so I bet you could create <audio is="podcast-player">.

Fork the code and have at it.

Celebrating CSS

Cascading Style Sheets turned 20 years old this week. Happy birthtime, CeeSusS!

Bruce interviewed Håkon about the creation of CSS, and it makes for fascinating reading. If you want to dig even deeper, here’s Håkon’s PhD thesis comparing competing approaches to style sheets.

CSS gets a tough rap. I remember talking to Douglas Crockford about CSS. I’ll paraphrase his stance as “Kill it with fire!” To be fair, he was mostly talking about the lack of a decent layout system in CSS—something that’s only really getting remedied now.

Most of the flak directed at CSS comes from smart programmers, decrying its lack of power. As a declarative language, it lacks even the most basic features of the simplest procedural language. How are serious programmers supposed to write their serious programmes with such a primitive feature set?

But I think this mindset misses out a crucial facet of understanding CSS: it’s not about us. By us, I mean professional web developers. And when I say it’s not about us, I mean it’s not only about us.

The web is for everyone. That doesn’t just mean that it’s for everyone to use—the web is for everyone to create. That means that the core building blocks of the web need to be learnable by everyone, not just programmers.

I get nervous when I see web browsers gaining powerful features that can only be accessed via a JavaScript API. Geolocation is one example: it doesn’t have any declarative equivalent to its JavaScript implementation. Counter-examples would be video and audio: you can use the JavaScript API to get exactly the behaviour you want, if you’ve got that level of knowledge …or you can use the video and audio elements if you’re okay with letting web browsers handle the complexity of display and playback.

I think that CSS hits a nice sweet spot, balancing learnability and power. I love the fact that every bit of CSS ever written comes down to the same basic pattern:

selector {
    property: value;
}

That’s it!

How amazing is it that one simple pattern can scale to encompass a whole wide world of visual design variety?

Think about the revolution that CSS has gone through in recent years: OOCSS, SMACSS, BEM …these are fundamentally new ways of approaching front-end development, and yet none of these approaches required any changes to be made to the CSS specification. The power and flexibility was already available within its simple selector-property-value pattern.

Mind you, that modularity was compromised when we got things like named animations: a pattern that breaks out of the encapsulation model of CSS. Variables in CSS also break out of the modularity pattern.

Personally, I don’t think there’s any reason to have variables in the CSS language; it’s enough to have them in pre-processing tools. Variables add enormous value for developers, and no value at all for end users. As long as developers can use variables—and they can, with Sass and LESS—I don’t think we need to further complicate CSS.
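
To illustrate the point: a pre-processor variable exists only at authoring time, and compiles away to plain values before any end user ever sees it (names invented):

/* At authoring time, in Sass: */
$brand-colour: #bada55;
h1 { color: $brand-colour; }

/* What actually ships to the browser: */
h1 { color: #bada55; }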

Bert Bos wrote an exhaustive list of design principles for web standards. There’s some crossover with Tim Berners-Lee’s principles of design, with ideas such as modularity and robustness. Personally, I think that Bert and Håkon did a pretty damn good job of balancing principles like learnability, extensibility, longevity, interoperability and a host of other factors while still producing something powerful enough to scale for the whole web.

There’s one important phrase I want to highlight in the abstract of the 20 year old CSS proposal:

The proposed scheme provides a simple mapping between HTML elements and presentation hints.

Hints.

Every line of CSS you write is a suggestion. You are not dictating how the HTML should be rendered; you are suggesting how the HTML should be rendered. I find that to be a very liberating and empowering idea.

My only regret is that—twenty years on from the birth of CSS—web browsers are killing the very idea of user stylesheets. Along with “view source”, this feature really drove home the idea that professional web developers are not the only ones who have a say in what gets rendered in web browsers …and that the web truly is for everyone.

Web Components

The Extensible Web Summit is taking place in Berlin today because Web Components are that important. I wish I could be there, but I’ll make do with the live notes, the IRC channel, and the octothorpe tag.

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous. That’s probably a good sign.

Here’s what I wrote after the last TAG meetup in London:

This really is a radically new and different way of adding features to browsers. In theory, it shifts the balance of power much more to developers (who currently have to hack together everything using JavaScript). If it works, it will be A Good Thing and result in expanding HTML’s vocabulary with genuinely useful features. I fear there may be a rocky transition to this new way of thinking, and I worry about backwards compatibility, but I can’t help but admire the audacity of the plan.

And here’s what I wrote after the Edge conference:

If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once.

To explain…

The exciting thing about Web Components is that they give developers as much power as browser makers.

The frightening thing about Web Components is that they give developers as much power as browser makers.

When browser makers—and other contributors to web standards—team up to hammer out new features in HTML, they have design principles to guide them …at least in theory. First and foremost—because this is the web, not some fly-by-night “platform”—is the issue of compatibility:

Support existing content

Degrade gracefully

You can see those principles at work with newly-minted elements like canvas, audio, and video, where fallback content can be placed between the opening and closing tags so that older user agents aren’t left high and dry (which, in turn, encourages developers to start using these features long before they’re universally supported).
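
For example, with a file name invented:

<video src="movie.mp4" controls>
    <!-- Older user agents render this content instead -->
    <a href="movie.mp4">Download the video</a>
</video>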

You can see those principles at work in the design of datalist.

You can see those principles at work in the design of new form features which make use of the fact that browsers treat unknown input types as type="text" (again, encouraging developers to start using the new input types long before they’re supported in every browser).
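
In markup, those last two patterns look something like this (names and values invented):

<!-- Older browsers ignore the datalist and simply show a plain text field -->
<input list="countries" name="country">
<datalist id="countries">
    <option value="Ireland">
    <option value="Germany">
</datalist>

<!-- Older browsers treat the unknown type as type="text" -->
<input type="date" name="dob">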

When developers are creating new Web Components, they could apply that same amount of thought and care; Chris Scott has demonstrated just such a pattern. Switching to Web Components does not mean abandoning progressive enhancement. If anything they provide the opportunity to create whole new levels of experience.

Web developers could ensure that their Web Components degrade gracefully in older browsers that don’t support Web Components (and no, “just polyfill it” is not a sustainable solution) or, for that matter, situations where JavaScript—for whatever reason—is not available.

Web developers could ensure that their Web Components are accessible, using appropriate ARIA properties.

But I fear that Sturgeon’s Law is going to dominate Web Components. The comparison that’s often cited for Web Components is the creation of jQuery plug-ins. And let’s face it, 90% of jQuery plug-ins are crap.

This wouldn’t matter so much if developers were only shooting themselves in the foot, but because of the wonderful spirit of sharing on the web, we might well end up shooting others in the foot too:

  1. I make something (to solve a problem).
  2. I’m excited about it.
  3. I share it.
  4. Others copy and paste what I’ve made.

Most of the time, that’s absolutely fantastic. But if the copying and pasting happens without critical appraisal, a lot of questionable decisions can get propagated very quickly.

To give you an example…

When Apple introduced the iPhone, it provided a mechanism to specify that a web page shouldn’t be displayed in a zoomed-out view. That mechanism, which Apple pulled out of their ass without going through any kind of standardisation process, was to use the meta element with a name of “viewport”:

<meta name="viewport" content="...">

The content attribute of a meta element takes a comma-separated list of values (think of name="keywords": you provide a comma-separated list of keywords). But in an early tutorial about the viewport value, code was provided which showed values separated with semicolons (like CSS declarations). People copied and pasted that code (which actually did work in Mobile Safari) and so now every browser must support that usage:

Many other mobile browsers now support this tag, although it is not part of any web standard. Apple’s documentation does a good job explaining how web developers can use this tag, but we had to do some detective work to figure out exactly how to implement it in Fennec. For example, Safari’s documentation says the content is a “comma-delimited list,” but existing browsers and web pages use any mix of commas, semicolons, and spaces as separators.
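
Which means that both of these variants now have to work (typical values shown):

<!-- As documented: comma-separated -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- As copied and pasted: semicolon-separated -->
<meta name="viewport" content="width=device-width; initial-scale=1">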

Anyway, that’s just one illustration of how code gets shared, copied and pasted. It’s especially crucial during the introduction of a new technology to try to make sure that the code getting passed around is of a high quality.

I feel kind of bad saying this because the introductory phase of any new technology should be a time to say “Hey, go crazy! Try stuff out! See what works and what doesn’t!” but because Web Components are so powerful I think that mindset could end up doing a lot of damage.

Web developers have been given powerful features in the past. Vendor prefixes in CSS were a powerful feature that allowed browsers to push the boundaries of CSS without creating a Tower of Babel of proprietary properties. But because developers just copied and pasted code, browser makers are now having to support prefixes that were originally scoped to different rendering engines. That’s not the fault of the browser makers. That’s the fault of web developers.

With Web Components, we are being given a lot of rope. We can either hang ourselves with it, or we can make awesome …rope …structures …out of rope this analogy really isn’t working.

I’m not suggesting we have some kind of central authority that gets to sit in judgement on which Web Components pass muster (although Addy’s FIRST principles are a great starting point). Instead I think a web of trust will emerge.

If I see a Web Component published by somebody at Paciello Group, I can be pretty sure that it will be accessible. Likewise, if Christian publishes a Web Component, it’s a good bet that it will use progressive enhancement. And if any of the superhumans at Filament Group share a Web Component, it’s bound to be accessible, performant, and well thought-out.

Because—as is so often the case on the web—it’s not really about technologies at all. It’s about people.

And it’s precisely because it’s about people that I’m so excited about Web Components …and simultaneously so nervous about Web Components.