
Tuesday, October 9th, 2018

AddyOsmani.com - Start Performance Budgeting

Great ideas from Addy on where to start with creating a performance budget that can act as a red line you don’t want to cross.

If it’s worth getting fast, it’s worth staying fast.

Saturday, October 6th, 2018

An nth-letter selector in CSS

Variable fonts are a very exciting and powerful new addition to the toolbox of web design. They were very much at the centre of discussion at this year’s Ampersand conference.

A lot of the demonstrations of the power of variable fonts are showing how it can be used to make letter-by-letter adjustments. The Ampersand website itself does this with the logo. See also: the brilliant demos by Mandy. It’s getting to the point where logotypes can be sculpted and adjusted just-so using CSS and raw text—no images required.

I find this to be thrilling, but there’s a fly in the ointment. In order to style something in CSS, you need a selector to target it. If you’re going to style individual letters, you need to wrap each one in an HTML element so that you can then select it in CSS.

For the Ampersand logo, we had to wrap each letter in a span (and then, because that might cause each letter to be read out individually instead of all of them as a single word, we applied some ARIA shenanigans to the containing element). There’s even a JavaScript library—Splitting.js—that will do this for you.
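Here’s a minimal sketch of that kind of ARIA patch-up—one common approach, not necessarily exactly what the Ampersand site does:

  <!-- Hide the individual letters from assistive technology
       and supply the whole word via aria-label instead -->
  <h1 aria-label="Amp">
    <span aria-hidden="true">A</span><span aria-hidden="true">m</span><span aria-hidden="true">p</span>
  </h1>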

But if the whole point of using HTML is that the content is accessible, copyable, and pastable, isn’t it a bit of a shame that we then compromise the markup—and the accessibility—by wrapping individual letters in presentational tags?

What if there were an ::nth-letter selector in CSS?

There’s some prior art here. We’ve already got ::first-letter (and now the initial-letter property or whatever it ends up being called). If we can target the first letter in a piece of text, why not the second, or third, or nth?
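To make the wish concrete, usage might look something like this—entirely hypothetical syntax that no browser implements:

  /* Hypothetical: style the second letter of every heading */
  h1::nth-letter(2) {
    font-variation-settings: "wght" 750, "wdth" 125;
  }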

It raises some questions. What constitutes a letter? Would it be better if we talked about ::first-character, ::initial-character, ::nth-character, and so on?

Even then, there are some tricksy things to figure out. What’s the third character in this piece of markup?

<p>AB<span>CD</span>EF</p>

Is it “C”, because that’s the third character regardless of nesting? Or is it “E”, because technically that’s the third character token that’s a direct child of the parent element?

I imagine that implementing ::nth-letter (or ::nth-character) would be quite complex so there would probably be very little appetite for it from browser makers. But it doesn’t seem as problematic as some selectors we’ve already got.

Take ::first-line, for example. It falls foul of one of the biggest objections to adding new CSS selectors: it’s a selector that depends on layout.

Think about it. The browser has to first calculate how many characters are in the first line of an element (like, say, a paragraph). Having figured that out, the browser can then apply the styles declared in the ::first-line selector. But those styles may involve font sizing updates that change the number of characters in the first line. Paradox!

(Technically, only a subset of CSS properties can be applied to ::first-line, but that subset includes font-size so the paradox remains.)
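A tiny demonstration of that circularity—this, unlike ::nth-letter, is valid CSS today:

  /* Enlarging the first line changes how many characters fit on
     the first line…which changes which characters this rule
     selects */
  p::first-line {
    font-size: 200%;
  }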

I checked to see if ::first-line was included in one of my favourite documents: Incomplete List of Mistakes in the Design of CSS. It isn’t.

So compared to the logic-bending paradoxes of ::first-line, an ::nth-letter selector would be relatively straightforward. But that in itself isn’t a good enough reason for it to exist. As the CSS Working Group FAQs say:

The fact that we’ve made one mistake isn’t an argument for repeating the mistake.

A new selector needs to solve specific use cases. I would argue that all the letter-by-letter uses of variable fonts that we’re seeing demonstrate the use cases, and the number of these examples is only going to increase. The very fact that JavaScript libraries exist to solve this problem shows that there’s a need here (and we’ve seen the pattern of common JavaScript use-cases ending up in CSS before—rollovers, animation, etc.).

Now, I know that browser makers would like us to figure out how proposed CSS features should work by polyfilling a solution with Houdini. But would that work for a selector? I don’t know much about Houdini so I asked Una. She pointed me to a proposal by Greg and Tab for a full-on parser in Houdini. But that’s a loooong way off. Until then, we must plead our case to the browser gods.

This is not a new suggestion.

Anne van Kesteren proposed ::nth-letter way back in 2003:

While I’m talking about CSS, I would also like to have ::nth-line(n), ::nth-letter(n) and ::nth-word(n), any thoughts?

Trent called for ::nth-letter in January 2011:

I think this would be the ideal solution from a web designer’s perspective. No javascript would be required, and 100% of the styling would be handled right where it should be—in the CSS.

Chris repeated the call in October of 2011:

Of all of these “new” selectors, ::nth-letter is likely the most useful.

In 2012, Bram linked to a blog post (now unavailable) from Adobe indicating that they were working on ::nth-letter for Webkit. That was the last anyone’s seen of this elusive pseudo-element.

In 2013, Chris (again) included ::nth-letter in his wishlist for CSS. So say we all.

Monday, September 24th, 2018

Salty JavaScript analogy - HankChizlJaw

JavaScript is like salt. If you add just enough salt to a dish, it’ll help make the flavour awesome. Add too much though, and you’ll completely ruin it.

Thursday, September 20th, 2018

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are the easy ones—you can get them using nothing more than browser dev tools:

  • overall file size (page weight + assets), and
  • number of requests.
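Both of those numbers, for example, can be pulled out of the Resource Timing API—a rough sketch to run in the console:

  // Resources fetched by the page (the HTML document itself lives
  // under a separate 'navigation' entry)
  const resources = performance.getEntriesByType('resource');
  // transferSize is 0 for cross-origin responses that don't send
  // a Timing-Allow-Origin header
  const bytes = resources.reduce((total, r) => total + (r.transferSize || 0), 0);
  console.log(resources.length + ' requests, ' + (bytes / 1024).toFixed(1) + ' KB transferred');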

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.
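The first two of those moments can be read straight out of the browser’s timing APIs; the two “meaningful” moments are judgement calls that tools estimate for you. A rough console sketch, assuming a browser that supports the Navigation Timing and Paint Timing APIs:

  // Time to first byte
  const [nav] = performance.getEntriesByType('navigation');
  console.log('first byte: ' + nav.responseStart.toFixed(0) + 'ms');

  // First paint and first contentful paint (first *meaningful*
  // paint isn't exposed here—tools like WebPageTest estimate it)
  for (const entry of performance.getEntriesByType('paint')) {
    console.log(entry.name + ': ' + entry.startTime.toFixed(0) + 'ms');
  }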

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

First visit and repeat visit factors:

1st byte
  • First visit: server speed, network speed
  • Repeat visit: server speed, network speed, caching
1st render
  • First visit: network speed, critical path assets
  • Repeat visit: network speed, critical path assets, caching
1st meaningful paint
  • First visit: network speed, font-loading strategy, image optimisation
  • Repeat visit: network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction
  • First visit: network speed, device processing power, JavaScript size
  • Repeat visit: network speed, device processing power, JavaScript size, caching

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

                            First visit    Repeat visit
1st byte                    1.476 seconds  1.215 seconds
1st render                  2.633 seconds  1.930 seconds
1st meaningful paint        2.633 seconds  1.930 seconds
1st meaningful interaction  2.868 seconds  2.083 seconds

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
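That deferring could be done with the well-worn preload pattern—a sketch, with a made-up stylesheet URL:

  <head>
    <style>
      /* critical styles inlined here */
    </style>
    <!-- fetch the full stylesheet without blocking first render -->
    <link rel="preload" href="/styles.css" as="style"
          onload="this.onload=null;this.rel='stylesheet'">
    <noscript><link rel="stylesheet" href="/styles.css"></noscript>
  </head>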

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

                            First visit goal  Repeat visit goal
1st byte                    1.476 seconds     1.215 seconds
1st render                  2.133 seconds     1.430 seconds
1st meaningful paint        2.133 seconds     1.430 seconds
1st meaningful interaction  2.368 seconds     1.583 seconds

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

                            Priority
1st byte                    Medium
1st render                  High
1st meaningful paint        Low
1st meaningful interaction  Low

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

Wednesday, September 19th, 2018

HTML elements, unite! The Voltron-like powers of combining elements. | CSS-Tricks

This great post by Mandy ticks all my boxes! It’s a look at the combinatorial possibilities of some of the lesser-known HTML elements: abbr, cite, code, dfn, figure, figcaption, kbd, samp, and var.

Friday, September 14th, 2018

Breaking the Deadlock Between User Experience and Developer Experience · An A List Apart Article

Yes! Yes! Yes!

Our efforts to measure and improve UX are packed with tragically ironic attempts to love our users: we try to find ways to improve our app experiences by bloating them with analytics, split testing, behavioral analysis, and Net Promoter Score popovers. We stack plugins on top of third-party libraries on top of frameworks in the name of making websites “better”—whether it’s something misguided, like adding a carousel to appease some executive’s burning desire to get everything “above the fold,” or something truly intended to help people, like a support chat overlay. Often the net result is a slower page load, a frustrating experience, and/or (usually “and”) a ton of extra code and assets transferred to the browser.

Even tools that are supposed to help measure performance in order to make improvements—like, say, Real User Monitoring—require you to add a script to your web pages …thereby increasing the file size and degrading performance! It’s ironic, in that Alanis Morissette sense of not understanding what irony is.

Stacking tools upon tools may solve our problems, but it’s creating a Jenga tower of problems for our users.

This is a great article about evaluating technology.

Sunday, September 9th, 2018

Removing jQuery from GitHub.com frontend | GitHub Engineering

You really don’t need jQuery any more …and that’s thanks to jQuery.

Here, the GitHub team talk through their process of swapping out jQuery for vanilla JavaScript, as well as their forays into web components (or at least the custom elements bit).

The Web Is Agreement

This presentation on web standards was delivered at the State Of The Browser conference in London in September 2018.

Web standards don’t exist.

At least, they don’t physically exist. They are intangible.

They’re in good company.

Feelings are intangible, but real. Hope. Despair.

Ideas are intangible: liberty, justice, socialism, capitalism.

The economy. Currency. All intangible. I’m sure we’ve all had those “college thoughts”:

Money isn’t real, man! They’re just bits of metal and pieces of paper! Wake up, sheeple!

Nations are intangible. Geographically, France is a tangible, physical place. But France, the Republic, is an idea. Geographically, North America is a real, tangible, physical land mass. But ideas like “Canada” and “The United States” only exist in our minds.

Faith—the feeling—is intangible.

God—the idea—is intangible.

Art—the concept—is intangible.

A piece of art is an instantiation of the intangible concept of what art is.

Incidentally, I quite like Brian Eno’s working definition of what art is. Art is anything we don’t have to do. We don’t have to make paintings, or sculptures, or films, or music. We have to clothe ourselves for practical reasons, but we don’t have to make clothes beautiful. We have to prepare food to eat it, but don’t have to make it a joyous event.

By this definition, sports are also art. We don’t have to play football. Sports are also intangible.

A game of football is an instantiation of the intangible idea of what football is.

Football, chess, rugby, quidditch and rollerball are equally (in)tangible. But football, chess and rugby have more consensus.

(Christianity, Islam, Judaism, and The Force are equally intangible, but Christianity, Islam, and Judaism have a bit more consensus than The Force).

HTML is intangible.

Markup.

A web page is an instantiation of the intangible idea of what HTML is.

But we can document our shared consensus.

A rule book for football is like a web standard specification. A documentation of consensus.

By the way, economics, religions, sports and laws are all examples of intangibles that can’t be proven, because they all rely on their own internal logic—there is no outside data that can prove football or Hinduism or capitalism to be “true”. That’s very different to ideas like gravity, evolution, relativity, or germ theory—they are all intangible but provable. They are discovered, rather than created. They are part of objective reality.

Consensus reality is the collection of intangibles that we collectively agree to be true: economy, religion, law, web standards.

We treat consensus reality much the same as we treat objective reality: in our minds, football, capitalism, and Christianity are just as real as buildings, trees, and stars.

Sometimes consensus reality and objective reality get into fights.

Some people have tried to make a consensus reality around the accuracy of astrology or the efficacy of homeopathy, or ideas like the Earth being flat, 9-11 being an inside job, the moon landings being faked, the holocaust never having happened, or vaccines causing autism. These people are unfazed by objective reality, which disproves each one of these ideas.

For a long time, the consensus reality was that the sun revolved around the Earth. Copernicus and Galileo demonstrated that the objective reality was that the Earth (and all the other planets in our solar system) revolve around the sun. After the dust settled on that particular punch-up, we switched up our consensus reality. We changed the story.

Stories.

That’s another way of thinking about consensus reality: our currencies, our religions, our sports and our laws are stories that we collectively choose to believe.

Web standards are a collection of intangibles that we collectively agree to be true. They’re our stories. They’re our collective consensus reality. They are what web browsers agree to implement, and what we agree to use.

The web is agreement.

The Web Is Agreement by Paul Downey.

For human beings to collaborate together, they need a shared purpose. They must have a shared consensus reality—a shared story.

Once a group of people share a purpose, they can work together to establish principles.

Design principles are points of agreement. There are design principles underlying every human endeavour. Sometimes they are tacit. Sometimes they are written down.

Patterns emerge from principles.

Patterns upon Principles upon Purpose.

Here’s an example of a human endeavour: the creation of a nation state, like the United States of America.

  1. The purpose is agreed in the declaration of independence.
  2. The principles are documented in the constitution.
  3. The patterns emerge in the form of laws.

HTML elements, CSS features, and JavaScript APIs are all patterns (that we agree upon). Those patterns are informed by design principles.

HTML, CSS, and JavaScript.

I’ve been collecting design principles of varying quality at principles.adactio.com.

Here’s one of the design principles behind HTML5. It’s my personal favourite—the priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

“In case of conflict”—that’s exactly what a good design principle does! It establishes the boundaries of agreement. If you disagree with the design principles of a project, there probably isn’t much point contributing to that project.

Also, it’s reversible. You could imagine a different project that favoured theoretical purity above all else. In fact, that’s pretty much what XHTML 2 was all about.

XHTML 1 was simply HTML reformulated with the syntax of XML: lowercase tags, lowercase attributes, always quoting attribute values.

Remember HTML doesn’t care whether tags and attributes are uppercase or lowercase, or whether you put quotes around your attribute values. You can even leave out some closing tags.
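All of these, for instance, are perfectly good HTML:

  <!-- Fine in HTML: uppercase, unquoted attribute, no closing tag -->
  <P CLASS=intro>Hello, world

  <!-- XHTML 1 permitted only the stricter, XML-flavoured form -->
  <p class="intro">Hello, world</p>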

So XHTML 1 was actually kind of a nice bit of agreement: professional web developers agreed on using lowercase tags and attributes, and we agreed to quote our attributes. Browsers didn’t care one way or the other.

But XHTML 2 was going to take the error-handling model of XML and apply it to HTML. This is the error handling model of XML: if the parser encounters a single error, don’t render the document.

Of course nobody agreed to this. Browsers didn’t agree to implement XHTML 2. Developers didn’t agree to use it. It ceased to exist.

It turns out that creating a format is relatively straightforward. But how do you turn something into a standard? The really hard part is getting agreement.

Sturgeon’s Law states:

90% of everything is crap.

Coincidentally, 90% is also the percentage of the world’s crap that gets transported by ocean. Your clothes, your food, your furniture, your electronics …chances are that at some point they were transported within an intermodal container.

These shipping containers are probably the most visible—and certainly one of the most important—standards in the physical world. Before the use of intermodal containers, loading and unloading cargo from ships was a long, laborious, and dangerous task.

Along came Malcom McLean who realised that the whole process could be made an order of magnitude more efficient if the cargo were stored in containers that could be moved from ship to truck to train.

But he wasn’t the only one. The movement towards containerisation was already happening independently around the world. But everyone was using different sized containers with different kinds of fittings. If this continued, the result would be a tower of Babel instead of smoothly running global logistics.

Malcom McLean and his engineer Keith Tantlinger designed two crate sizes—20ft and 40ft—that would work for ships, trucks, and trains. Their design also incorporated an ingenious twistlock mechanism to secure containers together. But the extra step that would ensure that their design would win out was this: Tantlinger convinced McLean to give up the patent rights.

This wasn’t done out of any hippy-dippy ideology. These were hard-nosed businessmen. But they understood that a rising tide raises all boats, and they wanted all boats to be carrying the same kind of containers.

Without the threat of a patent lurking beneath the surface, ready to torpedo the potential benefits, the intermodal container went on to change the world economy. (The world economy is very large and intangible.)

The World Wide Web also ended up changing the world economy, and much more besides. And like the intermodal container, the World Wide Web is patent-free.

The first ever web server—Tim Berners-Lee’s NeXT machine.

Again, this was a pragmatic choice to help foster adoption. When Tim Berners-Lee and his colleague Robert Cailleau were trying to get people to use their World Wide Web project they faced some stiff competition. Lots of people were already using Gopher. Anyone remember Gopher?

The seemingly unstoppable growth of the Gopher protocol was somewhat hobbled in the early ’90s when the University of Minnesota announced that it was going to start charging fees for using it. This was a cautionary lesson for Berners-Lee and Cailleau. They wanted to make sure that CERN didn’t make the same mistake.

On April 30th, 1993, the code for the World Wide Web project was made freely available.

This is for everyone.

Robert Cailleau’s copy of the document that put the World Wide Web into the public domain.

If you’re trying to get people to adopt a standard or use a new hypertext system, the biggest obstacle you’re going to face is inertia. As the brilliant computer scientist Grace Hopper used to say:

The most dangerous phrase in the English language is “We’ve always done it this way.”

Grace Hopper.

Rear Admiral Grace Hopper waged war on business as usual. She was well aware how arbitrary business as usual is. Business as usual is simply the current state of our consensus reality. She said:

Humans are allergic to change.

I try to fight that.

That’s why I have a clock on my wall that runs counter‐clockwise.

Our clocks are a perfect example of a ubiquitous but arbitrary convention. Why should clocks run clockwise rather than counter-clockwise?

One neat explanation is that clocks are mimicking the movement of a shadow across the face of a sundial …in the Northern hemisphere. Had clocks been invented in the Southern hemisphere, they would indeed run counter-clockwise.

A sundial in the Southern hemisphere.

But on the clock face itself, why do we carve up time into 24 hours? Why are there 60 minutes in an hour? Why are there 60 seconds in a minute?

24 by 60 by 60.

It probably all goes back to Babylonian accountants. Early cuneiform tablets show that they used a sexagesimal system for counting—that’s because 60 is the lowest number that can be divided evenly by 6, 5, 4, 3, 2, and 1.

But we don’t count in base 60; we count in base 10. That in itself is arbitrary—we just happen to have a total of ten digits on our hands.

So if the sexagesimal system of telling time is an accident of accounting, and base ten is more widespread, why don’t we switch to a decimal timekeeping system?

It has been tried. The French revolution introduced not just a new decimal calendar—much neater than our base 12 calendar—but also decimal time. Each day had ten hours. Each hour had 100 minutes. Each minute had 100 seconds. So much better!

10 by 100 by 100.

It didn’t take. Humans are allergic to change. Sexagesimal time may be arbitrary and messy but …we’ve always done it this way.

Incidentally, this is also why I’m not holding my breath in anticipation of the USA ever switching to the metric system.

Instead of trying to completely change people’s behaviour, you’re likely to have more success by incrementally and subtly altering what people are used to.

That was certainly the case with the World Wide Web.

The Hypertext Transfer Protocol sits on top of the existing TCP/IP stack.

The key building block of the web is the URL. But instead of creating an entirely new addressing scheme, the web uses the existing Domain Name System.

HTTP upon TCP/IP; URLs upon DNS.

Then there’s the lingua franca of the World Wide Web. These elements probably look familiar to you:

<body> <title> <p> <h1> <h2> <h3> <ol> <ul> <li> <dl> <dt> <dd>

You recognise this language, right? That’s right—it’s SGML. Standard Generalised Markup Language.

Specifically, it’s CERN SGML—a flavour of SGML that was already popular at CERN when Tim Berners-Lee was working on the World Wide Web project. He used this vocabulary as the basis for the HyperText Markup Language.

Because this vocabulary was already familiar to people at CERN, convincing them to use HTML wasn’t too much of a hard sell. They could take an existing SGML document, change the file extension to .htm and it would work in one of those newfangled web browsers.

HTML upon SGML.

In fact, HTML worked better than expected. The initial idea was that HTML pages would be little more than indices that pointed to other files containing the real meat and potatoes of content—spreadsheets, word processing documents, whatever. But to everyone’s surprise, people started writing and publishing content in HTML.

Was HTML the best format? Far from it. But it was just good enough and easy enough to get the job done.

It has since changed, but that change has happened according to another design principle:

Evolution, not revolution

From its humble beginnings with the handful of elements borrowed from CERN SGML, HTML has grown to encompass an additional 100 elements over its lifespan. And yet, it’s still technically the same format!

This is a classic example of the paradox called the Ship Of Theseus, also known as Trigger’s Broom.

You can take an HTML document written over two decades ago, and open it in a browser today.

Even more astonishing, you can take an HTML document written today and open it in a browser from two decades ago. That’s because the error-handling model of HTML has always been to simply ignore any tags it doesn’t recognise and render the content inside them.
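You can see that forgiving behaviour with any element an old browser has never heard of:

  <!-- A browser that predates <main> ignores the unknown tags
       but still renders the content inside them -->
  <main>
    <p>This paragraph shows up even in a two-decade-old browser.</p>
  </main>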

That pattern of behaviour is a direct result of the design principle:

Degrade Gracefully

…document conformance requirements should be designed so that Web content can degrade gracefully in older or less capable user agents, even when making use of new elements, attributes, APIs and content models.

Here’s a picture from 2006.

Tantek Çelik, Brian Suda, Ryan King, Chris Messina, Mark Norman Francis, and Jeremy Keith.

That’s me in the cowboy hat—the picture was taken in Austin, Texas. This is an impromptu gathering of people involved in the microformats community.

Microformats, like any other standards, are sets of agreements. In this case, they’re agreements on which class values to use to mark up some of the missing elements from HTML—people, places, and events. That’s pretty much it.

And yes, they do have design principles—some very good ones—but that’s not why I’m showing this picture.

Some of the people in this picture—Tantek Çelik, Ryan King, and Chris Messina—were involved in the creation of BarCamp, a series of grassroots geek gatherings.

BarCamps sound like they shouldn’t work, but they do. The schedule for the event is arrived at collectively at the beginning of the gathering. It’s kind of amazing how the agreement emerges—rough consensus and running events.

In the run-up to a BarCamp in 2007, Chris Messina posted this message to the fledgeling social networking site, twitter.com:

how do you feel about using # (pound) for groups. As in #barcamp [msg]?

This was when tagging was all the rage. We were all about folksonomies back then. Chris proposed that we would call this a “hashtag”.

I wasn’t a fan:

Thinking that hashtags disrupt the reading flow of natural language. Sorry @factoryjoe

But it didn’t matter what I thought. People agreed to this convention, and after a while Twitter began turning the hashtagged words into links.

In doing so, they were following another HTML design principle:

Pave the cowpaths

It sounds like advice for agrarian architects, but its meaning is clarified:

When a practice is already widespread among authors, consider adopting it rather than forbidding it or inventing something new.

Twitter had previously paved a cowpath when people started prefacing usernames with the @ symbol. That convention didn’t come from Twitter, but they didn’t try to stop it. They rolled with it, and turned any username prefaced with an @ symbol into a link.

The asperand symbol.

The @ symbol made sense because people were used to using it from email. The choice to use that symbol in email addresses was made by Ray Tomlinson. He needed a symbol to separate the person and the domain, looked down at his keyboard, saw the @ symbol, and thought “that’ll do.”

Perhaps Chris followed a similar process when he proposed the symbol for the hashtag.

The octothorpe symbol.

It could have just as easily been called a “number tag” or “octothorpe tag” or “pound tag”.

This symbol started life as a shortcut for “pound”, or more specifically “libra pondo”, meaning a pound in weight. Libra pondo was abbreviated to lb when written. That got turned into a ligature ℔ when written hastily. That shape was the common ancestor of two symbols we use today: £ and #.

The eight-pointed symbol was (perhaps jokingly) renamed the octothorpe in the 1960s when it was added to telephone keypads. It’s still there on the digital keypad of your mobile phone. If you were to ask someone born in this millennium what that key is called, they would probably tell you it’s the hashtag key. And if they’re learning to read sheet music, I’ve heard tell that they refer to the sharp notes as hashtag notes.

If this upsets you, you might be the kind of person who rages at the word “literally” being used to mean “figuratively” or supermarkets with aisles for “10 items or less” instead of “10 items or fewer”.

Tough luck. The English language is agreement. That’s why English dictionaries exist not to dictate usage of the language, but to document usage.

It’s much the same with web standards bodies. They don’t carve the standards into tablets of stone and then come down the mountain to distribute them amongst the browsers. No, it’s what the browsers implement that gets carved in stone. That’s why it’s so important that browsers are in agreement. In the bad old days of the browser wars of the late 90s, we saw what happened when browsers implemented their own proprietary features.

Standards require interoperability.

Interoperability requires agreement.

Standards atop Interoperability atop Agreement.

So what can we learn from the history of standardisation?

Well, there are some direct lessons from the HTML design principles.

The priority of constituencies

Consider users over authors…

Listen, I want developer convenience as much as the next developer. But never at the expense of user needs.

I’ve often said that if I have the choice between making something my problem, and making it the user’s problem, I’ll make it my problem every time. That’s the job.

I worry that these days developer convenience is sometimes prized more highly than user needs. I think we could all use a priority of constituencies on every project we work on, and I would hope that we would prioritise users over authors.

Degrade gracefully

Web content can degrade gracefully in older or less capable user agents…

I know that I go on about progressive enhancement a lot. Sometimes I make it sound like a silver bullet. Well, it kinda is.

I mean, you can’t just buy a bullet made of silver—you have to make it yourself. If you’re not used to crafting bullets from silver, it will take some getting used to.

Again, if developer convenience is your priority, silver bullets are hard to justify. But if you’re prioritising users over authors, progressive enhancement is the logical methodology to use.

Evolution, not revolution

It’s a testament to the power and flexibility of the web that we don’t have to build with progressive enhancement. We don’t have to build with a separation of concerns like structure, presentation, and behaviour.

We don’t have to use what the browser gives us: buttons, dropdowns, hyperlinks. If we want to, we can make these things from scratch using JavaScript, divs and ARIA attributes.
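A sketch of what that from-scratch route actually entails, compared with the built-in element:

  <!-- From scratch: you now own the focus handling, the keyboard
       handling (Enter and Space, not just clicks), and the semantics -->
  <div role="button" tabindex="0">Save</div>

  <!-- What the browser gives you, with all of that for free -->
  <button>Save</button>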

But why do that? Is it because those native buttons and dropdowns might be inconsistent from browser to browser?

Consistency is not the purpose of the world wide web.

Universality is the key principle underlying the web.

Our patterns should reflect the intent of the medium.

Use what the browser gives you—build on top of those agreements. Because that’s the bigger lesson to be learned from the history of web standards, clocks, containers, and hashtags.

Future atop Present atop Past.

Our world is made up of incremental improvements to what has come before. And that’s how we will push forward to a better tomorrow: By building on top of what we already have instead of trying to create something entirely from scratch. And by working together to get agreement instead of going it alone.

The future can be a frightening prospect, and I often get people asking me for advice on how they should prepare for the web’s future. Usually they’re thinking about which programming language or framework or library they should be investing their time in. But these specific patterns matter much less than the broader principles of working together, collaborating and coming to agreement. It’s kind of insulting that we refer to these as “soft skills”—they couldn’t be more important.

Working on the web, it’s easy to get downhearted by the seemingly ephemeral nature of what we build. None of it is “real”; none of it is tangible. And yet, looking at the history of civilisation, it’s the intangibles that survive: ideas, philosophies, culture and concepts.

The future can be frightening because it is intangible and unknown. But like all the intangible pieces of our consensus reality, the future is something we construct …through agreement.

Now let’s agree to go forward together to build the future web!

Thursday, September 6th, 2018

Chrome’s NOSCRIPT Intervention - TimKadlec.com

Testing time with Tim.

Long story short, the NOSCRIPT intervention looks like a really great feature for users. More often than not it provides significant reduction in data usage, not to mention the reduction in CPU time—no small thing for the many, many people running affordable, low-powered devices.

Unit deconverter - make your units less useful!

Take a perfectly useful standardised measurement of length, weight, speed or time, and convert to something far less useful (but much more fun).

Saturday, September 1st, 2018

Conversational Semantics · An A List Apart Article

I love, love, love all the little details of HTML that Aaron offers up here. And I really like how he positions non-visual user-agents like searchbots, screen readers, and voice assistants as headless UIs.

HTML is a truly robust and expressive language that is often overlooked and undervalued, but it has the incredible potential to nurture conversations with our users without requiring a lot of effort on our part. Simply taking the time to code web pages well will enable our sites to speak to our customers like they speak to each other. Thinking about how our sites are experienced as headless interfaces now will set the stage for more natural interactions between the real world and the digital one.

Monday, August 27th, 2018

Disable scripts for Data Saver users on slow connections - Chrome Platform Status

An excellent idea for people in low-bandwidth situations: automatically disable JavaScript. As long as the site is built with progressive enhancement, there’s no problem (and if not, the user is presented with the choice to enable scripts).

Power to the people!

Tuesday, August 21st, 2018

The Web I Want - DEV Community 👩‍💻👨‍💻

Scores of people who just want to deliver their content and have it look vaguely nice are convinced you need every web technology under the sun to deliver text.

This is very lawn-off-getting but I can relate.

I made my first website about 20 years ago and it delivered as much content as most websites today. It was more accessible, ran faster and was easier to develop than 90% of the stuff you’ll read on here.

20 years later I browse the Internet with a few tabs open and I have somehow downloaded many megabytes of data, my laptop is on fire and yet in terms of actual content delivery nothing has really changed.

Monday, August 13th, 2018

The power of progressive enhancement – No Divide – Medium

The beauty of this approach is that the site doesn’t ever appear broken and the user won’t even be aware that they are getting the ‘default’ experience. With progressive enhancement, every user has their own experience of the site, rather than an experience that the designers and developers demand of them.

A case study in applying progressive enhancement to all aspects of a site.

Progressive enhancement isn’t necessarily more work and it certainly isn’t a non-JavaScript fallback, it’s a change in how we think about our projects. A complete mindset change is required here and it starts by remembering that you don’t build websites for yourself, you build them for others.

Saturday, August 11th, 2018

Weft. — Ethan Marcotte

I think we often focus on designing or building an element, without researching the other elements it should connect to—without understanding the system it lives in.

Friday, August 10th, 2018

Creating the “Perfect” CSS System – Gusto Design – Medium

This is great advice from Lindsay Grizzard—getting agreement is so much more important than personal preference when it comes to collaborating on a design system.

When starting a project, get developers onboard with your CSS, JS and even HTML conventions from the start. Meet early and often to discuss every library, framework, mental model, and gem you are interested in using and take feedback seriously. Simply put, if they absolutely hate BEM and refuse to write it, don’t use BEM.

It’s all about the people, people!

Tuesday, August 7th, 2018

The Web is Made of Edge Cases by Taylor Hunt on CodePen

Oh, this is magnificent! A rallying call for everyone designing and developing on the web to avoid making any assumptions about the people we’re building for:

People will use your site how they want, and according to their means. That is wonderful, and why the Web was built.

I would even say that the % of people viewing your site the way you do rapidly approaches zilch.

Thursday, August 2nd, 2018

Accessibility: Start with the foundations | susan jean robertson

I encourage you to think about and make sure you are using the right elements at the right time. Sometimes I overthink this, but that’s because it’s that important to me - I want to make sure that the markup I use helps people understand the content, and doesn’t hinder them.

Tuesday, July 31st, 2018

CSS exclusions with Queen Bey

This great post by Hui Jing is ostensibly about CSS shapes and exclusions, but there’s a much broader message too:

Build demos, and play around with anything that seems remotely interesting. Even if that feature is in early stages, or only supported by 1 browser. And then talk about it, or write and tweet about your experience, your use cases, what you liked or disliked about it.

We can shape the web to what we want it to be, but only if we get involved.

The Accessibility of Styled Form Controls & Friends | a11y_styled_form_controls

A great collection of styled and accessible form elements:

Form controls are necessary in many interfaces, but are often considered annoying, if not downright difficult, to style. Many of the markup patterns presented here can serve as a baseline for building more attractive form controls without having to exclude users who may rely on assistive technology to get things done.