Tags: css

Starting out

I had a really enjoyable time at Codebar Brighton last week, not least because Morty came along.

I particularly enjoy teaching people who have zero previous experience of making a web page. There’s something about explaining HTML and CSS from first principles that appeals to me. I especially love it when people ask lots of questions. “What does this element do?”, “Why do some elements have closing tags and others don’t?”, “Why is it textarea and not input type="textarea"?” The answer usually involves me going down a rabbit-hole of web archeology, so I’m in my happy place.

But there’s only so much time at Codebar each week, so it’s nice to be able to point people to other resources that they can peruse at their leisure. It turns out that it’s actually kind of tricky to find resources at that level. There are lots of great articles and tutorials out there for professional web developers—Smashing Magazine, A List Apart, CSS Tricks, etc.—but not so much for complete beginners.

Here are some of the resources I’ve found:

  • MarkSheet by Jeremy Thomas is a free HTML and CSS tutorial. It starts with an explanation of the internet, then the World Wide Web, and then web browsers, before diving into HTML syntax. Jeremy is the same guy who recently made CSS Reference.
  • Learn to Code HTML & CSS by Shay Howe is another free online book. You can buy a paper copy too. It’s filled with good, clear explanations.
  • Zero to Hero Coding by Vera Deák is an ongoing series. She’s starting out on her career as a front-end developer, so her perspective is particularly valuable.

If I find any more handy resources, I’ll link to them and tag them with “learning”.

Between the braces

In a post called Side Effects in CSS that he wrote a while back, Philip Walton talks about different kinds of challenges in writing CSS:

There are two types of problems in CSS: cosmetic problems and architectural problems.

The cosmetic problems are solved by making something look the way you want it to. The architectural problems are trickier because they have more long-term effects—maintainability, modularity, encapsulation; all that tricky stuff. Philip goes on to say:

If I had to choose between hiring an amazing designer who could replicate even the most complicated visual challenges easily in code and someone who understood the nuances of predictable and maintainable CSS, I’d choose the latter in a heartbeat.

This resonates with something I noticed a while back while I was doing some code reviews. Most of the time when I’m analysing CSS and trying to figure out how “good” it is—and I know that’s very subjective—I’m concerned with what’s on the outside of the curly braces.

selector {
    property: value;
}

The stuff inside the curly braces—the properties and values—that’s where the cosmetic problems get solved. It’s also the stuff that you can look up; I certainly don’t try to store all possible CSS properties and values in my head. It’s also easy to evaluate: Does it make the thing look like you want it to look? Yes? Good. It works.

The stuff outside the curly braces—the selectors—that’s harder to judge. It needs to be evaluated with lots of “what ifs”: What if this selects something you didn’t intend to? What if the markup changes? What if someone else writes some CSS that negates this?

I find it fascinating that most of the innovation in CSS from the browser makers and standards bodies arrives in the form of new properties and values—flexbox, grid, shapes, viewport units, and so on. Meanwhile there’s a whole other world of problems to be solved outside the curly braces. There’s not much that the browser makers or standards bodies can do to help us there. I think that’s why most of the really interesting ideas and thoughts around CSS in recent years have focused on that challenge.

Talking about hypertext

#CSSday starts off with a great history lesson of our industry by @adactio

I’ve just published a transcript of the talk I gave at the HTML Special that preceded CSS Day a couple of weeks back. I’ve also recorded an audio version for your huffduffing pleasure.

It’s not like the usual talks I give. The subject matter was assigned to me, Mission Impossible style. PPK wanted each speaker to give an entire talk on just one HTML element. He offered me the best element of them all: the A element.

There were a few different directions I could’ve taken it. I could’ve tried to make it practical, but I quickly dismissed that idea. Instead I went in the completely opposite direction, making it as pretentious as possible. I figured a talk about hypertext could afford to be winding and circuitous, building on some of the ideas I wrote about in my piece for The Manual a few years back. It’s quite self-indulgent of me, but I used it as an opportunity to geek out about some of my favourite things; from Borges, Babbage, and Bletchley to Leibniz, Lovelace, and Licklider.

I wouldn’t usually write out an entire talk word-for-word in advance, but somehow it felt right for this one. In fact, my talk preparation this time ‘round was very similar to the process Charlotte recently wrote about:

  1. Get everything out of my head and onto a mind map.
  2. Write chunks of content in short bursts—this was when I was buddying up with Paul.
  3. Put together a slide deck of visuals to support the narrative.
  4. Practice delivering the talk so I don’t look like I’m just reading off a screen.

It takes me a long time to prepare talks. As the deadline for this one approached, I was getting quite panicked. It was touch and go there for a while, but I managed to get it done in time.

I’m pleased with how it turned out. On the day, I had fun delivering it. People seemed to like it too, which was gratifying.

Although with this kind of talk, it was inevitable that I wouldn’t be able to please everyone.

I guess this talk was a one-off affair. That said, if you’re putting on an event and you think this subject matter would be appropriate, let me know. I’d be more than happy to deliver it again.

Sticky headers

I made a little tweak to The Session today. The navigation bar across the top is “sticky” now—it doesn’t scroll with the rest of the content.

I made sure that the stickiness only kicks in if the screen is both wide and tall enough to warrant it. Vertical media queries are your friend!
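Something like this, for instance (a rough sketch, with a made-up class name and breakpoint values):

@media all and (min-width: 40em) and (min-height: 35em) {
    .site-header {
        position: fixed;
        top: 0;
        left: 0;
        width: 100%;
    }
    body {
        padding-top: 3em; /* stop the fixed header from covering the start of the content */
    }
}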

But it’s not enough to just put some position: fixed CSS inside a media query. There are some knock-on effects that I needed to mitigate.

I use the space bar to paginate through long pages. It drives me nuts when sites with sticky headers don’t accommodate this. I made use of Tim Murtaugh’s sticky pagination fixer. It makes sure that page-jumping with the keyboard (using the space bar or page down) still works. I remember when I linked to this script two years ago, thinking “I bet this will come in handy one day.” Past me was right!

The other “gotcha!” with having a sticky header is making sure that in-page anchors still work. Nicolas Gallagher covers the options for this in a post called Jump links and viewport positioning. Here’s the CSS I ended up using:

:target:before {
    content: '';
    display: block;
    height: 3em;
    margin: -3em 0 0;
}

I also needed to check any of my existing JavaScript to see if I was using scrollTo anywhere, and adjust the calculations to account for the newly-sticky header.
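In principle, that adjustment looks something like this (a sketch; the .site-header selector and the arithmetic are just illustrative):

function scrollToElement(element) {
    var header = document.querySelector('.site-header'); // hypothetical sticky header element
    var offset = header ? header.offsetHeight : 0;
    window.scrollTo(0, element.offsetTop - offset); // leave room for the header
}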

Anyway, just a few things to consider if you’re going to make a navigational element “sticky”:

  1. Use min-height in your media query,
  2. Take care of keyboard-initiated page scrolling,
  3. Adjust the positioning of in-page links.

Amsterdam Brighton Amsterdam

I’m about to have a crazy few days that will see me bouncing between Brighton and Amsterdam.

It starts tomorrow. I’m flying to Amsterdam in the morning and speaking at this Icons event in the afternoon about digital preservation and long-term thinking.

Then, the next morning, I’ll be opening up the inaugural HTML Special, which is a new addition to the CSS Day conference. Each talk on Thursday will cover one HTML element. I am honoured to be speaking about the A element. Here’s the talk description:

The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Enquire within upon everything.

I’ve been working all out to get this talk done and I finally wrapped it up today. Right now, I feel pretty happy with it, but I bet I’ll change that opinion in the next 48 hours. I’m pretty sure that this will be one of those talks that people will either love or hate, kind of like my 2008 dConstruct talk, The System Of The World.

After CSS Day, I’ll be heading back to Brighton on Saturday, June 18th to play a Salter Cane gig in The Greys pub. If you’re around, you should definitely come along—not only is it free, but there will be some excellent support courtesy of Jon London, and Lucas and King.

Then, the next morning, I’ll be speaking at DrupalCamp Brighton, opening up day two of the event. I won’t be able to stick around long afterwards though, because I need to skedaddle to the airport to go back to Amsterdam!

Google are having their Progressive Web App Dev Summit there on Monday and Tuesday. I’ll be moderating a panel on the second day, so I’ll need to pay close attention to all the talks. I’ll be grilling representatives from Google, Samsung, Opera, Microsoft, and Mozilla. Considering my recent rants about some very bad decisions on the part of Google’s Chrome team, it’s very brave of them to ask me to be there, much less moderate a panel in public.

You can still register for the event, by the way. Like the Salter Cane gig, it’s free. But if you can’t make it along, I’d still like to know what you think I should be asking the panelists about.

Got a burning question for browser/device makers? Write it down, post it somewhere on the web with a link back to this post, and then send me a web mention (there’s a form for you to paste in the URL at the bottom of this post).

Delay

Mobile browser vendors have faced a dilemma for quite a while. They’ve got this double-tap gesture that allows users to zoom in on part of a page (particularly handy on non-responsive sites). But that means that every time a user makes a single tap, the browser has to wait for just a moment to see if it’s followed by another tap. “Just a moment” in this case works out to be somewhere between 300 and 350 milliseconds. So every time a user is trying to click a link or press a button on a web page, there’s a slight but noticeable delay.

For a while, mobile browsers tried to “solve” the problem by removing the delay if the viewport had been made non-scalable using a meta viewport declaration of user-scalable="no". In other words, the browser was rewarding bad behaviour: sites that deliberately broke accessibility by removing the ability to zoom were the ones that felt snappier than their accessible counterparts.

Fortunately Android changed their default behaviour. They decided to remove the tap delay for any site that had a meta viewport declaration of width=device-width (which is pretty much every responsive website). That still left Apple.

I discussed this a couple of years ago with Ted (my go-to guy on the inside of the infinite loop):

He’d prefer a per-element solution rather than a per-document meta element. An attribute? Or maybe a CSS declaration similar to pointer events?

I thought for a minute, and then I spitballed this idea: what if the 300 millisecond delay only applied to non-focusable elements?

After all, the tap delay is only noticeable when you’re trying to tap on a focusable element: links, buttons, form fields. Double tapping tends to happen on text content: divs, paragraphs, sections.

Well, the WebKit team have announced their solution. As well as following Android’s lead and removing the delay for responsive sites, they’ve also provided a way for authors to declare which elements should have the delay removed using the CSS property touch-action:

Putting touch-action: manipulation; on a clickable element makes WebKit consider touches that begin on the element only for the purposes of panning and pinching to zoom. This means WebKit does not consider double-tap gestures on the element, so single taps are dispatched immediately.

So to get the behaviour I was hoping for—no delay on focusable elements—I can add this line to my CSS:

a, button, input, select, textarea, label, summary {
  touch-action: manipulation;
}

That ought to do it. I suppose I could also throw [tabindex]:not([tabindex="-1"]) into that list of selectors.
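Something like this, if I did (just a sketch of that extended list):

a, button, input, select, textarea, label, summary,
[tabindex]:not([tabindex="-1"]) {
  touch-action: manipulation;
}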

It probably goes without saying, but you shouldn’t do:

* { touch-action: manipulation; }

or:

body { touch-action: manipulation; }

That default behaviour of touch-action: auto is still what you want on most elements.

Anyway, I’m off to update my CSS even though this latest fix probably won’t land in mobile Safari until, oh …probably next October.

Pseudo and pseudon’t

I like CSS pseudo-classes. They come in handy for adding little enhancements to interfaces based on interaction.

Take the form-related pseudo-classes, for example: :valid, :invalid, :required, :in-range, and many more.

Let’s say I want to adjust the appearance of an element based on whether it has been filled in correctly. I might have an input element like this:

<input type="email" required>

Then I can write some CSS to put green border on it once it meets the minimum requirements for validity:

input:valid {
  border: 1px solid green;
}

That works, but somewhat annoyingly, the appearance will change while the user is still typing in the field (as soon as the user types an @ symbol, the border goes green). That can be distracting, or downright annoying.

I only want to display the green border when the input is valid and the field is not focused. Luckily for me, those last two words (“not focused”) map nicely to some more pseudo-classes: not and focus:

input:not(:focus):valid {
  border: 1px solid green;
}

If I want to get really fancy, I could display an icon next to form fields that have been filled in. But to do that, I’d need more than a pseudo-class; I’d need a pseudo-element, like :after

input:not(:focus):valid::after {
  content: '✓';
}

…except that won’t work. It turns out that you can’t add generated content to replaced elements like form fields. I’d have to add a regular element into my markup, like this:

<input type="email" required>
<span></span>

So I could style it with:

input:not(:focus):valid + span::after {
  content: '✓';
}

But that feels icky.

Update: See this clever flexbox technique by Hugo Giraudel for a potential solution.

Whatever works for you

I was one of the panelists on the most recent episode of the Shop Talk Show along with Nicole, Colin Megill, and Jed Schmidt. The topic was inline styles. Well, not quite. That’s not a great term to describe the concept. The idea is that you apply styling directly to DOM nodes using JavaScript, instead of using CSS selectors to match up styles to DOM nodes.

It’s an interesting idea that I could certainly imagine being useful in certain situations such as dynamically updating an interface in real time (it feels a bit more “close to the metal” to reflect the state updates directly rather than doing it via class swapping). But there are many, many other situations where the cascade is very useful indeed.
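To illustrate the difference, here’s a rough sketch (the element, class name, and values are all made up):

var element = document.querySelector('.draggable'); // hypothetical element
var pointerX = 100; // imagine this value arriving from a pointer event

// Class swapping: the state lives in a class name; the styling lives in the stylesheet.
element.classList.add('is-dragging');

// Styling via JavaScript: the style is written directly onto the DOM node.
element.style.transform = 'translateX(' + pointerX + 'px)';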

I expressed concern that styling via JavaScript raises the barrier to styling from a declarative language like CSS to a programming language (although, as they pointed out, it’s more like moving from CSS to JSON). I asked whether it might not be possible to add just one more layer of abstraction so that people could continue to write in CSS—which they’re familiar with—and then do JavaScript magic to match those selectors, extract those styles, and apply them directly to the DOM nodes. Since recording the podcast, I came across Glen Maddern’s proposal to do exactly that. It makes sense to me to try to solve the perceived problems with CSS—issues of scope and specificity—without asking everyone to change the way they write.

In short, my response was “hey, like, whatever, it’s cool, each to their own.” There are many, many different kinds of websites and many, many different ways to make them. I like that.

So I was kind of surprised by the bullishness of those who seem to honestly believe that this is the way to build on the web, and that CSS will become a relic. At one point I even asked directly, “Do you really believe that CSS is over? That all styles will be managed through JavaScript from here on?” and received an emphatic “Yes!” in response.

I find that a little disheartening. Chris has written about the confidence of youth:

Discussions are always worth having. Weighing options is always interesting. Demonstrating what has worked (and what hasn’t) for you is always useful. There are ways to communicate that don’t resort to dogmatism.

There are big differences between saying:

  • You can do this,
  • You should do this, and
  • You must do this.

My take on the inline styles discussion was that it fits firmly in the “you can do this” slot. It could be a very handy tool to have in your toolbox for certain situations. But ideally your toolbox should have many other tools. When all you have is a hammer, yadda, yadda, yadda, nail.

I don’t think you do your cause any favours by jumping straight to the “you must do this” stage. I think that people are more amenable to hearing “hey, here’s something that worked for me; maybe it will work for you” rather than “everything you know is wrong and this is the future.” I certainly don’t think that it’s helpful to compare CSS to Neanderthals co-existing with JavaScript Homo Sapiens.

Like I said on the podcast, it’s a big web out there. The idea that there is “one true way” that would work on all possible projects seems unlikely—and undesirable.

“A ha!”, you may be thinking, “But you yourself talk about progressive enhancement as if it’s the one true way to build on the web—hoisted by your own petard.” Actually, I don’t. There are certainly situations where progressive enhancement isn’t workable—although I believe those cases are rarer than you might think. But my over-riding attitude towards any questions of web design and development is:

It depends.

Building the dConstruct 2015 site

I remember when I first saw Paddy’s illustration for this year’s dConstruct site, I thought “Well, that’s a design direction, but there’s no way that Graham will be able to implement all of it.” There was a tight deadline for getting the site out, and let’s face it, there was so much going on in the design that we’d just have to prioritise.

I underestimated Graham’s sheer bloody-mindedness.

At the next front-end pow-wow at Clearleft, Graham showed the dConstruct site in all its glory …in Lynx.

http://2015.dconstruct.org in Lynx.

I love that. Even with the focus on the gorgeous illustration and futuristic atmosphere of the design, Graham took the time to think about the absolute basics: marking up the content in a logical structured way. Everything after that—the imagery, the fonts, the skewed style—all of it was built on a solid foundation.

One site, two browsers.

It would’ve been easy to go crazy with the fonts and images, but Graham made sure to optimise everything to within an inch of its life. The biggest bottleneck comes from a third party provider—the map tiles and associated JavaScript …so that’s loaded in after the initial content is loaded. It turns out that the site build was a matter of prioritisation after all.

http://2015.dconstruct.org/

There’s plenty of CSS trickery going on: transforms, transitions, and opacity. But for the icing on the cake, Graham reached for canvas and programmed space elevator traffic with randomly seeded velocity and size.

Oh, and of course it’s all responsive.

So, putting that all together…

The dConstruct 2015 site is gorgeous, semantic, responsive, and performant. Conventional wisdom dictates that you have to choose, but this little site—built on a really tight schedule—shows otherwise.

100 words 051

I’ve been thinking about what Harry said recently about logic in CSS. I think he makes an astute observation.

You can think of each part of a selector as a condition:

condition { }

That translates to code like:

if (condition) { }

So if you have a CSS selector like this:

condition1 condition2 condition3 { }

…it translates to code like this:

if (condition1) {
    if (condition2) {
        if (condition3) {
        }
    }
}

That doesn’t feel very elegant, even in its simpler form:

if (condition1 && condition2 && condition3) { }

I like Harry’s rule of thumb:

Think of your selectors as mini programs: Every time you nest or qualify, you are adding an if statement; read these ifs out loud to yourself to try and keep your selectors sane.

100 words 033

Charlotte came up with a nifty trick that combines two different techniques she’s been working with.

The first building block is the pattern of using checkboxes, labels, and the :checked pseudo-class to create progressive disclosure toggles without JavaScript. There’s just one caveat with that technique though—the item being toggled must appear after the trigger label in the source order of the markup.

Enter the second building block: flexbox. With flexbox, we’re no longer at the mercy of the source order in our markup. By using flex-direction: column-reverse, the progressive disclosure trigger can be displayed after the item being toggled.
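Putting the two building blocks together looks something like this (a sketch; the class names are made up):

<div class="disclosure">
    <input type="checkbox" id="more" class="disclosure-checkbox">
    <label for="more">Show more</label>
    <div class="disclosure-content">The content being toggled…</div>
</div>

And the accompanying CSS:

.disclosure {
    display: flex;
    flex-direction: column-reverse; /* the label is displayed after the content */
}
.disclosure-checkbox,
.disclosure-content {
    display: none;
}
.disclosure-checkbox:checked ~ .disclosure-content {
    display: block;
}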

BEMphasis

I’m working on a project with a team of developers who are trying out the BEM syntax for their class names. I’ve tried BEM before, but I’m not a huge fan of underscores (for no particularly good reason) so I tend to use a modified version that avoids those characters. Still, when it comes to coding style—tabs vs. spaces, camelCasing, underscores, hyphens, or whatever—my personal opinion takes a back seat to the group consensus. And on this project, the group has opted for proper BEM all the way, and I’m more than happy to go along with that.

When it comes to naming a modified version of a component in BEM, the syntax looks like this:

component--modifier

That raises a question about how you then deploy that class name in your HTML. You could just use the modified name:

<element class="component--modifier">

But then in your CSS you’d have to repeat all the style rules for the .component selector inside your rule block for the .component--modifier selector. Sass could help you out here, especially with its “extends” functionality, but the final CSS is still going to contain duplicated rules.
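A sketch of what that might look like with Sass’s @extend (hypothetical rules, just to show the shape of it):

.component {
    border: 1px solid #ccc;
    padding: 1em;
}
.component--modifier {
    @extend .component;
    border-color: #c00;
}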

The alternative is to keep your CSS lean and modular, and write your HTML like this:

<element class="component component--modifier">

Now you’ve taken the duplication out of CSS and put it into your markup. It looks a little weird. But, on balance, it’s probably the lesser of two evils.

It strikes me that this pattern of always having the base component class name appear anywhere you have a component--modifier class name is something that you could programmatically check for. It should be relatively straightforward to write a lint tool that looks in the value of every class attribute and, if it finds any instances of foo--bar, checks to make sure that foo is also in there.
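Here’s roughly what that check might look like (a sketch, not a finished lint tool; the function name is made up):

// Given the value of a class attribute, e.g. "component--modifier other-thing",
// return a list of modifier classes that are missing their base class.
function findOrphanedModifiers(classValue) {
    var classes = classValue.split(/\s+/);
    var problems = [];
    classes.forEach(function (name) {
        var match = name.match(/^(.+?)--.+$/);
        if (match && classes.indexOf(match[1]) === -1) {
            problems.push(name + ' is missing its base class ' + match[1]);
        }
    });
    return problems;
}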

Sounds like it could be a nice little task for Grunt or Gulp. Maybe somebody has already made it.

Mind you, it seems that most lint tools out there are focused very much on enforcing a coding style for CSS and JavaScript—not so much for HTML. I worry that this reflects the mindset of many front-end developers who view CSS and JavaScript as more important than markup …which is a bit odd considering that CSS and JavaScript are subservient to the HTML document that they’re styling and scripting.

100 words 025

I often get asked what resources I’d recommend for someone totally new to making websites. There are surprisingly few tutorials out there aimed at the complete beginner. There’s Jon Duckett’s excellent—and beautiful—book. There’s the Codebar curriculum (which I keep meaning to edit and update; it’s all on GitHub).

Now there’s a new resource by Damian Wielgosik called How to Code in HTML5 and CSS3. Personally, I would drop the “5” and the “3”, but that’s a minor quibble; this is a great book. It manages to introduce concepts in a logical, understandable way.

And it’s free.

Inlining critical CSS for first-time visits

After listening to Scott rave on about how much of a perceived-performance benefit he got from inlining critical CSS on first load, I thought I’d give it a shot over at The Session. On the chance that this might be useful for others, I figured I’d document what I did.

The idea here is that you can give a massive boost to the perceived performance of the first page load on a site by putting the most important CSS in the head of the page. Then you cache the full stylesheet. For subsequent visits you only ever use the external stylesheet. So if you’re squeamish at the thought of munging your CSS into your HTML (and that’s a perfectly reasonable reaction), don’t worry—this is a temporary workaround just for initial visits.

My particular technology stack here is using Grunt, Apache, and PHP with Twig templates. But I’m sure you can adapt this for other technology stacks: what’s important here isn’t the technology, it’s the thinking behind it. And anyway, the end user never sees any of those technologies: the end user gets HTML, CSS, and JavaScript. As long as that’s what you’re outputting, the specifics of the technology stack really don’t matter.

Generating the critical CSS

Okay. First question: how do you figure out which CSS is critical and which CSS can be deferred?

To help answer that, and automate the task of generating the critical CSS, Filament Group have made a Grunt task called grunt-criticalcss. I added that to my project and updated my Gruntfile accordingly:

grunt.initConfig({
    // All my existing Grunt configuration goes here.
    criticalcss: {
        dist: {
            options: {
                url: 'http://thesession.dev',
                width: 1024,
                height: 800,
                filename: '/path/to/main.css',
                outputfile: '/path/to/critical.css'
            }
        }
    }
});

I’m giving it the name of my locally-hosted version of the site and some parameters to judge which CSS to prioritise. Those parameters are viewport width and height. Now, that’s not a perfect way of judging which CSS matters most, but it’ll do.

Then I add it to the list of Grunt tasks:

// All my existing Grunt tasks go here.
grunt.loadNpmTasks('grunt-criticalcss');

grunt.registerTask('default', ['sass', etc., 'criticalcss']);

The end result is that I’ve got two CSS files: the full stylesheet (called something like main.css) and a stylesheet that only contains the critical styles (called critical.css).

Cache-busting CSS

Okay, this is a bit of a tangent but trust me, it’s going to be relevant…

Most of the time it’s a very good thing that browsers cache external CSS files. But if you’ve made a change to that CSS file, then that feature becomes a bug: you need some way of telling the browser that the CSS file has been updated. The simplest way to do this is to change the name of the file so that the browser sees it as a whole new asset to be cached.

You could use query strings to do this cache-busting but that has some issues. I use a little bit of Apache rewriting to get a similar effect. I point browsers to CSS files like this:

<link rel="stylesheet" href="/css/main.20150310.css">

Now, there isn’t actually a file named main.20150310.css, it’s just called main.css. To tell the server where the actual file is, I use this rewrite rule:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)\.(\d+)\.(js|css)$ $1.$3 [L]

That tells the server to ignore those numbers in JavaScript and CSS file names, but the browser will still interpret it as a new file whenever I update that number. You can do that in a .htaccess file or directly in the Apache configuration.

Right. With that little detour out of the way, let’s get back to the issue of inlining critical CSS.

Differentiating repeat visits

That number that I’m putting into the filenames of my CSS is something I update in my Twig template, like this (although this is really something that a Grunt task could do, I guess):

{% set cssupdate = '20150310' %}

Then I can use it like this:

<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

I can also use JavaScript to store that number in a cookie called csscached so I’ll know if the user has a cached version of this revision of the stylesheet:

<script>
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

The absence or presence of that cookie is going to be what determines whether the user gets inlined critical CSS (a first-time visitor, or a visitor with an out-of-date cached stylesheet) or whether the user gets a good ol’ fashioned external stylesheet (a repeat visitor with an up-to-date version of the stylesheet in their cache).

Here are the steps I’m going through:

First of all, set the Twig cssupdate variable to the last revision of the CSS:

{% set cssupdate = '20150310' %}

Next, check to see if there’s a cookie called csscached that matches the value of the latest revision. If there is, great! This is a repeat visitor with an up-to-date cache. Give ‘em the external stylesheet:

{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

If not, then dump the critical CSS straight into the head of the document:

{% else %}
<style>
{% include '/css/critical.css' %}
</style>

Now I still want to load the full stylesheet but I don’t want it to be a blocking request. I can do this using JavaScript. Once again it’s Filament Group to the rescue with their loadCSS script:

 <script>
    // include loadCSS here...
    loadCSS('/css/main.{{ cssupdate }}.css');

While I’m at it, I store the value of cssupdate in the csscached cookie:

    document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

Finally, consider the possibility that JavaScript isn’t available and link to the full CSS file inside a noscript element:

<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

And we’re done. Phew!

Here’s how it looks all together in my Twig template:

{% set cssupdate = '20150310' %}
{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
{% else %}
<style>
{% include '/css/critical.css' %}
</style>
<script>
// include loadCSS here...
loadCSS('/css/main.{{ cssupdate }}.css');
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>
<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

You can see the production code from The Session in this gist. I’ve tweaked the loadCSS script slightly to match my preferred JavaScript style but otherwise, it’s doing exactly what I’ve outlined here.

The result

According to Google’s PageSpeed Insights, I done good.

Optimising https://thesession.org/

Celebrating CSS

Cascading Style Sheets turned 20 years old this week. Happy birthtime, CeeSusS!

Bruce interviewed Håkon about the creation of CSS, and it makes for fascinating reading. If you want to dig even deeper, here’s Håkon’s 1994 thesis comparing competing approaches to style sheets.

CSS gets a tough rap. I remember talking to Douglas Crockford about CSS. I’ll paraphrase his stance as “Kill it with fire!” To be fair, he was mostly talking about the lack of a decent layout system in CSS—something that’s only really getting remedied now.

Most of the flak directed at CSS comes from smart programmers, decrying its lack of power. As a declarative language, it lacks even the most basic features of the simplest procedural language. How are serious programmers supposed to write their serious programmes with such a primitive feature set?

But I think this mindset misses out a crucial facet of understanding CSS: it’s not about us. By us, I mean professional web developers. And when I say it’s not about us, I mean it’s not only about us.

The web is for everyone. That doesn’t just mean that it’s for everyone to use—the web is for everyone to create. That means that the core building blocks of the web need to be learnable by everyone, not just programmers.

I get nervous when I see web browsers gaining powerful features that can only be accessed via a JavaScript API. Geolocation is one example: it doesn’t have any declarative equivalent to its JavaScript implementation. Counter-examples would be video and audio: you can use the JavaScript API to get exactly the behaviour you want, if you’ve got that level of knowledge …or you can use the video and audio elements if you’re okay with letting web browsers handle the complexity of display and playback.

I think that CSS hits a nice sweet spot, balancing learnability and power. I love the fact that every bit of CSS ever written comes down to the same basic pattern:

selector {
    property: value;
}

That’s it!

How amazing is it that one simple pattern can scale to encompass a whole wide world of visual design variety?

Think about the revolution that CSS has gone through in recent years: OOCSS, SMACSS, BEM …these are fundamentally new ways of approaching front-end development, and yet none of these approaches required any changes to be made to the CSS specification. The power and flexibility was already available within its simple selector-property-value pattern.

Mind you, that modularity was compromised when we got things like named animations; a pattern that breaks out of the encapsulation model of CSS. Variables in CSS also break out of the modularity pattern.

Personally, I don’t think there’s any reason to have variables in the CSS language; it’s enough to have them in pre-processing tools. Variables add enormous value for developers, and no value at all for end users. As long as developers can use variables—and they can, with Sass and LESS—I don’t think we need to further complicate CSS.

Bert Bos wrote an exhaustive list of design principles for web standards. There’s some crossover with Tim Berners-Lee’s principles of design, with ideas such as modularity and robustness. Personally, I think that Bert and Håkon did a pretty damn good job of balancing principles like learnability, extensibility, longevity, interoperability and a host of other factors while still producing something powerful enough to scale for the whole web.

There’s one important phrase I want to highlight in the abstract of the 20-year-old CSS proposal:

The proposed scheme provides a simple mapping between HTML elements and presentation hints.

Hints.

Every line of CSS you write is a suggestion. You are not dictating how the HTML should be rendered; you are suggesting how the HTML should be rendered. I find that to be a very liberating and empowering idea.

My only regret is that—twenty years on from the birth of CSS—web browsers are killing the very idea of user stylesheets. Along with “view source”, this feature really drove home the idea that professional web developers are not the only ones who have a say in what gets rendered in web browsers …and that the web truly is for everyone.

Polyfills and products

I was chatting about polyfills recently with Bruce and Remy—who coined the term:

A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape if you will.

I mentioned that I think that one of the earliest examples of what we would today call a polyfill was the IE7 script by Dean Edwards.

Dean wrote this (amazing) piece of JavaScript back when Internet Explorer 6 was king of the hill and Microsoft had stopped development of their browser entirely. It was a pretty shitty time in browserland back then. While other browsers were steaming ahead, Dean’s script pulled IE6 up by its bootstraps and made it understand CSS2.1 features. Crucially, you didn’t have to write your CSS any differently for the IE7 script to work—the classic hallmark of a polyfill.

Scott has a great post over on the Filament Group blog asking To Picturefill, or not to Picturefill?. Therein, he raises the larger issue of when to use polyfills of any kind. After all, every polyfill you use is a little bit of a tax that the end user must pay with a download.

Polyfills typically come at a cost to users as well, since they require users to download and execute JavaScript in order to work. Sometimes, frequently even, that cost outweighs the benefits that the polyfill would bring. For that reason, the question of whether or not to use any polyfill should be taken seriously.

Scott takes a very thoughtful approach to using any polyfill, and I try to do the same. I feel that it’s important to have an exit strategy for every polyfill you decide to use. After all, the whole point of a polyfill is that it’s a stop-gap measure until a particular feature is more widely supported.

And that’s where I run into one of the issues of working at an agency. At Clearleft, our time working with a client usually lasts a few months. At the end of that time, we’ll have delivered whatever the client needs: sometimes that’s design work; sometimes it’s design and a front-end pattern library.

Every now and then we get to revisit a project—like with Code for America—but that’s the exception rather than the rule. We’ve had to get very, very good at handover precisely because we won’t be the ones maintaining the code that we deliver (though we always try to budget in time to revisit the developers who are working with the code to answer any questions they might have).

That makes it very tricky to include a polyfill in our deliverables. We’d need to figure out a way of also including a timeline for revisiting that polyfill and evaluating when it’s time to drop it. That’s not an impossible task, but it’s much, much easier if you’re a developer working on a product (as opposed to a developer working at an agency). If you’re going to be the same person working on the code in the future—as well as working on it right now—it gets a lot easier to plan for evaluating polyfill usage further down the line. Set a recurring item in your calendar and you should be all set.

It’s a similar situation with vendor prefixes. Vendor prefixes were never intended to be a long-lasting part of any style sheet. Like polyfills, they’re supposed to be used with an exit strategy in mind: when the time is right, remove the prefixed styles, leaving only the unprefixed standardised CSS. Again, that’s a lot easier to do if you’re working on a product and you know that you’ll be the one revisiting the CSS later on. That’s harder to do at an agency where you’re handing over CSS to someone else.

I’m quite reluctant to use any vendor prefixes at all—which is as it should be; vendor prefixes should not be used lightly. Sometimes they’re unavoidable, but that shouldn’t stop us thinking about how to remove them at a later date.

I’m mostly just thinking out loud here. I guess my point is that certain front-end development techniques and technologies feel like they’re better suited to product work rather than agency work. Although I’m sure there are plenty of counter-examples out there too of tools that really fit the agency model and are less useful for working on the same product over a long period.

But even though the agency world and the product world are very different in lots of ways, both of them require us to think about the future. How long will the code you’re writing today last? And do you have a plan for when it needs updating or replacing?

Code refactoring for America

Here at Clearleft, we’ve been doing some extra work with Code for America following on from our initial deliverables. This makes me happy for a number of reasons:

  1. They’re a great client—really easy-going and fun to work with.
  2. We’ve got Anna back in the office and it’s always nice to have her around.
  3. We get to revisit the styleguide we provided, and test our assumptions.

That last one is important. When we provide a pattern library to a client, we hope that they’ve got everything they need. If we’ve done our job right, then they’ll be able to combine patterns in ways we haven’t foreseen to create entirely new page types.

For the most part, that’s been the case with Code for America. They have a solid set of patterns that are serving them well. But what’s been fascinating is to hear about what it’s like for the people using those patterns…

There’s been a welcome trend in recent years towards extremely robust, maintainable CSS. SMACSS, BEM, OOCSS and other methodologies might differ in their details, but their fundamental approach is pretty similar. The idea is that you apply a very specific class to every element you want to style:

<div class="thingy">
    <ul class="thingy-bit">
        <li class="thingy-bit-item"></li>
        <li class="thingy-bit-item"></li>
    </ul>
    <img class="thingy-wotsit" src="" alt="" />
</div>

That allows you to keep your CSS selectors very short, but very specific:

.thingy {}
.thingy-bit {}
.thingy-bit-item {}
.thingy-wotsit {}

There’s little or no nesting, and you only ever use class selectors. That keeps your CSS nice and clear, and you avoid specificity hell. The catch is that your HTML is necessarily more verbose: you need to explicitly add a class to whatever you want to style.

For most projects—particularly product work (think Twitter, Facebook, etc.)—that’s a completely acceptable trade-off. It’s usually the same developers editing the CSS and the HTML so there’s no problem moving complexity out of CSS and into the markup templates. Even if other people will be entering the actual content into the system, they’ll probably be doing that mediated through a Content Management System, rather than editing HTML directly.

So nine times out of ten, making the HTML more verbose is absolutely the right choice in order to make the CSS more manageable and maintainable. That’s the way we initially built the pattern library for Code for America.

Well, it turns out that the people using the markup patterns aren’t necessarily the same people who would be dealing with the CSS. Also, there isn’t necessarily a CMS involved. Instead, people (volunteers, employees, anyone really) create new pages by copying and pasting the patterns we’ve provided and then editing them.

By optimising on the CSS side of things, we’ve offloaded a lot of complexity onto their shoulders. While it’s fair enough to expect them to understand basic HTML, it’s hardly fair to expect them to learn a whole new vocabulary of thingy and thingy-wotsit class names just to get things to look the way they expect.

Here’s a markup pattern that makes more sense for the people actually dealing with the HTML:

<div class="thingy">
    <ul>
        <li></li>
        <li></li>
    </ul>
    <img src="" alt="" />
</div>

Much clearer. But now the CSS looks like this:

.thingy {}
.thingy ul {}
.thingy li {}
.thingy img {}

Actually it’s probably going to look more complicated than that: more nesting, more element selectors, more “defensive” rules trying to anticipate the kind of markup that might be used in a particular pattern.

It feels really strange for Anna and myself to work with these kinds of patterns. All of our experience screams “Don’t do that! Why would you do that?” …but in this case, it’s the right thing to do for the people building the actual website.

So please don’t interpret this as me saying “Hey, everyone, this is how you should write your CSS.” I’m not saying this is better or worse than adding lots of classes to your HTML. If anything, this illustrates that there is no one right way to do this.

It’s worth remembering why we’re aiming for maintainability in what we write. It’s not for any technical reason. It’s for people. If those people find it better to deal with simplified CSS and more complex HTML, then the complexity should be in the HTML. But if the priority for those people is to have simple HTML, then more complex CSS may be an acceptable price to pay.

In other words, it depends.

Continuum

Stop me if you’ve heard this one before. You’re reading an article on Smashing Magazine or A List Apart or some other publication. The article is about a specific feature of CSS, or maybe JavaScript, or perhaps it’s exploring some of the newer additions to HTML. The article is good. It explains how to use this particular feature in your work. Then you read the comments. The first comment is inevitably from someone bemoaning the fact that they can’t use this feature because it isn’t supported in every browser. Specifically, it isn’t supported in some older version of Internet Explorer that they have to support. Therefore the entire article is rendered null and void.

That attitude infuriates and depresses me. It seems to me that it demonstrates a fundamental mismatch between how that person views the job of web development and the way the web actually works.

It is entirely possible—nay, desirable—to use features long before they are supported in every browser. That’s how we move the web forward. If we waited until there was universal support for a feature before we used it, we’d still be using CSS 1.0 and HTML 2.0.

If you use a CSS feature that isn’t supported in a particular browser—like, say, an older version of Internet Explorer—that browser will simply ignore that CSS rule. So you don’t get that rounded corner, or text shadow, or whatever it was. Browsers have the same error-handling mechanism for HTML: if they see something they don’t understand, they just ignore it. The browser will not throw an error. The browser will not stop rendering the page. Browsers are very liberal in what they accept.
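For example, a rule like this (purely illustrative, not from any particular site):

.button {
    background-color: #306; /* every browser gets this */
    border-radius: 0.5em; /* older browsers simply skip this line… */
    text-shadow: 0 1px 0 rgba(0,0,0,0.5); /* …and this one, and carry on regardless */
}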

It’s a bit trickier with JavaScript: browsers will throw an error; browsers will stop processing the script. That’s why it’s important to use feature detection. That’s also why you definitely don’t want to rely on JavaScript for rendering your content—it’s the most fragile layer of the front-end stack. Note, I’m not saying don’t use JavaScript; I’m saying don’t rely on JavaScript. Otherwise you’ve got yourself a SPOF.
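A minimal sketch of that kind of feature detection (the capabilities being tested here are just examples):

if ('querySelector' in document && 'addEventListener' in window) {
    // this browser cuts the mustard: safe to apply the JavaScript enhancements here
}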

Anyway, my point is—and I can’t believe I still have to repeat this after all these years—websites do not need to look exactly the same in every browser.

“But my client!”, cries the Smashing Magazine commenter, “But my boss!”

If your client or boss expects that a website will look and behave the same in every browser on every device, then where did they get those expectations from? And rather than spending your time trying to meet those impossible expectations, I think your time would be better spent explaining why those expectations don’t match the reality of the web.

It’s like Mike Monteiro says about clients: if they just don’t get something about your design, that’s not their fault; it’s yours. Explaining your design work is part of your design work. It’s the same with web development. It’s our job to explain how the web works …and how the unevenly-distributed nature of browser capabilities is not a bug, it’s a feature.

That was true fourteen years ago when John Allsopp wrote A Dao Of Web Design, and it’s still true today. Back then, designers and developers were comparing the web to print and finding it wanting. These days, designers and developers are comparing the web to native and finding it wanting. In both cases, I feel like they’re missing the fundamental point of the web: you can provide universal access to content and tasks without providing exactly the same experience for every single browser or device. That’s not a failing of the web—that’s its killer app.

Paul Kinlan published a post called This Is the Web Platform where he tabulates the current state of browser support for various features. “Pretty damning” he says:

the feature support that is ubiquitous across the web is actually pretty small especially if you are supporting IE8.

That’s true …from a certain point of view. But it depends on your definition of “support”. If your definition of “support” is “must look and work identically to the latest version of Chrome”, then yes, you’re going to have a smaller set of features you can use (you’re also going to live a miserable existence). But if your definition of “support” is “must be able to access the content and accomplish the task”, then as long as you’re using progressive enhancement, you can use all the features you want and support Internet Explorer 8, 7, 6, 5 …you can support every browser capable of connecting to the internet.

Like Brad said:

There is a difference between support and optimization.

I think part of the problem may be with the language we use. We talk about “the browser” when we should be talking about the browsers. I’m guilty of this. I’ll use phrases like “designing in the browser” or talk about “what we can do in the browser”, when really I should be talking about designing in the browsers and what we can do in the browsers.

It’s a subtle Lakoffian thing, but when we talk about “the browser” as if it were a single entity we might be unconsciously reinforcing the expectation that there is one Platonic ideal of browser rendering and that’s what we’re designing for.

There’s another phrase that bothers me, and it’s the phrase that Paul used in the title of his article: “the web platform”. This is something I talked about back in November in my presentation The Power of Simplicity:

But this idea of the web as a platform, I get why from a marketing perspective, we’d want to use that phrase, because it puts the web on equal footing with genuine platforms.

I would say Flash is a platform, and native: iOS and Android and these things. They are platforms, in that it’s all one bundle. And the web isn’t like that.

What I mean is, if you use the Flash platform, then anyone with the Flash plug-in can get your content. It’s on or off. It’s one or zero; it’s binary. Either they have the platform or they don’t. Either they get all your content, or none of your content.

And it’s similar with native apps. If you’ve got the right phone, you can get my app. All of my app. You don’t get bits of my app, you get all my app. Or you get none of it because you don’t have that particular phone that I’m supporting.

And the web is not like that. The web is not binary, one or zero, on or off. It’s not a platform where you get one hundred per cent or zero per cent. It’s this continuum.

The web is not a platform. It’s a continuum.

Pattern sharing

Mike has written about the Code for America alpha website that we collaborated on:

We chose to work with ClearLeft because they develop a pattern portfolio (a pattern/style library) which would allow us to scale our work to our Brigades. This unique approach has aligned perfectly with our work style and decentralized organizational structure.

Thankfully, I think the approach of delivering a pattern portfolio (instead of just pages) isn’t so unique these days. Mind you, it still seems to be more common with in-house teams than agencies. The Mailchimp pattern library is a classic example.

But agencies like Paravel are—like Clearleft—delivering systems, not pages. Dave wrote about providing responsive deliverables:

Responsive deliverables should look a lot like fully-functioning Twitter Bootstrap-style systems custom tailored for your clients’ needs.

I think that’s a good way of looking at it: a Bootstrap for every project.

Here’s the front-end style guide for Code for America.

Usually these front-end deliverables will be password-protected on the Clearleft extranet for the client’s eyes only, but Code for America are all about openness, so they’re more than willing to let us share it with the world. That makes me very happy. I remember encouraging the guys at Starbucks to publish their front-end style guide and I’ve written about this spirit of sharing before:

These style guides and pattern libraries aren’t being published in an attempt to provide ready-made solutions—every project should have its own distinct pattern library. Instead, these pattern libraries are being published in a spirit of openness and sharing …a way of saying “Hey, this is what worked for us in these particular circumstances.”

If you’re poking around the Code for America style guide, you’ll notice that it borrows some ideas from the pattern primer idea I published a while back. But in this iteration, the markup is available via a toggle—a nice variation. There’s also a patchwork page that provides a nice glance-able uninterrupted view of the same patterns.

Every project is a learning experience and each front-end style guide gives us ideas about how to do the next one better. In fact, Mark is busy working on better internal tools for creating these kinds of deliverables—something we’ll definitely be sharing. In the meantime, I’ll be encouraging other clients to be as open as Code for America have been in allowing us to share these deliverables.

For more on the usefulness of front-end style guides, be sure to read Paul’s article on style guides for the web, Anna’s classic 24 Ways article, and of course, Anna’s pocket guide from Five Simple Steps.

Sasstraction

Emil has been playing around with CSS variables (or “custom properties” as they should more correctly be known), which have started landing in some browsers. It’s well worth a read. He does a great job of explaining the potential of this new CSS feature.

For now though, most of us will be using preprocessors like Sass to do our variabling for us. Sass was the subject of Chris’s talk at An Event Apart in San Francisco last week—an excellent event as always.

At one point, Chris briefly mentioned that he’s quite happy for variables (or constants, really) to remain in Sass and not to be part of the CSS spec. Alas, I didn’t get a chance to chat with Chris about that some more, but I wonder if his thinking aligns with mine. Because I too believe that CSS variables should remain firmly in the realm of preprocessors rather than browsers.

Hear me out…

There are a lot of really powerful programmatic concepts that we could add to CSS, all of which would certainly make it a more powerful language. But I think that power would come at a cost.

Right now, CSS is a relatively-straightforward language:

CSS isn’t voodoo, it’s a simple and straightforward language where you declare an element has a style and it happens.

That’s a somewhat-simplistic summation, and there’s definitely some complexity to certain aspects of CSS—like specificity or margin collapsing—but on the whole, it has a straightforward declarative syntax:

selector {
    property: value;
}

That’s it. I think that this simplicity is quite beautiful and surprisingly powerful.

Over at my collection of design principles, I’ve got a section on Bert Bos’s essay What is a good standard? In theory, it’s about designing standards in general, but it matches very closely to CSS in particular. Some of the watchwords are maintainability, modularity, extensibility, simplicity, and learnability. A lot of those principles are clearly connected. I think CSS does a pretty good job of balancing all of those principles, while still providing authors with quite a bit of power.

Going back to that fundamental pattern of CSS, you’ll notice that it is completely modular:

selector {
    property: value;
}

None of those pieces (selector, property, value) reference anything elsewhere in the style sheet. But as soon as you introduce variables, that modularity is snapped apart. Now you’ve got a value that refers to something defined elsewhere in the style sheet (or even in a completely different style sheet).
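With custom properties, for example (hypothetical names and values):

:root {
    --brand-colour: #b13131; /* defined here… */
}
a {
    color: var(--brand-colour); /* …referenced from over here */
}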

But variables aren’t the first addition to CSS that sacrifices modularity. CSS animations already do that. If you want to invoke a keyframe animation, you have to define it. The declaration and the invocation happen in separate blocks:

selector {
    animation-name: myanimation;
}
@keyframes myanimation {
    from {
        property: value;
    }
    to {
        property: value;
    }
}

I’m not sure that there’s any better way to provide powerful animations in CSS, but this feature does sacrifice modularity …and I believe that has a knock-on effect for learnability and readability.

So CSS variables (or custom properties) aren’t the first crack in the wall of the design principles behind CSS. To mix my metaphors, the slippery slope began with @keyframes (and maybe @font-face too).

But there’s no denying that having variables/constants in CSS provides a lot of power. There are plenty of programming ideas (like loops and functions) that would provide lots of power to CSS. I still don’t think it’s a good idea to mix up the declarative and the programmatic. That way lies XSLT—a strange hybrid beast that’s sort of a markup language and sort of a programming language.

I feel very strongly that HTML and CSS should remain learnable languages. I don’t just mean for professionals. I believe it’s really important that anybody should be able to write and style a web page.

Now does that mean that CSS must therefore remain hobbled? No, I don’t think so. Thanks to preprocessors like Sass, we can have our cake and eat it too. As professionals, we can use tools like Sass to wield the power of variables, functions (mixins) and other powerful concepts from the programming world.
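A quick Sass sketch, just to show the shape of it (the names and values are invented):

$brand-colour: #b13131;

@mixin rounded($radius: 0.5em) {
    border-radius: $radius;
}

.button {
    background-color: $brand-colour;
    @include rounded(0.25em);
}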

Preprocessors cut the Gordian knot that’s formed from the tension in CSS between providing powerful features and remaining relatively easy to learn. That’s why I’m quite happy for variables, mixins, nesting and the like to remain firmly in the realm of Sass.

Incidentally, at An Event Apart, Chris was making the case that Sass’s power comes from the fact that it’s an abstraction. I don’t think that’s necessarily true—I think the fact that it provides a layer of abstraction might be a red herring.

Chris made the case for abstractions being inherently A Good Thing. Certainly if you go far enough down the stack (to Assembly Language), that’s true. But not all abstractions are good abstractions, and I’m not just talking about Spolsky’s law of leaky abstractions.

Let’s take two different abstractions that share a common origin story:

  • Sass is an abstraction layer for CSS.
  • Haml is an abstraction layer for HTML.

If abstractions were inherently A Good Thing, then they would both provide value to some extent. But whereas Sass is a well-designed tool that allows CSS-savvy authors to write their CSS more easily, Haml is a steaming pile of poo.

Here’s the crucial difference: Sass doesn’t force you to write all your CSS in a completely new way. In fact, every .css file is automatically a valid .scss file. You are then free to use—or ignore—the features of Sass at your own pace.

Haml, on the other hand, forces you to use a completely new whitespace-significant syntax that maps on to HTML. There are no half-measures. It is an abstraction that is not only opinionated, it refuses to be reasoned with.

So I don’t think that Sass is good because it’s an abstraction; I think that Sass is good because it’s a well-designed abstraction. Crucially, it’s also easy to learn …just like CSS.