Tags: css

Styling the Patterns Day site

Once I had a design direction for the Patterns Day site, I started combining my marked-up content with some CSS. Ironically for an event that’s all about maintainability and reusability, I wrote the styles for this one-page site with no mind for future use. I treated the page as a one-shot document. I even used ID selectors—gasp! (the IDs were in the HTML anyway as fragment identifiers).

The truth is I didn’t have much of a plan. I just started hacking away in a style element in the head of the document, playing around with colour, typography, and layout.

I started with the small-screen styles. That wasn’t a conscious decision so much as just the way I do things automatically now. When it came time to add some layout for wider viewports, I used a sprinkling of old-fashioned display: inline-block so that things looked so-so. I knew I wanted to play around with Grid layout so the inline-block styles were there as fallback for non-supporting browsers. Once things looked good enough, the fun really started.
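As an aside, that kind of layered fallback looks roughly like this—the selectors and values are purely illustrative, not the actual Patterns Day styles:

/* Fallback layout: old-fashioned inline-block, understood everywhere */
.speakers > li {
  display: inline-block;
  width: 45%;
  vertical-align: top;
}

/* Browsers that understand grid get the real layout */
@supports (display: grid) {
  .speakers {
    display: grid;
    grid-template-columns: repeat(2, 1fr);
    grid-gap: 1em;
  }
  .speakers > li {
    width: auto; /* undo the fallback width—the grid tracks handle the sizing now */
  }
}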

I was building the site while I was in Seattle for An Event Apart. CSS Grid layout was definitely a hot topic there. Best of all, I was surrounded by experts: Jen, Rachel, and Eric. It was the perfect environment for me to dip my toes into the waters of grid.

Jen was very patient in talking me through the concepts, syntax, and tools for using CSS grids. Top tip: open Firefox’s inspector, select the element with the display:grid declaration, and click the “waffle” icon—instant grid overlay!

For the header of the Patterns Day site, I started by using named areas. That’s the ASCII-art approach. I got my head around it and it worked okay, but it didn’t give me quite the precision I wanted. I switched over to using explicit grid-row and grid-column declarations.
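Here’s a rough sketch of the difference—illustrative names and track sizes rather than the real Patterns Day styles:

.header {
  display: grid;
  grid-template-columns: 1fr 1fr;
  /* The ASCII-art approach: name the areas… */
  grid-template-areas:
    "logo logo"
    "date venue";
}
.logo  { grid-area: logo; }
.date  { grid-area: date; }
.venue { grid-area: venue; }

/* …or skip the named areas and place each item explicitly by line number */
.logo  { grid-row: 1; grid-column: 1 / 3; }
.date  { grid-row: 2; grid-column: 1; }
.venue { grid-row: 2; grid-column: 2; }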

It’s definitely a new way of thinking about layout: first you define the grid, then you place the items on it (rather than previous CSS layout systems where each element interacted with the elements before and after). It was fun to move things around and not have to worry about the source order of the elements …as long as they were direct children of the element with display:grid applied.

Without any support for sub-grids, I ended up having to nest two separate grids within one another. The logo is a grid parent, which is inside the header, also a grid parent. I managed to get things to line up okay, but I think this might be a good use case for sub-grids.

The logo grid threw up some interesting challenges. I wanted each letter of the words “Patterns Day” to be styleable, but CSS doesn’t give us any way to target individual letters other than :first-letter. I wrapped each letter in a b element, made sure that they were all wrapped in an element with an aria-hidden attribute (so that the letters wouldn’t be spelled out), and then wrapped that in an element with an aria-label of “Patterns Day.” Now I could target those b elements.
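The resulting markup looks something like this (abbreviated, and the choice of h1 and span for the wrappers is just for illustration):

<h1 aria-label="Patterns Day">
  <span aria-hidden="true">
    <b>P</b><b>a</b><b>t</b><b>t</b><b>e</b><b>r</b><b>n</b><b>s</b>
    <b>D</b><b>a</b><b>y</b>
  </span>
</h1>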

For a while, I also had a br element (between “Patterns” and “Day”). That created some interesting side effects. If a br element becomes a grid item, it starts to behave very oddly: you can apply certain styles but not others. Jen and Eric then started to test other interesting elements, like hr. There was much funkiness and gnashing of specs.

It was a total nerdfest, and I loved every minute of it. This is definitely the most excitement I’ve felt around CSS for a while. It feels like a renaissance of zen gardens and layout reservoirs (kids, ask your parents).

After a couple of days playing around with grid, I had the Patterns Day site looking decent enough to launch. I dabbled with some other fun CSS stuff in there too, like gratuitous clip paths and filters when hovering over the speaker images, and applying shape-outside with an image mask.
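If you’re wondering what that kind of thing looks like, here’s a rough sketch—the selectors and values are invented for illustration, not lifted from the production styles:

/* Gratuitous hover effects on the speaker images */
.speaker img {
  clip-path: polygon(0 0, 100% 0, 100% 85%, 0 100%);
  transition: filter 0.3s ease-out;
}
.speaker img:hover {
  filter: grayscale(100%) contrast(120%);
}

/* Wrapping text around the transparent parts of an image */
.bio img {
  float: left;
  shape-outside: url(speaker.png);
  shape-image-threshold: 0.5;
  shape-margin: 1em;
}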

Go ahead and view source on the Patterns Day page if you want—I ended up keeping all the CSS in the head of the document. That turned out to be pretty good for performance …for first-time visits anyway. But after launching the site, I couldn’t resist applying some more performance tweaks.

Getting griddy with it

I had the great pleasure of attending An Event Apart Seattle last week. It was, as always, excellent.

It’s always interesting to see themes emerge during an event, especially when those thematic overlaps haven’t been planned in advance. Jen noticed this one:

I remember that being a theme at An Event Apart San Francisco too, when it seemed like every speaker had words to say about ill-judged use of Bootstrap. That theme was certainly in my presentation when I talked about “the fallacy of assumed competency”:

  1. large company X uses technology Y,
  2. company X must know what they are doing because they are large,
  3. therefore technology Y must be good.

Perhaps “the fallacy of assumed suitability” would be a better term. Heydon calls it “the ‘made at Facebook’ fallacy.” But I also made sure to contrast it with the opposite extreme: “Not Invented Here syndrome”.

As well as over-arching themes, it was also interesting to see which technologies were hot topics at An Event Apart. There was one clear winner here—CSS Grid Layout.

Microsoft—a sponsor of the event—used An Event Apart as the place to announce that Grid is officially moving into development for Edge. Jen talked about Grid (of course). Rachel talked about Grid (of course). And while Eric and Una didn’t talk about it on stage, they’ve both been writing about the fun they’ve been having with Grid. Una wrote about 3 CSS Grid Features That Make My Heart Flutter. Eric is documenting the overhaul of his site with Grid. So when we were all gathered together, that’s what we were nerding out about.

The CSS Squad.

There are some great resources out there for levelling up in Grid-fu:

With Jen’s help, I’ve been playing with CSS Grid on a little site that I’m planning to launch tomorrow (he said, foreshadowingly). It took me a while to get my head around it, but once it clicked I started to have a lot of fun. “Fun” seems to be the overall feeling around this technology. There’s something infectious about the excitement and enthusiasm that’s returning to the world of layout on the web. And now that the browser support is great pretty much across the board, we can start putting that fun into production.

Teaching in Porto, day three

Day two ended with a bit of a cliffhanger as I had the students mark up a document, but not yet style it. In the morning of day three, the styling began.

Rather than just treat “styling” as one big monolithic task, I broke it down into typography, colour, negative space, and so on. We time-boxed each one of those parts of the visual design. So everyone got, say, fifteen minutes to write styles relating to font families and sizes, then another fifteen minutes to write styles for colours and background colours. Bit by bit, the styles were layered on.

When it came to layout, we closed the laptops and returned to paper. Everyone did a quick round of 6-up sketching so that there was plenty of fast iteration on layout ideas. That was followed by some critique and dot-voting of the sketches.

Rather than diving into the CSS for layout—which can get quite complex—I instead walked through the approach for layout; namely putting all your layout styles inside media queries. To explain media queries, I first explained media types and then introduced the query part.
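The gist of that approach is that the default styles involve no layout at all, and the layout rules only kick in above a certain width—something like this sketch (the breakpoint and class names are made up for the example):

/* Default: no layout rules; the content flows in a single column everywhere */

/* Layout only applies when there's enough width to warrant it */
@media all and (min-width: 30em) {
  .wrapper {
    display: flex; /* or floats, or grid—whichever layout tool you're using */
  }
  .main {
    flex: 2;
  }
  .sidebar {
    flex: 1;
  }
}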

I felt pretty confident that I could skip over the nitty-gritty of media queries and cross-device layout because the next masterclass that will be taught at the New Digital School will be a week of responsive design, taught by Vitaly. I just gave them a taster—Vitaly can dive deeper.

By lunch time, I felt that we had covered CSS pretty well. After lunch it was time for the really challenging part: JavaScript.

The reason why I think JavaScript is challenging is that it’s inherently more complex than HTML or CSS. Those are declarative languages with fairly basic concepts at heart (elements, attributes, selectors, etc.), whereas an imperative language like JavaScript means entering the territory of logic, loops, variables, arrays, objects, and so on. I really didn’t want to get stuck in the weeds with that stuff.

I focused on the combination of JavaScript and the Document Object Model as a way of manipulating the HTML and CSS that’s already inside a browser. A lot of that boils down to this pattern:

When (some event happens), then (take this action).

We brainstormed some examples of this, e.g. “When the user submits a form, then show a modal dialogue with an acknowledgement.” I then encouraged them to write a script …but I don’t mean a script in the JavaScript sense; I mean a script in the screenwriting or theatre sense. Line by line, write out each step that you want to accomplish. Once you’ve done that, translate each line of your English (or Portuguese) script into JavaScript.
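So that form example, translated line by line, might end up as something like this sketch (the IDs and the hidden message element are invented for illustration):

// When the user submits the form…
var form = document.querySelector('#booking-form');
form.addEventListener('submit', function (event) {
  // …stop the browser from leaving the page…
  event.preventDefault();
  // …then show the (previously hidden) acknowledgement message.
  var message = document.querySelector('#acknowledgement');
  message.removeAttribute('hidden');
});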

I did a quick demo as a proof of concept (which, much to my surprise, actually worked first time), but I was at pains to point out that they didn’t need to remember the syntax or vocabulary of the script; it was much more important to have a clear understanding of the thinking behind it.

With the remaining time left in the day, we ran through the many browser APIs available to JavaScript, from the relatively simple—like querySelector and Ajax—right up to the latest device APIs. I think I got the message across that, using JavaScript, there’s practically no limit to what you can do on the web these days …but the trick is to use that power responsibly.

At this point, we’ve had three days and we’ve covered three layers of web technologies: HTML, CSS, and JavaScript. Tomorrow we’ll take a step back from the nitty-gritty of the code. It’s going to be all about how to plan and think about building for the web before a single line of code gets written.

Teaching in Porto, day two

The second day in this week-long masterclass was focused on CSS. But before we could get stuck into that, there were some diversions and tangents brought on by left-over questions from day one.

This was not a problem. Far from it! The questions were really good. Like, how does a web server know that someone has permission to carry out actions via a POST request? What a perfect opportunity to talk about state! Cue a little history lesson on the web’s beginning as a deliberately stateless medium, followed by the introduction of cookies …for good and ill.

We also had a digression about performance, file sizes, and loading times—something I’m always more than happy to discuss. But by mid-morning, we were back on track and ready to tackle CSS.

As with the first day, I wanted to take a “long zoom” look at design and the web. So instead of diving straight into stylesheets, we first looked at the history of visual design: cave paintings, hieroglyphs, illuminated manuscripts, the printing press, the Swiss school …all of them examples of media where the designer knows where the “edges” of the canvas lie. Not so with the web.

So to tackle visual design on the web, I suggested separating layout from all the other aspects of visual design: colour, typography, contrast, negative space, and so on.

At this point we were ready to start thinking in CSS. I started by pointing out that all CSS boils down to one pattern:

selector {
  property: value;
}

The trick, then, is to convert what you want into that pattern. So “I want the body of the page to be off-white with dark grey text” in English is translated into the CSS:

body {
  background-color: rgb(225,225,255);
  color: rgb(51,51,51);
}

…and so on for type, contrast, hierarchy, and more.

We started applying styles to the content we had collectively marked up with post-it notes on day one. Then the students split into groups of two to create an HTML document each. Tomorrow they’ll be styling that document.

There were two important links that came up over the course of day two:

  1. A Dao Of Web Design by John Allsopp, and
  2. The CSS Zen Garden.

If all goes according to plan, we’ll be tackling the third layer of the web technology stack tomorrow: JavaScript.

Vertical limit

When I was first styling Resilient Web Design, I made heavy use of vh units. The vertical spacing between elements—headings, paragraphs, images—was all proportional to the overall viewport height. It looked great!

Then I tested it on real devices.

Here’s the problem: when a page loads up in a mobile browser—like, say, Chrome on an Android device—the URL bar is at the top of the screen. The height of that piece of the browser interface isn’t taken into account for the viewport height. That makes sense: the viewport height is the amount of screen real estate available for the content. The content doesn’t extend into the URL bar, therefore the height of the URL bar shouldn’t be part of the viewport height.

But then if you start scrolling down, the URL bar scrolls away off the top of the screen. So now it’s behaving as though it is part of the content rather than part of the browser interface. At this point, the value of the viewport height changes: now it’s the previous value plus the height of the URL bar that was previously there but which has now disappeared.

I totally understand why the URL bar is squirrelled away once the user starts scrolling—it frees up some valuable vertical space. But because that necessarily means recalculating the viewport height, it effectively makes the vh unit in CSS very, very limited in scope.

In my initial implementation of Resilient Web Design, the one where I was styling almost everything with vh, the site was unusable. Every time you started scrolling, things would jump around. I had to go back to the drawing board and remove almost all instances of vh from the styles.

I’ve left it in for one use case and I think it’s the most common use of vh: making an element take up exactly the height of the viewport. The front page of the web book uses min-height: 100vh for the title.

Scrolling.

But as soon as you scroll down from there, that element changes height. The content below it suddenly moves.

Let’s say the overall height of the browser window is 600 pixels, of which 50 pixels are taken up by the URL bar. When the page loads, 100vh is 550 pixels. But as soon as you scroll down and the URL bar floats away, the value of 100vh becomes 600 pixels.

(This also causes problems if you’re using vertical media queries. If you choose the wrong vertical breakpoint, then the media query won’t kick in when the page loads but will kick in once the user starts scrolling …or vice-versa.)

There’s a mixed message here. On the one hand, the browser is declaring that the URL bar is part of its interface; that the space is off-limits for content. But then, once scrolling starts, that is invalidated. Now the URL bar is behaving as though it is part of the content scrolling off the top of the viewport.

The result of this messiness is that the vh unit is practically useless for real-world situations with real-world devices. It works great for desktop browsers if you’re grabbing the browser window and resizing, but that’s not exactly a common scenario for anyone other than web developers.

I’m sure there’s a way of solving it with JavaScript but that feels like using an atomic bomb to crack a walnut—the whole point of having this in CSS is that we don’t need to use JavaScript for something related to styling.

It’s such a shame. A piece of CSS that’s great in theory, and is really well supported, just falls apart where it matters most.

Update: There’s a two-year old bug report on this for Chrome, and it looks like it might actually get fixed in February.

Print styles

I really wanted to make sure that the print styles for Resilient Web Design were pretty good—or at least as good as they could be given the everlasting lack of support for many print properties in browsers.

Here’s the first thing I added:

@media print {
  @page {
    margin: 1in 0.5in 0.5in;
    orphans: 4;
    widows: 3;
  }
}

That sets the margins of printed pages in inches (I could’ve used centimetres but the numbers were nice and round in inches). The orphans: 4 declaration says that if there’s less than 4 lines on a page, shunt the text onto the next page. And widows: 3 declares that there shouldn’t be less than 3 lines left alone on a page (instead more lines will be carried over from the previous page).

I always get widows and orphans confused so I remind myself that orphans are left alone at the start; widows are left alone at the end.

I try to make sure that some elements don’t get their content split up over page breaks:

@media print {
  p, li, pre, figure, blockquote {
    page-break-inside: avoid;
  }
}

I don’t want headings appearing at the end of a page with no content after them:

@media print {
  h1,h2,h3,h4,h5 {
    page-break-after: avoid;
  }
}

But sections should always start with a fresh page:

@media print {
  section {
    page-break-before: always;
  }
}

There are a few other little tweaks to hide some content from printing, but that’s pretty much it. Using print preview in browsers showed some pretty decent formatting. In fact, I used the “Save as PDF” option to create the PDF versions of the book. The portrait version comes from Chrome’s preview. The landscape version comes from Firefox, which offers more options under “Layout”.

For some more print style suggestions, have a look at the article I totally forgot about print style sheets. There’s also tips and tricks for print style sheets on Smashing Magazine. That includes a clever little trick for generating QR codes that only appear when a document is printed. I’ve used that technique for some page types over on The Session.

Starting out

I had a really enjoyable time at Codebar Brighton last week, not least because Morty came along.

I particularly enjoy teaching people who have zero previous experience of making a web page. There’s something about explaining HTML and CSS from first principles that appeals to me. I especially love it when people ask lots of questions. “What does this element do?”, “Why do some elements have closing tags and others don’t?”, “Why is it textarea and not input type="textarea"?” The answer usually involves me going down a rabbit-hole of web archeology, so I’m in my happy place.

But there’s only so much time at Codebar each week, so it’s nice to be able to point people to other resources that they can peruse at their leisure. It turns out that it’s actually kind of tricky to find resources at that level. There are lots of great articles and tutorials out there for professional web developers—Smashing Magazine, A List Apart, CSS Tricks, etc.—but not so much for complete beginners.

Here are some of the resources I’ve found:

  • MarkSheet by Jeremy Thomas is a free HTML and CSS tutorial. It starts with an explanation of the internet, then the World Wide Web, and then web browsers, before diving into HTML syntax. Jeremy is the same guy who recently made CSS Reference.
  • Learn to Code HTML & CSS by Shay Howe is another free online book. You can buy a paper copy too. It’s filled with good, clear explanations.
  • Zero to Hero Coding by Vera Deák is an ongoing series. She’s starting out on her career as a front-end developer, so her perspective is particularly valuable.

If I find any more handy resources, I’ll link to them and tag them with “learning”.

Between the braces

In a post called Side Effects in CSS that he wrote a while back, Philip Walton talks about different kinds of challenges in writing CSS:

There are two types of problems in CSS: cosmetic problems and architectural problems.

The cosmetic problems are solved by making something look the way you want it to. The architectural problems are trickier because they have more long-term effects—maintainability, modularity, encapsulation; all that tricky stuff. Philip goes on to say:

If I had to choose between hiring an amazing designer who could replicate even the most complicated visual challenges easily in code and someone who understood the nuances of predictable and maintainable CSS, I’d choose the latter in a heartbeat.

This resonates with something I noticed a while back while I was doing some code reviews. Most of the time when I’m analysing CSS and trying to figure out how “good” it is—and I know that’s very subjective—I’m concerned with what’s on the outside of the curly braces.

selector {
    property: value;
}

The stuff inside the curly braces—the properties and values—that’s where the cosmetic problems get solved. It’s also the stuff that you can look up; I certainly don’t try to store all possible CSS properties and values in my head. It’s also easy to evaluate: Does it make the thing look like you want it to look? Yes? Good. It works.

The stuff outside the curly braces—the selectors—that’s harder to judge. It needs to be evaluated with lots of “what ifs”: What if this selects something you didn’t intend to? What if the markup changes? What if someone else writes some CSS that negates this?

I find it fascinating that most of the innovation in CSS from the browser makers and standards bodies arrives in the form of new properties and values—flexbox, grid, shapes, viewport units, and so on. Meanwhile there’s a whole other world of problems to be solved outside the curly braces. There’s not much that the browser makers or standards bodies can do to help us there. I think that’s why most of the really interesting ideas and thoughts around CSS in recent years have focused on that challenge.

Talking about hypertext

#CSSday starts off with a great history lesson of our industry by @adactio

I’ve just published a transcript of the talk I gave at the HTML Special that preceded CSS Day a couple of weeks back. I’ve also recorded an audio version for your huffduffing pleasure.

It’s not like the usual talks I give. The subject matter was assigned to me, Mission Impossible style. PPK wanted each speaker to give an entire talk on just one HTML element. He offered me the best element of them all: the A element.

There were a few different directions I could’ve taken it. I could’ve tried to make it practical, but I quickly dismissed that idea. Instead I went in the completely opposite direction, making it as pretentious as possible. I figured a talk about hypertext could afford to be winding and circuitous, building on some of the ideas I wrote about in my piece for The Manual a few years back. It’s quite self-indulgent of me, but I used it as an opportunity to geek out about some of my favourite things; from Borges, Babbage, and Bletchley to Leibniz, Lovelace, and Licklider.

I wouldn’t usually write out an entire talk word-for-word in advance, but somehow it felt right for this one. In fact, my talk preparation this time ‘round was very similar to the process Charlotte recently wrote about:

  1. Get everything out of my head and onto a mind map.
  2. Write chunks of content in short bursts—this was when I was buddying up with Paul.
  3. Put together a slide deck of visuals to support the narrative.
  4. Practice delivering the talk so I don’t look like I’m just reading off a screen.

It takes me a long time to prepare talks. As the deadline for this one approached, I was getting quite panicked. It was touch and go there for a while, but I managed to get it done in time.

I’m pleased with how it turned out. On the day, I had fun delivering it. People seemed to like it too, which was gratifying.

Although with this kind of talk, it was inevitable that I wouldn’t be able to please everyone.

I guess this talk was a one-off affair. That said, if you’re putting on an event and you think this subject matter would be appropriate, let me know. I’d be more than happy to deliver it again.

Sticky headers

I made a little tweak to The Session today. The navigation bar across the top is “sticky” now—it doesn’t scroll with the rest of the content.

I made sure that the stickiness only kicks in if the screen is both wide and tall enough to warrant it. Vertical media queries are your friend!

But it’s not enough to just put some position: fixed CSS inside a media query. There are some knock-on effects that I needed to mitigate.
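Here’s the general shape of it—the breakpoint values and selector are illustrative rather than the exact styles on The Session:

/* Only make the header sticky when the screen is both wide and tall enough */
@media all and (min-width: 40em) and (min-height: 30em) {
  .site-header {
    position: fixed;
    top: 0;
    width: 100%;
  }
  /* Nudge the rest of the page down so content isn't hidden behind the header */
  body {
    padding-top: 3em;
  }
}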

I use the space bar to paginate through long pages. It drives me nuts when sites with sticky headers don’t accommodate this. I made use of Tim Murtaugh’s sticky pagination fixer. It makes sure that page-jumping with the keyboard (using the space bar or page down) still works. I remember when I linked to this script two years ago, thinking “I bet this will come in handy one day.” Past me was right!

The other “gotcha!” with having a sticky header is making sure that in-page anchors still work. Nicolas Gallagher covers the options for this in a post called Jump links and viewport positioning. Here’s the CSS I ended up using:

:target:before {
    content: '';
    display: block;
    height: 3em;
    margin: -3em 0 0;
}

I also needed to check any of my existing JavaScript to see if I was using scrollTo anywhere, and adjust the calculations to account for the newly-sticky header.
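The adjustment itself is straightforward enough: subtract the height of the sticky header from whatever vertical position you were scrolling to. A rough sketch, with made-up selectors:

var header = document.querySelector('.site-header');
var target = document.querySelector('#comments'); // wherever you were scrolling to
// Leave room for the sticky header instead of scrolling the target right to the top
window.scrollTo(0, target.offsetTop - header.offsetHeight);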

Anyway, just a few things to consider if you’re going to make a navigational element “sticky”:

  1. Use min-height in your media query,
  2. Take care of keyboard-initiated page scrolling,
  3. Adjust the positioning of in-page links.

Amsterdam Brighton Amsterdam

I’m about to have a crazy few days that will see me bouncing between Brighton and Amsterdam.

It starts tomorrow. I’m flying to Amsterdam in the morning and speaking at this Icons event in the afternoon about digital preservation and long-term thinking.

Then, the next morning, I’ll be opening up the inaugural HTML Special, which is a new addition to the CSS Day conference. Each talk on Thursday will cover one HTML element. I am honoured to be speaking about the A element. Here’s the talk description:

The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Enquire within upon everything.

I’ve been working all out to get this talk done and I finally wrapped it up today. Right now, I feel pretty happy with it, but I bet I’ll change that opinion in the next 48 hours. I’m pretty sure that this will be one of those talks that people will either love or hate, kind of like my 2008 dConstruct talk, The System Of The World.

After CSS Day, I’ll be heading back to Brighton on Saturday, June 18th to play a Salter Cane gig in The Greys pub. If you’re around, you should definitely come along—not only is it free, but there will be some excellent support courtesy of Jon London, and Lucas and King.

Then, the next morning, I’ll be speaking at DrupalCamp Brighton, opening up day two of the event. I won’t be able to stick around long afterwards though, because I need to skidaddle to the airport to go back to Amsterdam!

Google are having their Progressive Web App Dev Summit there on Monday and Tuesday. I’ll be moderating a panel on the second day, so I’ll need to pay close attention to all the talks. I’ll be grilling representatives from Google, Samsung, Opera, Microsoft, and Mozilla. Considering my recent rants about some very bad decisions on the part of Google’s Chrome team, it’s very brave of them to ask me to be there, much less moderate a panel in public.

You can still register for the event, by the way. Like the Salter Cane gig, it’s free. But if you can’t make it along, I’d still like to know what you think I should be asking the panelists about.

Got a burning question for browser/device makers? Write it down, post it somewhere on the web with a link back to this post, and then send me a web mention (there’s a form for you to paste in the URL at the bottom of this post).

Delay

Mobile browser vendors have faced a dilemma for quite a while. They’ve got this double-tap gesture that allows users to zoom in on part of a page (particularly handy on non-responsive sites). But that means that every time a user makes a single tap, the browser has to wait for just a moment to see if it’s followed by another tap. “Just a moment” in this case works out to be somewhere between 300 and 350 milliseconds. So every time a user is trying to click a link or press a button on a web page, there’s a slight but noticeable delay.

For a while, mobile browsers tried to “solve” the problem by removing the delay if the viewport size had been set to non-scalable using a meta viewport declaration of user-scalable="no". In other words, the browser was rewarding bad behaviour: sites that deliberately broke accessibility by removing the ability to zoom were the ones that felt snappier than their accessible counterparts.

Fortunately Android changed their default behaviour. They decided to remove the tap delay for any site that had a meta viewport declaration of width=device-width (which is pretty much every responsive website). That still left Apple.

I discussed this a couple of years ago with Ted (my go-to guy on the inside of the infinite loop):

He’d prefer a per-element solution rather than a per-document meta element. An attribute? Or maybe a CSS declaration similar to pointer events?

I thought for a minute, and then I spitballed this idea: what if the 300 millisecond delay only applied to non-focusable elements?

After all, the tap delay is only noticeable when you’re trying to tap on a focusable element: links, buttons, form fields. Double tapping tends to happen on text content: divs, paragraphs, sections.

Well, the Webkit team have announced their solution. As well as following Android’s lead and removing the delay for responsive sites, they’ve also provided a way for authors to declare which elements should have the delay removed using the CSS property touch-action:

Putting touch-action: manipulation; on a clickable element makes WebKit consider touches that begin on the element only for the purposes of panning and pinching to zoom. This means WebKit does not consider double-tap gestures on the element, so single taps are dispatched immediately.

So to get the behaviour I was hoping for—no delay on focusable elements—I can add this line to my CSS:

a, button, input, select, textarea, label, summary {
  touch-action: manipulation;
}

That ought to do it. I suppose I could also throw [tabindex]:not([tabindex="-1"]) into that list of selectors.

It probably goes without saying, but you shouldn’t do:

* { touch-action: manipulation; }

or:

body { touch-action: manipulation; }

That default behaviour of touch-action: auto is still what you want on most elements.

Anyway, I’m off to update my CSS even though this latest fix probably won’t land in mobile Safari until, oh …probably next October.

Pseudo and pseudon’t

I like CSS pseudo-classes. They come in handy for adding little enhancements to interfaces based on interaction.

Take the form-related pseudo-classes, for example: :valid, :invalid, :required, :in-range, and many more.

Let’s say I want to adjust the appearance of an element based on whether it has been filled in correctly. I might have an input element like this:

<input type="email" required>

Then I can write some CSS to put a green border on it once it meets the minimum requirements for validity:

input:valid {
  border: 1px solid green;
}

That works, but somewhat annoyingly, the appearance will change while the user is still typing in the field (as soon as the user types an @ symbol, the border goes green). That can be distracting, or downright annoying.

I only want to display the green border when the input is valid and the field is not focused. Luckily for me, those last two words (“not focused”) map nicely to two more pseudo-classes, :not and :focus:

input:not(:focus):valid {
  border: 1px solid green;
}

If I want to get really fancy, I could display an icon next to form fields that have been filled in. But to do that, I’d need more than a pseudo-class; I’d need a pseudo-element, like ::after…

input:not(:focus):valid::after {
  content: '✓';
}

…except that won’t work. It turns out that you can’t add generated content to replaced elements like form fields. I’d have to add a regular element into my markup, like this:

<input type="email" required>
<span></span>

So I could style it with:

input:not(:focus):valid + span::after {
  content: '✓';
}

But that feels icky.

Update: See this clever flexbox technique by Hugo Giraudel for a potential solution.

Whatever works for you

I was one of the panelists on the most recent episode of the Shop Talk Show along with Nicole, Colin Megill, and Jed Schmidt. The topic was inline styles. Well, not quite. That’s not a great term to describe the concept. The idea is that you apply styling directly to DOM nodes using JavaScript, instead of using CSS selectors to match up styles to DOM nodes.

It’s an interesting idea that I could certainly imagine being useful in certain situations such as dynamically updating an interface in real time (it feels a bit more “close to the metal” to reflect the state updates directly rather than doing it via class swapping). But there are many, many other situations where the cascade is very useful indeed.

I expressed concern that styling via JavaScript raises the barrier to styling from a declarative language like CSS to a programming language (although, as they pointed out, it’s more like moving from CSS to JSON). I asked whether it might not be possible to add just one more layer of abstraction so that people could continue to write in CSS—which they’re familiar with—and then do JavaScript magic to match those selectors, extract those styles, and apply them directly to the DOM nodes. Since recording the podcast, I came across Glen Maddern’s proposal to do exactly that. It makes sense to me to try to solve the perceived problems with CSS—issues of scope and specificity—without asking everyone to change the way they write.

In short, my response was “hey, like, whatever, it’s cool, each to their own.” There are many, many different kinds of websites and many, many different ways to make them. I like that.

So I was kind of surprised by the bullishness of those who seem to honestly believe that this is the way to build on the web, and that CSS will become a relic. At one point I even asked directly, “Do you really believe that CSS is over? That all styles will be managed through JavaScript from here on?” and received an emphatic “Yes!” in response.

I find that a little disheartening. Chris has written about the confidence of youth:

Discussions are always worth having. Weighing options is always interesting. Demonstrating what has worked (and what hasn’t) for you is always useful. There are ways to communicate that don’t resort to dogmatism.

There are big differences between saying:

  • You can do this,
  • You should do this, and
  • You must do this.

My take on the inline styles discussion was that it fits firmly in the “you can do this” slot. It could be a very handy tool to have in your toolbox for certain situations. But ideally your toolbox should have many other tools. When all you have is a hammer, yadda, yadda, yadda, nail.

I don’t think you do your cause any favours by jumping straight to the “you must do this” stage. I think that people are more amenable to hearing “hey, here’s something that worked for me; maybe it will work for you” rather than “everything you know is wrong and this is the future.” I certainly don’t think that it’s helpful to compare CSS to Neanderthals co-existing with JavaScript Homo Sapiens.

Like I said on the podcast, it’s a big web out there. The idea that there is “one true way” that would work on all possible projects seems unlikely—and undesirable.

“A ha!”, you may be thinking, “But you yourself talk about progressive enhancement as if it’s the one true way to build on the web—hoisted by your own petard.” Actually, I don’t. There are certainly situations where progressive enhancement isn’t workable—although I believe those cases are rarer than you might think. But my over-riding attitude towards any questions of web design and development is:

It depends.

Building the dConstruct 2015 site

I remember when I first saw Paddy’s illustration for this year’s dConstruct site, I thought “Well, that’s a design direction, but there’s no way that Graham will be able to implement all of it.” There was a tight deadline for getting the site out, and let’s face it, there was so much going on in the design that we’d just have to prioritise.

I underestimated Graham’s sheer bloody-mindedness.

At the next front-end pow-wow at Clearleft, Graham showed the dConstruct site in all its glory …in Lynx.

http://2015.dconstruct.org in Lynx.

I love that. Even with the focus on the gorgeous illustration and futuristic atmosphere of the design, Graham took the time to think about the absolute basics: marking up the content in a logical structured way. Everything after that—the imagery, the fonts, the skewed style—all of it was built on a solid foundation.

One site, two browsers.

It would’ve been easy to go crazy with the fonts and images, but Graham made sure to optimise everything to within an inch of its life. The biggest bottleneck comes from a third party provider—the map tiles and associated JavaScript …so that’s loaded in after the initial content is loaded. It turns out that the site build was a matter of prioritisation after all.

http://2015.dconstruct.org/

There’s plenty of CSS trickery going on: transforms, transitions, and opacity. But for the icing on the cake, Graham reached for canvas and programmed space elevator traffic with randomly seeded velocity and size.

Oh, and of course it’s all responsive.

So, putting that all together…

The dConstruct 2015 site is gorgeous, semantic, responsive, and performant. Conventional wisdom dictates that you have to choose, but this little site—built on a really tight schedule—shows otherwise.

100 words 051

I’ve been thinking about what Harry said recently about logic in CSS. I think he makes an astute observation.

You can think of each part of a selector as a condition:

condition { }

That translates to code like:

if (condition) { }

So if you have a CSS selector like this:

condition1 condition2 condition3 { }

…it translates to code like this:

if (condition1) {
    if (condition2) {
        if (condition3) {
        }
    }
}

That doesn’t feel very elegant, even in its simpler form:

if (condition1 && condition2 && condition3) { }

I like Harry’s rule of thumb:

Think of your selectors as mini programs: Every time you nest or qualify, you are adding an if statement; read these ifs out loud to yourself to try and keep your selectors sane.

100 words 033

Charlotte came up with a nifty trick that combines two different techniques she’s been working with.

The first building block is the pattern of using checkboxes, labels, and the :checked pseudo-class to create progressive disclosure toggles without JavaScript. There’s just one caveat with that technique though—the item being toggled must appear after the trigger label in the source order of the markup.

Enter the second building block: flexbox. With flexbox, we’re no longer at the mercy of the source order in our markup. By using flex-direction: column-reverse, the progressive disclosure trigger can be displayed after the item being toggled.
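Putting those two building blocks together looks something like this sketch (the class names and markup structure are my own invention for the example). The markup puts the checkbox and its label before the content being toggled:

<div class="disclosure">
  <input type="checkbox" id="more" class="disclosure-checkbox">
  <label for="more" class="disclosure-trigger">Show more</label>
  <div class="disclosure-panel">
    <p>The content being disclosed.</p>
  </div>
</div>

Then the CSS handles both the toggling and the visual re-ordering:

.disclosure {
  display: flex;
  /* Flip the visual order: the trigger appears below the panel,
     even though it comes before it in the source */
  flex-direction: column-reverse;
}
.disclosure-checkbox {
  position: absolute;
  opacity: 0; /* hide the checkbox itself; the label acts as the trigger */
}
.disclosure-panel {
  display: none;
}
.disclosure-checkbox:checked ~ .disclosure-panel {
  display: block;
}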

BEMphasis

I’m working on a project with a team of developers who are trying out the BEM syntax for their class names. I’ve tried BEM before, but I’m not a huge fan of underscores (for no particularly good reason) so I tend to use a modified version that avoids those characters. Still, when it comes to coding style—tabs vs. spaces, camelCasing, underscores, hyphens, or whatever—my personal opinion takes a back seat to the group consensus. And on this project, the group has opted for proper BEM all the way, and I’m more than happy to go along with that.

When it comes to naming a modified version of a component in BEM, the syntax looks like this:

component--modifier

That raises a question about how you then deploy that class name in your HTML. You could just use the modified name:

<element class="component--modifier">

But then in your CSS you’d have to repeat all the style rules for the .component selector inside your rule block for the .component--modifier selector. Sass could help you out here, especially with its “extends” functionality, but the final CSS is still going to contain duplicated rules.

The alternative is to keep your CSS lean and modular, and write your HTML like this:

<element class="component component--modifier">

Now you’ve taken the duplication out of the CSS and put it into your markup. It looks a little weird. But, on balance, it’s probably the lesser of two evils.

It strikes me that this pattern of always having the base component class name appear anywhere you have a component--modifier class name is something that you could programmatically check for. It should be relatively straightforward to write a lint tool that looks in the value of every class attribute and, if it finds any instances of foo--bar, checks to make sure that foo is also in there.
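A rough sketch of what the core of that check might look like in Node—this isn’t a real plugin, and the regular expression is deliberately naive:

// Scan an HTML file for class attributes and flag any element that uses a
// BEM modifier (foo--bar) without also including the base class (foo).
var fs = require('fs');

var html = fs.readFileSync('index.html', 'utf8');
var classAttributes = html.match(/class="([^"]*)"/g) || [];

classAttributes.forEach(function (attribute) {
  var classes = attribute.slice(7, -1).split(/\s+/);
  classes.forEach(function (className) {
    var base = className.split('--')[0];
    if (className.indexOf('--') !== -1 && classes.indexOf(base) === -1) {
      console.log('Missing base class "' + base + '" alongside "' + className + '"');
    }
  });
});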

Sounds like it could be a nice little task for Grunt or Gulp. Maybe somebody has already made it.

Mind you, it seems that most lint tools out there are focused very much on enforcing a coding style for CSS and JavaScript—not so much for HTML. I worry that this reflects the mindset of many front-end developers who view CSS and JavaScript as more important than markup …which is a bit odd considering that CSS and JavaScript are subservient to the HTML document that they’re styling and scripting.

100 words 025

I often get asked what resources I’d recommend for someone totally new to making websites. There are surprisingly few tutorials out there aimed at the complete beginner. There’s Jon Duckett’s excellent—and beautiful—book. There’s the Codebar curriculum (which I keep meaning to edit and update; it’s all on Github).

Now there’s a new resource by Damian Wielgosik called How to Code in HTML5 and CSS3. Personally, I would drop the “5” and the “3”, but that’s a minor quibble; this is a great book. It manages to introduce concepts in a logical, understandable way.

And it’s free.

Inlining critical CSS for first-time visits

After listening to Scott rave on about how much of a perceived-performance benefit he got from inlining critical CSS on first load, I thought I’d give it a shot over at The Session. On the chance that this might be useful for others, I figured I’d document what I did.

The idea here is that you can give a massive boost to the perceived performance of the first page load on a site by putting the most important CSS in the head of the page. Then you cache the full stylesheet. For subsequent visits you only ever use the external stylesheet. So if you’re squeamish at the thought of munging your CSS into your HTML (and that’s a perfectly reasonable reaction), don’t worry—this is a temporary workaround just for initial visits.

My particular technology stack here is using Grunt, Apache, and PHP with Twig templates. But I’m sure you can adapt this for other technology stacks: what’s important here isn’t the technology, it’s the thinking behind it. And anyway, the end user never sees any of those technologies: the end user gets HTML, CSS, and JavaScript. As long as that’s what you’re outputting, the specifics of the technology stack really don’t matter.

Generating the critical CSS

Okay. First question: how do you figure out which CSS is critical and which CSS can be deferred?

To help answer that, and automate the task of generating the critical CSS, Filament Group have made a Grunt task called grunt-criticalcss. I added that to my project and updated my Gruntfile accordingly:

grunt.initConfig({
    // All my existing Grunt configuration goes here.
    criticalcss: {
        dist: {
            options: {
                url: 'http://thesession.dev',
                width: 1024,
                height: 800,
                filename: '/path/to/main.css',
                outputfile: '/path/to/critical.css'
            }
        }
    }
});

I’m giving it the name of my locally-hosted version of the site and some parameters to judge which CSS to prioritise. Those parameters are viewport width and height. Now, that’s not a perfect way of judging which CSS matters most, but it’ll do.

Then I add it to the list of Grunt tasks:

// All my existing Grunt tasks go here.
grunt.loadNpmTasks('grunt-criticalcss');

grunt.registerTask('default', ['sass', etc., 'criticalcss']);

The end result is that I’ve got two CSS files: the full stylesheet (called something like main.css) and a stylesheet that only contains the critical styles (called critical.css).

Cache-busting CSS

Okay, this is a bit of a tangent but trust me, it’s going to be relevant…

Most of the time it’s a very good thing that browsers cache external CSS files. But if you’ve made a change to that CSS file, then that feature becomes a bug: you need some way of telling the browser that the CSS file has been updated. The simplest way to do this is to change the name of the file so that the browser sees it as a whole new asset to be cached.

You could use query strings to do this cache-busting but that has some issues. I use a little bit of Apache rewriting to get a similar effect. I point browsers to CSS files like this:

<link rel="stylesheet" href="/css/main.20150310.css">

Now, there isn’t actually a file named main.20150310.css, it’s just called main.css. To tell the server where the actual file is, I use this rewrite rule:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)\.(\d+)\.(js|css)$ $1.$3 [L]

That tells the server to ignore those numbers in JavaScript and CSS file names, but the browser will still interpret it as a new file whenever I update that number. You can do that in a .htaccess file or directly in the Apache configuration.

Right. With that little detour out of the way, let’s get back to the issue of inlining critical CSS.

Differentiating repeat visits

That number that I’m putting into the filenames of my CSS is something I update in my Twig template, like this (although this is really something that a Grunt task could do, I guess):

{% set cssupdate = '20150310' %}

Then I can use it like this:

<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

I can also use JavaScript to store that number in a cookie called csscached so I’ll know if the user has a cached version of this revision of the stylesheet:

<script>
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

The absence or presence of that cookie is going to be what determines whether the user gets inlined critical CSS (a first-time visitor, or a visitor with an out-of-date cached stylesheet) or whether the user gets a good ol’ fashioned external stylesheet (a repeat visitor with an up-to-date version of the stylesheet in their cache).

Here are the steps I’m going through:

First of all, set the Twig cssupdate variable to the last revision of the CSS:

{% set cssupdate = '20150310' %}

Next, check to see if there’s a cookie called csscached that matches the value of the latest revision. If there is, great! This is a repeat visitor with an up-to-date cache. Give ‘em the external stylesheet:

{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

If not, then dump the critical CSS straight into the head of the document:

{% else %}
<style>
{% include '/css/critical.css' %}
</style>

Now I still want to load the full stylesheet but I don’t want it to be a blocking request. I can do this using JavaScript. Once again it’s Filament Group to the rescue with their loadCSS script:

 <script>
    // include loadCSS here...
    loadCSS('/css/main.{{ cssupdate }}.css');

While I’m at it, I store the value of cssupdate in the csscached cookie:

    document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

Finally, consider the possibility that JavaScript isn’t available and link to the full CSS file inside a noscript element:

<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

And we’re done. Phew!

Here’s how it looks all together in my Twig template:

{% set cssupdate = '20150310' %}
{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
{% else %}
<style>
{% include '/css/critical.css' %}
</style>
<script>
// include loadCSS here...
loadCSS('/css/main.{{ cssupdate }}.css');
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>
<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

You can see the production code from The Session in this gist. I’ve tweaked the loadCSS script slightly to match my preferred JavaScript style but otherwise, it’s doing exactly what I’ve outlined here.

The result

According to Google’s PageSpeed Insights, I done good.

Optimising https://thesession.org/