Tags: framework

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools:

  • overall file size (the page itself plus all its assets), and
  • number of requests.

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.
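
If you want rough numbers for some of these moments straight from the browser, the Performance APIs can help. Here’s a minimal sketch in plain JavaScript: time to first byte comes from the Navigation Timing API, and the paint timings come from a PerformanceObserver. Note that “first meaningful paint” and “first meaningful interaction” aren’t exposed directly, so treat these as rough proxies.

  // A rough sketch of reading some of these moments in the browser.
  // Time to first byte, from the Navigation Timing API.
  const [navigation] = performance.getEntriesByType('navigation');
  console.log('Time to first byte:', navigation.responseStart, 'ms');

  // First paint and first contentful paint, via a PerformanceObserver.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.name + ':', entry.startTime, 'ms');
    }
  }).observe({ type: 'paint', buffered: true });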

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

First visit factors and repeat visit factors

1st byte
  • first visit: server speed, network speed
  • repeat visit: server speed, network speed, caching
1st render
  • first visit: network speed, critical path assets
  • repeat visit: network speed, critical path assets, caching
1st meaningful paint
  • first visit: network speed, font-loading strategy, image optimisation
  • repeat visit: network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction
  • first visit: network speed, device processing power, JavaScript size
  • repeat visit: network speed, device processing power, JavaScript size, caching

Alright. Now it’s time to get some numbers for each of the four moments in time. I use WebPageTest for this. Choose a realistic setting, like 3G on an Android from the east coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

1st byte: 1.476 seconds (first visit), 1.215 seconds (repeat visit)
1st render: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
1st meaningful paint: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
1st meaningful interaction: 2.868 seconds (first visit), 2.083 seconds (repeat visit)

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
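
As a rough sketch of what that deferral might look like (the filename here is illustrative, not the actual adactio.com stylesheet): with the critical styles inlined in a style element in the head, the full stylesheet can be appended from JavaScript so that it no longer blocks rendering.

  // Critical styles are already inlined in the head; now load the full
  // stylesheet without blocking the initial render. Illustrative filename.
  const stylesheet = document.createElement('link');
  stylesheet.rel = 'stylesheet';
  stylesheet.href = '/css/global.css';
  document.head.appendChild(stylesheet);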

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working controls, the user can end up in an uncanny valley, tapping on page elements that look fine but aren’t ready yet.
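
Here’s a hypothetical illustration of that gap: the button is there in the server-rendered HTML straight away, but tapping it does nothing until a potentially large bundle of JavaScript has downloaded, parsed, and run.

  // Hypothetical example: this handler only exists once the JavaScript
  // bundle has arrived and executed. Until then, the server-rendered
  // button looks tappable but does nothing.
  document.querySelector('#add-to-cart').addEventListener('click', () => {
    // …only now does tapping the button actually do something
  });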

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one listing first-visit targets and one listing repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

1st byte: 1.476 seconds (first-visit goal), 1.215 seconds (repeat-visit goal)
1st render: 2.133 seconds (first-visit goal), 1.430 seconds (repeat-visit goal)
1st meaningful paint: 2.133 seconds (first-visit goal), 1.430 seconds (repeat-visit goal)
1st meaningful interaction: 2.368 seconds (first-visit goal), 1.583 seconds (repeat-visit goal)

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

Priority
1st byte: medium
1st render: high
1st meaningful paint: low
1st meaningful interaction: low

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

AMPstinction

I’ve come to believe that the goal of any good framework should be to make itself unnecessary.

Brian said it explicitly of his PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

That makes total sense, especially if your code is a polyfill—those solutions are temporary by design. Autoprefixer is another good example of a piece of code that becomes less and less necessary over time.

But I think it’s equally true of any successful framework or library. If the framework becomes popular enough, it will inevitably end up influencing the standards process, thereby becoming dispensable.

jQuery is the classic example of this. There’s very little reason to use jQuery these days because you can accomplish so much with browser-native JavaScript. But the reason why you can accomplish so much without jQuery is because of jQuery. I don’t think we would have querySelector without jQuery. The library proved the need for the feature. The same is true for a whole load of DOM scripting features.
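
For what it’s worth, here’s the kind of DOM scripting that used to call for jQuery but is now a couple of lines of browser-native JavaScript (the selector and class name are made up for illustration):

  // What would once have been $('.post a').addClass('external') in jQuery
  // is now built into the browser.
  document.querySelectorAll('.post a').forEach((link) => {
    link.classList.add('external');
  });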

The same process is almost certain to occur with React—it’s a good bet there will be a standardised equivalent to the virtual DOM at some point.

When Google first unveiled AMP, its intentions weren’t clear to me. I hoped that it existed purely to make itself redundant:

As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

Alas, as time has passed, that hope shows no signs of being fulfilled. If anything, I’ve noticed publishers using the existence of their AMP pages as a justification for just letting their “regular” pages put on weight.

Worse yet, the messaging from Google around AMP has shifted. Instead of pitching it as a format for creating parallel versions of your web pages, they’re now also extolling the virtues of having your AMP pages be the only version you publish:

In fact, AMP’s evolution has made it a viable solution to build entire websites.

On an episode of the Dev Mode podcast a while back, AMP was a hotly-debated topic. But even those defending AMP were doing so on the understanding that it was more a proof-of-concept than a long-term solution (and also that AMP is just for news stories—something else that Google are keen to change).

But now it’s clear that the Google AMP Project is being marketed more like a framework for the future: a collection of web components that prioritise performance …which is kind of odd, because that’s also what Google’s Polymer project is. The difference being that pages made with Polymer don’t get preferential treatment in Google’s search results. I can’t help but wonder how the Polymer team feels about AMP’s gradual pivot onto their territory.

If the AMP project existed in order to create a web where AMP was no longer needed, I think I could get behind it. But the more it’s positioned as the only viable solution to solving performance, the more uncomfortable I am with it.

Which, by the way, brings me to one of the most pernicious ideas around Google AMP—positioning anyone opposed to it as not caring about web performance. Nothing could be further from the truth. It’s precisely because performance on the web is so important that it deserves a long-term solution, co-created by all of us: not some commandments delivered to us from on high by one organisation, enforced through preferential treatment by that organisation’s monopoly in search.

It’s the classic logical fallacy:

  1. Performance! Something must be done!
  2. AMP is something.
  3. Now something has been done.

By marketing itself as the only viable solution to the web performance problem, I think the AMP project is doing itself a great disservice. If it positioned itself as an example to be emulated, I would welcome it.

I wish that AMP were being marketed more like a temporary polyfill. And as with any polyfill, I look forward to the day when AMP is no longer necessary.

I want AMP to become extinct. I genuinely think that the Google AMP team should share that wish.

Design systems

Talking about scaling design can get very confusing very quickly. There are a bunch of terms that get thrown around: design systems, pattern libraries, style guides, and components.

The generally-accepted definition of a design system is that it’s the outer circle—it encompasses pattern libraries, style guides, and any other artefacts. But there’s something more. Just because you have a collection of design patterns doesn’t mean you have a design system. A system is a framework. It’s a rulebook. It’s what tells you how those patterns work together.

This is something that Cennydd mentioned recently:

Here’s my thing with the modularisation trend in design: where’s the gestalt?

In my mind, the design system is the gestalt. But Cennydd is absolutely right to express concern—I think a lot of people are collecting patterns and calling the resulting collection a design system. No. That’s a pattern library. You still need to have a framework for how to use those patterns.

I understand the urge to fixate on patterns. They’re small enough to be manageable, and they’re tangible—here’s a carousel; here’s a date-picker. But a design system is big and intangible.

Games are great examples of design systems. They’re frameworks. A game is a collection of rules and constraints. You can document those rules and constraints, but you can’t point to something and say, “That is football” or “That is chess” or “That is poker.”

Even though they consist entirely of rules and constraints, football, chess, and poker still produce an almost infinite possibility space. That’s quite overwhelming. So it’s easier for us to grasp instances of football, chess, and poker. We can point to a particular occurrence and say, “That is a game of football”, or “That is a chess match.”

But if you tried to figure out the rules of football, chess, or poker just from watching one particular instance of the game, you’d have your work cut out for you. It’s not impossible, but it is challenging.

Likewise, it’s not very useful to create a library of patterns without providing any framework for using those patterns.

I would go so far as to say that the actual code for the patterns is the least important part of a design system (or, certainly, it’s the part that should be most malleable and open to change). It’s more important that the patterns have been identified, named, described, and crucially, accompanied by some kind of guidance on usage.

I could easily imagine using a tool like Fractal to create a library of text snippets with no actual code. Those pieces of text—which provide information on where and when to use a pattern—could be more valuable than providing a snippet of code without any context.

One of the very first large-scale pattern libraries I can remember seeing on the web was Yahoo’s Design Pattern Library. Each pattern outlined:

  1. the problem being solved;
  2. when to use this pattern;
  3. when not to use this pattern.

Only then, almost incidentally, did they link off to the code for that pattern. But it was entirely possible to use the system of patterns without ever using that code. The code was just one instance of the pattern. The important part was the framework that helped you understand when and where it was appropriate to use that pattern.

I think we lose sight of the real value of a design system when we focus too much on the components. The components are the trees. The design system is the forest. As Paul asked:

What methodologies might we uncover if we were to focus more on the relationships between components, rather than the components themselves?

Less JavaScript

Every front-end developer at Clearleft went to FFConf last Friday: me, Mark, Graham, Charlotte, and Danielle. We weren’t about to pass up the opportunity to attend a world-class dev conference right here in our home base of Brighton.

The day was unsurprisingly excellent. All the speakers brought their A-game on a wide range of topics. Of course JavaScript was covered, but there was also plenty of mindfood on CSS, accessibility, progressive enhancement, dev tools, creative coding, and even emoji.

Normally FFConf would be a good opportunity to catch up with some Pauls from the Google devrel team, but because of an unfortunate scheduling clash this year, all the Pauls were at Chrome Dev Summit 2016 on the other side of the Atlantic.

I’ve been catching up on the videos from the event. There’s plenty of tech-related stuff: dev tools, web components, and plenty of talk about progressive web apps. But there was also a very, very heavy focus on performance. I don’t just mean performance at the shallow scale of file size and optimisation, but a genuine questioning of the impact of our developer workflows and tools.

In his talk on service workers (what else?), Jake makes the point that not everything needs to be a single page app, echoing Ada’s talk at FFConf.

He also points out that if you really want fast rendering, nothing on the client side quite beats a server render.

They’ve written a lot of JavaScript to make this quite slow.

Unfortunately, all too often, I hear people say that a progressive web app must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There’s a lot of cargo-culting around single page apps.

Alex followed up his barnstorming talk from the Polymer Summit with some more uncomfortable truths about how mobile phones work.

Cell networks are basically kryptonite to the protocols and assumptions that the web was built on.

And JavaScript frameworks aren’t helping. Quite the opposite.

But make no mistake: if you’re using one of today’s more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this.

Today’s frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance.

Adoption

Tom wrote a post on Ev’s blog a while back called JavaScript Frameworks: Distribution Channels for Good Ideas (I’ve been hoping he’d publish it on his own site so I’d have a more permanent URL to point to, but so far, no joy). It’s well worth a read.

I don’t really have much of an opinion on his central point that browser makers should work more closely with framework makers, though I’m not so sure I agree with the premise that frameworks are going to be around for the long haul. I think good frameworks—like jQuery—should aim to make themselves redundant.

But anyway, along the way, Tom makes this observation:

Google has an institutional tendency to go it alone.

JavaScript not good enough? Let’s create Dart to replace it. HTML not good enough? Let’s create AMP to replace it. I’m just waiting for them to announce Google Style Sheets.

I don’t really mind these inventions. We’re not forced to adopt them, and generally, we don’t. Tom again:

They poured enormous time and money into Dart, even building an entire IDE, without much to show for it. Contrast Dart’s adoption with the adoption of TypeScript and Flow, which layer improvements on top of JavaScript instead of trying to replace it.

See, that’s a really, really good point. It’s so much easier to get people to adjust their behaviour than to change it completely.

Sass is a really good example of this. You can take any .css file, save it as a .scss file, and now you’re using Sass. Then you can start using features (or not) as needed. Very smart.

Incidentally, I’m very curious to know how many people use the scss syntax (which is a superset of CSS) compared to how many people use the sass indented syntax (the one with significant whitespace). In his brilliant book Sass for Web Designers, I don’t think Dan even mentioned the indented syntax.

Or compare the adoption of Sass to the adoption of HAML. Now, admittedly, the disparity there might be because Sass adds new features, whereas HAML is a purely stylistic choice. But I think the more fundamental difference is that Sass—with its scss syntax—only requires you to slightly adjust your behaviour, whereas something like HAML requires you to go all in right from the start.

This is something that has been on my mind lately while I’ve been preparing my new talk on evaluating technology (the talk went down very well at An Event Apart San Francisco, by the way—that’s a relief). In the talk, I made a reference to one of Grace Hopper’s famous quotes:

Humans are allergic to change.

Now, Grace Hopper subsequently says:

I try to fight that.

I contrast that with the approach that Tim Berners-Lee and Robert Cailliau took with their World Wide Web project. The individual pieces were built on what people were already familiar with. URLs use slashes so they’d feel similar to UNIX file paths. And the first fledgling version of HTML took its vocabulary almost wholesale from a version of SGML already in use at CERN. In fact, you could pretty much take an existing CERN SGML file and open it as an HTML file in a web browser.

Oh, and that browser would ignore any tags it didn’t understand—behaviour that, in my opinion, would prove crucial to the growth and success of HTML. Because of its familiarity, its simplicity, and its forgiving error handling, HTML turned out to be more successful than Tim Berners-Lee expected, as he wrote in his book Weaving The Web:

I expected HTML to be the basic waft and weft of the Web but documents of all types: video, computer aided design, sound, animation and executable programs to be the colored threads that would contain much of the content. It would turn out that HTML would become amazingly popular for the content as well.

HTML and SGML; Sass and CSS; TypeScript and JavaScript. The new technology builds on top of the existing technology instead of wiping the slate clean and starting from scratch.

Humans are allergic to change. And that’s okay.

Building Reprieve

The front page of today’s Guardian ran with a story on Binyam Mohamed and his fight to stop evidence of his torture from being destroyed:

The photograph will be destroyed within 30 days of his case being dismissed by the American courts — a decision on which is due to be taken by a judge imminently, Clive Stafford Smith, Mohamed’s British lawyer and director of Reprieve, the legal charity, said today.

Reprieve recently relaunched their website and they chose Clearleft to help them. I was responsible for the front-end build; that’s my usual role. But unusually, I also had to build a CMS.

We don’t normally do back-end work at Clearleft. That’s a conscious decision; we don’t want to tie ourselves to any particular server-side language. Usually we partner up with server-side developers: either those of the client or independent agencies like New Bamboo. In the case of Reprieve, the budget didn’t allow for that option. We were faced with three possibilities:

  1. Write a CMS from scratch, probably using PHP and MySQL—the technologies I’m most comfortable with.
  2. Take an off-the-shelf platform like WordPress or Expression Engine and twist it to make it fit the needs of the client.
  3. Create a CMS using Django which would give us an admin interface for free.

There was a three-person team responsible for the project: myself, Cennydd and Paul. We did a little card-sorting exercise, weighing up the pros and cons of each option. Django came out on top.

I had conflicting emotions about this. On the one hand, I was pleased to have the chance to learn a new technology. On the other hand, I was absolutely terrified that I would be completely out of my depth.

I had seen Simon giving a talk on Django just a few weeks previously. I stuck my hand up during the Q and A to ask, “Is it possible to learn Django without first learning Python?” Simon said that a year ago, he would have said no. But given the work of fellow designers like Jeff and Bryan, the answer isn’t so clear cut. Maybe Django could be a really good introduction to Python.

By far the hardest part of building a Django website was the initial set-up. Sure, installing Django was pretty straightforward …once you’ve made sure you’ve installed the right image libraries, the right database bindings, blah, blah, blah. I can deal with programming challenges but I have no desire to become a sysadmin. Setting up my local dev environment on my Mac was a hair-tearing experience. Setting up the live environment, even on a Django-friendly host like WebFaction, was almost as frustrating …no thanks to the worst. screencast. EVER.

But I persevered, I obediently followed the tutorial, and I discovered all the things that make Django such a powerful framework: the excellent separation of concerns, the superb templating system, the lack of so-called front-end “helpers” that cripple other server-side frameworks. I think Gareth was really onto something when he noticed the way that the web standards world appears to be choosing Django.

In the end, Django proved to be absolutely the right choice for Reprieve. It provided enough flexibility for me to build a site tailored to the specific needs of the client while at the same time, giving me plenty of pre-built tools like RSS and, crucially, the admin interface. The client is extremely happy with the power that the admin interface offers.

For my part, it was an honour to work on a project with this mission statement:

We investigate, we litigate and we educate, working on the frontline, providing legal support to prisoners unable to pay for it themselves. We promote the rule of law around the world, and secure each person’s right to a fair trial. And in doing so, we save lives.

The Lessons of CSS Frameworks

Eric Meyer is going to talk about CSS frameworks here at An Event Apart San Francisco.

He did a Google search for “CSS Frameworks” and put together a list of the big players. It’s a list of nine frameworks. Eric wants to know two things: what they’re doing the same and what they’re doing differently.

Let’s get one question out of the way: which one is right for you? Answer… none of the above. It’s like templates. There’s nothing wrong with templates, but you don’t put together your client’s site based on a template, right? They can be a good starting point for ideas, but you do your own designs. If you’re going to use a framework, it should be yours: one that you’ve created. You can look at existing frameworks for ideas and hack at them. But the professionals in this room are not well served by picking up a framework and using it as-is.

Eric put together a grid of features and which frameworks support those features. Every framework does reset, colours, and fonts. The fact that every framework has a reset is evidence of the frustration we all feel with the inconsistencies between browsers. The rules for colour tend to be much more minimal. Font styling, on the other hand, is more fully-featured generally. Whereas the colour might just be set for the body element, font sizes and faces are specified throughout. Usually that font face is Helvetica. Most frameworks steer away from trying to style form elements. Almost all of them do layout, usually combinations of columns. Four of the nine frameworks included print styles. Three of the nine included hacks.

Font sizes

Four of the nine frameworks are setting font sizes on the body with pixels. Tripoli uses Richard’s 62.5% rule. Eric points out how using a 76% rule on the body can lead to inconsistent font sizing because browsers round off font sizes differently. Only two of the frameworks aren’t using unitless line-heights. Generally you want a line height of 1 to get propagated down the document tree rather than simply the computed value of 1em in pixels. You don’t want an element with 40-pixel text ending up with a 12-pixel line height.

Heading sizes

Most of these frameworks, with the exception of YUI, are setting heading sizes in some form or another. The only place where you’ll see a heading size go below 1em is in a browser style sheet. In the frameworks, no heading size, even h6, goes below the size of body text, 1em. Blueprint and Elements set pretty large sizes on h1 and h2. The other frameworks cluster around the same size range, never getting very big or very small. Eric averaged out all the measurements to get the average size for h1 and h2.

Naming conventions

Where frameworks are using IDs or classes, what names are they using? Four of them use pseudo-namespaced class names beginning with grid- or container- or span- (which you would apply to a div!?). You’re supposed to put classes in your markup like grid-3 or span-5 or whatever. This seems pretty complicated. Three frameworks use more intuitive names like page, header or main. In fact header, main and footer are universal IDs across the three frameworks.

Style inclusion patterns

Some of the frameworks have a single short style sheet that you point to from your markup, which then links off to separate style sheets for fonts, colours, layout, etc. But most of them use separate style sheets and you must link to each one in your markup. Eric reckons that this is because IE for Windows will cache the first style sheet you point to with a link element but not any subsequent style sheets with @import.

What the hack?

There are two kinds of hacks:

  1. Hacks that point to failings in CSS like self-clearing floated elements and things like pseudo-padding.
  2. Hacks for Internet Explorer 6.

Some cool bits

Some of the frameworks provide compressed versions for production use to keep file size down.

Three of the frameworks had debugging styles that you could “turn on” to, say, display the grid in the document.

YAML provides a draft file which is like a template style sheet. The selectors are written out but the declarations are left empty. This could be a handy training tool (fill in the curly braces).

960 provides “sketch” files: PDFs of the grid for you to print out and scribble on.

Thus endeth Eric’s roundup of CSS frameworks.

Wireframework

There’s been a lot of buzz lately around a new CSS framework called Blueprint. It’s basically a collection of resources pulled together from other sources: Khoi’s grids, Richard’s vertical rhythm, Eric’s reset and more.

Some people—including contributors to the CSS—have expressed their reservations about the non-semantic class names used in the framework. That’s a valid concern but, as Simon pointed out in the comments to Mark’s post, you don’t have to restrict yourself to those class names: you can always add your own semantics to the markup.

I don’t see myself using Blueprint. It just seems too restrictive for use in a real-world project. Maybe if I’m building a grid-based layout that’s precisely 960 pixels wide it could save me some time, but I’m mostly reminded of the quote apocryphally attributed to Henry Ford about the Model T:

The customer can have any color he wants so long as it’s black.

Unless I’m creating cookie-cutter sites, I don’t think a CSS framework can help me. That said, I think a framework like Blueprint has its place.

At Clearleft, a lot of our work involves wireframing. Every Information Architect has their own preference for tools and formats for creating wireframes and prototypes: some use Visio, others Omnigraffle. James and Richard usually start with paper and then move on to HTML, CSS and even a dab of JavaScript.

This results in quick wireframes that illustrate hierarchy, are addressable and allow for a good level of interaction. Creating HTML wireframes requires a different mindset to creating documents intended for the Web. You don’t have to worry about cross-browser CSS, bulletproof markup or unobtrusive JavaScript. With those concerns out of the equation, the benefits of using cookie-cutter code really come to the fore.

So while I might have reservations about using a JavaScript library on a production site, I’d have no such qualms when it comes to generating a quick prototype. The same goes for Blueprint. I think it could be ideally suited to HTML wireframes.

I may be a bit of a control freak, but I’d no sooner use a CSS framework for a live site than I’d use clip art for images. I firmly believe that creating good markup is a craft that, like good design, takes time. It may seem unrealistic to some, but I don’t want to compromise that quality without a very good reason.

That’s my hard-nosed attitude when it comes to creating documents for the World Wide Web. If the documents are intended purely as wireframes for internal use, then my attitude softens considerably. Then I think a framework like Blueprint could really shine.