Tags: framework

Thursday, October 11th, 2018

Why do people decide to use frameworks?

Some sensible answers to this question here…

…of which, exactly zero mention end users.

Friday, September 28th, 2018

What is Modular CSS?

A walk down memory lane, looking at the history of modular CSS methodologies (and the people behind them):

Thursday, September 20th, 2018

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools (or with a short script; see the sketch after this list):

  • overall file size (page weight + assets), and
  • number of requests.
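By way of illustration (a rough sketch, not part of the original process), you can pull both of those numbers out of the browser with the Resource Timing API. Note that transferSize is only reported for cross-origin assets that send a Timing-Allow-Origin header, so treat the total as an approximation:

    // A minimal sketch: total transfer size and request count for the current page.
    const resources = performance.getEntriesByType('resource');
    const nav = performance.getEntriesByType('navigation')[0];

    const requestCount = resources.length + 1; // +1 for the HTML document itself
    const pageWeight = resources
      .concat(nav ? [nav] : [])
      .reduce((total, entry) => total + (entry.transferSize || 0), 0);

    console.log(`${requestCount} requests, ${(pageWeight / 1024).toFixed(1)} KB transferred`);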

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.
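The browser can report some of these moments directly. Here’s a minimal sketch using the Performance API; first meaningful paint and first meaningful interaction don’t have standardised APIs, so this only covers time to first byte and the paint timings that browsers do expose:

    // A minimal sketch: log time to first byte and the paint timings.
    const nav = performance.getEntriesByType('navigation')[0];
    if (nav) {
      console.log('Time to first byte:', nav.responseStart, 'ms');
    }

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // Reports 'first-paint' and 'first-contentful-paint'.
        console.log(entry.name + ':', entry.startTime, 'ms');
      }
    }).observe({ type: 'paint', buffered: true });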

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

First visit factors and repeat visit factors

1st byte
  • First visit: server speed, network speed
  • Repeat visit: server speed, network speed, caching

1st render
  • First visit: network speed, critical path assets
  • Repeat visit: network speed, critical path assets, caching

1st meaningful paint
  • First visit: network speed, font-loading strategy, image optimisation
  • Repeat visit: network speed, font-loading strategy, image optimisation, caching

1st meaningful interaction
  • First visit: network speed, device processing power, JavaScript size
  • Repeat visit: network speed, device processing power, JavaScript size, caching
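That caching row can mean HTTP caching headers, but it can also mean a service worker. As a rough illustration (not the actual code running on adactio.com), this is the kind of cache-first logic that takes the network out of the equation for repeat visits:

    // A rough sketch of a cache-first service worker for static assets.
    // The file names are placeholders.
    const cacheName = 'static-v1';

    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open(cacheName).then((cache) =>
          cache.addAll(['/css/styles.css', '/js/main.js'])
        )
      );
    });

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then(
          (cached) => cached || fetch(event.request)
        )
      );
    });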

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

1st byte: 1.476 seconds (first visit), 1.215 seconds (repeat visit)
1st render: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
1st meaningful paint: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
1st meaningful interaction: 2.868 seconds (first visit), 2.083 seconds (repeat visit)

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.
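On a site that is using web fonts, one way to close a gap like that is to take control of when the fonts are applied. Here’s a minimal sketch using the CSS Font Loading API (the font name is a placeholder):

    // A minimal sketch: render text in a fallback font straight away,
    // and only switch to the web font once it has actually loaded.
    if ('fonts' in document) {
      document.fonts.load('1em "Example Web Font"').then(() => {
        document.documentElement.classList.add('fonts-loaded');
      });
    }
    // The stylesheet would then scope the web font to the .fonts-loaded class.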

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
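Here’s a sketch of that kind of deferral, assuming the critical styles are already inlined in a style element in the head (the stylesheet URL is a placeholder, and waiting for the load event is just the simplest trigger, not necessarily the best one):

    // A minimal sketch: fetch the full stylesheet without blocking first render.
    function loadStylesheet(href) {
      const link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = href;
      document.head.appendChild(link);
    }

    window.addEventListener('load', () => {
      loadStylesheet('/css/full.css');
    });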

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

1st byte: 1.476 seconds (first visit goal), 1.215 seconds (repeat visit goal)
1st render: 2.133 seconds (first visit goal), 1.430 seconds (repeat visit goal)
1st meaningful paint: 2.133 seconds (first visit goal), 1.430 seconds (repeat visit goal)
1st meaningful interaction: 2.368 seconds (first visit goal), 1.583 seconds (repeat visit goal)

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

Priority
1st byte Medium
1st render High
1st meaningful paint Low
1st meaningful interaction Low

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

Tuesday, August 21st, 2018

The Web I Want - DEV Community 👩‍💻👨‍💻

Scores of people who just want to deliver their content and have it look vaguely nice are convinced you need every web technology under the sun to deliver text.

This is very “get off my lawn”, but I can relate.

I made my first website about 20 years ago and it delivered as much content as most websites today. It was more accessible, ran faster, and was easier to develop than 90% of the stuff you’ll read on here.

20 years later I browse the Internet with a few tabs open and I have somehow downloaded many megabytes of data, my laptop is on fire and yet in terms of actual content delivery nothing has really changed.

Monday, August 13th, 2018

The power of progressive enhancement – No Divide – Medium

The beauty of this approach is that the site doesn’t ever appear broken and the user won’t even be aware that they are getting the ‘default’ experience. With progressive enhancement, every user has their own experience of the site, rather than an experience that the designers and developers demand of them.

A case study in applying progressive enhancement to all aspects of a site.

Progressive enhancement isn’t necessarily more work and it certainly isn’t a non-JavaScript fallback, it’s a change in how we think about our projects. A complete mindset change is required here and it starts by remembering that you don’t build websites for yourself, you build them for others.

Friday, July 20th, 2018

Industry Fatigue by Jordan Moore

There are of course things worth your time and deep consideration, and there are distractions. Profound new thinking and movements within our industry - the kind that fundamentally shifts the way we work in a positive new direction - are worth your time and attention. Other things are distractions. I put new industry gossip, frameworks, software and tools firmly in the distractions category. This is the sort of content that exists in the padding between big movements. It’s the kind of stuff that doesn’t break new ground and it doesn’t make or break your ability to do your job.

Tuesday, July 10th, 2018

StaticGen | Top Open Source Static Site Generators

There are a lot of static site generators out there!

Sunday, June 3rd, 2018

AMPstinction

I’ve come to believe that the goal of any good framework should be to make itself unnecessary.

Brian said it explicitly of his PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

That makes total sense, especially if your code is a polyfill—those solutions are temporary by design. Autoprefixer is another good example of a piece of code that becomes less and less necessary over time.

But I think it’s equally true of any successful framework or library. If the framework becomes popular enough, it will inevitably end up influencing the standards process, thereby becoming dispensable.

jQuery is the classic example of this. There’s very little reason to use jQuery these days because you can accomplish so much with browser-native JavaScript. But the reason why you can accomplish so much without jQuery is because of jQuery. I don’t think we would have querySelector without jQuery. The library proved the need for the feature. The same is true for a whole load of DOM scripting features.
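A small illustration of that influence; what once needed the library is now a couple of lines of browser-native DOM scripting (the selector here is just an example):

    // With jQuery:
    // $('.nav a').addClass('active');

    // With browser-native JavaScript:
    document.querySelectorAll('.nav a').forEach((link) => {
      link.classList.add('active');
    });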

The same process is almost certain to occur with React—it’s a good bet there will be a standardised equivalent to the virtual DOM at some point.

When Google first unveiled AMP, its intentions weren’t clear to me. I hoped that it existed purely to make itself redundant:

As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

Alas, as time has passed, that hope shows no signs of being fulfilled. If anything, I’ve noticed publishers using the existence of their AMP pages as a justification for just letting their “regular” pages put on weight.

Worse yet, the messaging from Google around AMP has shifted. Instead of pitching it as a format for creating parallel versions of your web pages, they’re now also extolling the virtues of having your AMP pages be the only version you publish:

In fact, AMP’s evolution has made it a viable solution to build entire websites.

On an episode of the Dev Mode podcast a while back, AMP was a hotly-debated topic. But even those defending AMP were doing so on the understanding that it was more a proof-of-concept than a long-term solution (and also that AMP is just for news stories—something else that Google are keen to change).

But now it’s clear that the Google AMP Project is being marketed more like a framework for the future: a collection of web components that prioritise performance …which is kind of odd, because that’s also what Google’s Polymer project is. The difference being that pages made with Polymer don’t get preferential treatment in Google’s search results. I can’t help but wonder how the Polymer team feels about AMP’s gradual pivot onto their territory.

If the AMP project existed in order to create a web where AMP was no longer needed, I think I could get behind it. But the more it’s positioned as the only viable solution to solving performance, the more uncomfortable I am with it.

Which, by the way, brings me to one of the most pernicious ideas around Google AMP—positioning anyone opposed to it as not caring about web performance. Nothing could be further from the truth. It’s precisely because performance on the web is so important that it deserves a long-term solution, co-created by all of us: not some commandments delivered to us from on high by one organisation, enforced by preferential treatment by that organisation’s monopoly in search.

It’s the classic logical fallacy:

  1. Performance! Something must be done!
  2. AMP is something.
  3. Now something has been done.

By marketing itself as the only viable solution to the web performance problem, I think the AMP project is doing itself a great disservice. If it positioned itself as an example to be emulated, I would welcome it.

I wish that AMP were being marketed more like a temporary polyfill. And as with any polyfill, I look forward to the day when AMP is no longer necessary.

I want AMP to become extinct. I genuinely think that the Google AMP team should share that wish.

The React is “just” JavaScript Myth - daverupert.com

In my experience, there’s no casual mode within React. You need to be all-in, keeping up with the ecosystem, or else your knowledge evaporates.

I think Dave is right. At this point, it’s possible to be a React developer exclusively.

React is an ecosystem. I feel like it’s a disservice to anyone trying to learn to diminish all that React entails. React shows up on the scene with Babel, Webpack, and JSX (which each have their own learning curve) then quickly branches out into technologies like Redux, React-Router, Immutable.js, Axios, Jest, Next.js, Create-React-App, GraphQL, and whatever weird plugin you need for your app.

And, as Jake points out, you either need to go all in or not at all—you can’t really incrementally add Reactness to an existing project.

Thursday, May 31st, 2018

The Cult of the Complex · An A List Apart Article

I know that Jeffrey and I sound like old men yelling at kids to get off the lawn when we bemoan the fetishisation of complex tools and build processes, but Jeffrey gets to the heart of it here: it’s about appropriateness.

As a designer who used to love creating web experiences in code, I am baffled and numbed by the growing preference for complexity over simplicity. Complexity is good for convincing people they could not possibly do your job. Simplicity is good for everything else.

And not to sound like a broken record, but once again I’m reminded of the rule of least power.

Monday, May 7th, 2018

Design systems

Talking about scaling design can get very confusing very quickly. There are a bunch of terms that get thrown around: design systems, pattern libraries, style guides, and components.

The generally-accepted definition of a design system is that it’s the outer circle—it encompasses pattern libraries, style guides, and any other artefacts. But there’s something more. Just because you have a collection of design patterns doesn’t mean you have a design system. A system is a framework. It’s a rulebook. It’s what tells you how those patterns work together.

This is something that Cennydd mentioned recently:

Here’s my thing with the modularisation trend in design: where’s the gestalt?

In my mind, the design system is the gestalt. But Cennydd is absolutely right to express concern—I think a lot of people are collecting patterns and calling the resulting collection a design system. No. That’s a pattern library. You still need to have a framework for how to use those patterns.

I understand the urge to fixate on patterns. They’re small enough to be manageable, and they’re tangible—here’s a carousel; here’s a date-picker. But a design system is big and intangible.

Games are great examples of design systems. They’re frameworks. A game is a collection of rules and constraints. You can document those rules and constraints, but you can’t point to something and say, “That is football” or “That is chess” or “That is poker.”

Even though they consist entirely of rules and constraints, football, chess, and poker still produce an almost infinite possibility space. That’s quite overwhelming. So it’s easier for us to grasp instances of football, chess, and poker. We can point to a particular occurrence and say, “That is a game of football”, or “That is a chess match.”

But if you tried to figure out the rules of football, chess, or poker just from watching one particular instance of the game, you’d have your work cut out for you. It’s not impossible, but it is challenging.

Likewise, it’s not very useful to create a library of patterns without providing any framework for using those patterns.

I would go so far as to say that the actual code for the patterns is the least important part of a design system (or, certainly, it’s the part that should be most malleable and open to change). It’s more important that the patterns have been identified, named, described, and crucially, accompanied by some kind of guidance on usage.

I could easily imagine using a tool like Fractal to create a library of text snippets with no actual code. Those pieces of text—which provide information on where and when to use a pattern—could be more valuable than providing a snippet of code without any context.

One of the very first large-scale pattern libraries I can remember seeing on the web was Yahoo’s Design Pattern Library. Each pattern outlined

  1. the problem being solved;
  2. when to use this pattern;
  3. when not to use this pattern.

Only then, almost incidentally, did they link off to the code for that pattern. But it was entirely possible to use the system of patterns without ever using that code. The code was just one instance of the pattern. The important part was the framework that helped you understand when and where it was appropriate to use that pattern.
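To make that concrete, here’s a hypothetical sketch of a pattern documented as data, where the guidance comes first and the code reference is almost incidental:

    // A hypothetical pattern entry: the guidance is the point,
    // the link to the code is the least important field.
    const carouselPattern = {
      name: 'Carousel',
      problem: 'Many items compete for a limited amount of space.',
      whenToUse: 'A small set of visually rich items of roughly equal priority.',
      whenNotToUse: 'Content that people need to compare or scan quickly.',
      code: '/patterns/carousel.html'
    };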

I think we lose sight of the real value of a design system when we focus too much on the components. The components are the trees. The design system is the forest. As Paul asked:

What methodologies might we uncover if we were to focus more on the relationships between components, rather than the components themselves?

Sunday, April 15th, 2018

A Framework Author’s Case Against Frameworks - YouTube

A terrific talk by Adrian Holovaty. I really hope front-end developers take its message to heart.


Saturday, February 24th, 2018

Complexity | CSS-Tricks

We talk about complexity, but it’s all opt-in. A wonderfully useful (and simple) website of a decade ago remains wonderfully useful and simple. Fortunately for all involved, the web, thus far, has taken compatibility quite seriously. Old websites don’t just break.

Sunday, February 11th, 2018

Seva Zaikov - Single Page Application Is Not a Silver Bullet

Harsh (but fair) assessment of the performance costs of doing everything on the client side.

Wednesday, January 31st, 2018

Stimulus 1.0: A modest JavaScript framework for the HTML you already have

All our applications have server-side rendered HTML at their core, then add sprinkles of JavaScript to make them sparkle.

Yup!—I’m definitely liking the sound of this Stimulus JavaScript framework.

It’s designed to read as a progressive enhancement when you look at the HTML it’s addressing.

Wednesday, January 24th, 2018

React, Redux and JavaScript Architecture

I still haven’t used React (I know, I know) but this looks like a nice explanation of React and Redux.

Tuesday, January 9th, 2018

The Origin of Stimulus

I really like the look of this markup-driven JavaScript library from the same people who brought us the pjax library Turbolinks.

The philosophy behind these tools matches my own philosophy (which I think is one of the most important factors in choosing a tool that works for you, not against you).

Monday, January 1st, 2018

All That Glisters ◆ 24 ways

I thought this post from Drew was a great way to wrap up another excellent crop from 24 Ways.

There are so many new tools, frameworks, techniques, styles and libraries to learn. You know what? You don’t have to use them.

Chris likes it too.

Tuesday, October 31st, 2017

Airplanes and Ashtrays – CSS Wizardry

Whenever you plan or design a system, you need to build in your own ashtrays—a codified way of dealing with the inevitability of somebody doing the wrong thing. Think of what your ideal scenario is—how do you want people to use whatever you’re building—and then try to identify any aspects of it which may be overly opinionated, prescriptive, or restrictive. Then try to preempt how people might try to avoid or circumvent these rules, and work back from there until you can design a safe middle-ground into your framework that can accept these deviations in the safest, least destructive way possible.

Netflix functions without client-side React, and it’s a good thing - JakeArchibald.com

A great bucketload of common sense from Jake:

Rather than copying bad examples from the history of native apps, where everything is delivered in one big lump, we should be doing a little with a little, then getting a little more and doing a little more, repeating until complete. Think about the things users are going to do when they first arrive, and deliver that. Especially consider those most-likely to arrive with empty caches.

And here’s a good way of thinking about that:

I’m a fan of progressive enhancement as it puts you in this mindset. Continually do as much as you can with what you’ve got.

All too often, saying “use the right tool for the job” is interpreted as “don’t use that tool!” but as Jake reminds us, the sign of a really good tool is its ability to adapt instead of demanding rigid usage:

Netflix uses React on the client and server, but they identified that the client-side portion wasn’t needed for the first interaction, so they leaned on what the browser can already do, and deferred client-side React. The story isn’t that they’re abandoning React, it’s that they’re able to defer it on the client until it’s needed. React folks should be championing this as a feature.
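That kind of deferral doesn’t need anything framework-specific. As a rough sketch, the browser’s own dynamic import() can hold back a client-side bundle until someone actually needs it (the module path and its enhance() function are placeholders):

    // A rough sketch: the server-rendered form works on its own;
    // the client-side enhancements are only fetched on first interaction.
    const form = document.querySelector('form.search');

    if (form) {
      form.addEventListener('focusin', () => {
        import('/js/search-enhancements.js')
          .then((module) => module.enhance(form))
          .catch(() => { /* the server-rendered form still works */ });
      }, { once: true });
    }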