Tags: performance

Monday, January 18th, 2021

React Bias

Dev perception.

The juxtaposition of The HTTP Archive’s analysis and The State of JS 2020 Survey results suggests that a disproportionately small—yet exceedingly vocal—minority of white male developers advocate strongly for React, and by extension, a development experience that favors thick client/thin server architectures which are given to poor performance in adverse conditions. Such conditions are less likely to be experienced by white male developers themselves, therefore reaffirming and reflecting their own biases in their work.

Wednesday, December 23rd, 2020

HTML Over The Wire | Hotwire

This is great! The folks at Basecamp are releasing the front-end frameworks they use to build Hey. There’s Turbo—the successor to Turbolinks:

It offers a simpler alternative to the prevailing client-side frameworks which put all the logic in the front-end and confine the server side of your app to being little more than a JSON API.

With Turbo, you let the server deliver HTML directly, which means all the logic for checking permissions, interacting directly with your domain model, and everything else that goes into programming an application can happen more or less exclusively within your favorite programming language. You’re no longer mirroring logic on both sides of a JSON divide. All the logic lives on the server, and the browser deals just with the final HTML.

Yes, this is basically Hijax (which is itself simply a name for progressive enhancement applied to Ajax) and I’m totally fine with that. I don’t care what it’s called when the end result is faster, more resilient websites.

Compare and contrast the simplicity of the Hotwire/Turbo approach to the knots that React is tying itself up in to try to get the same performance benefits.

Friday, December 18th, 2020

Conditional JavaScript - JavaScript - Dev Tips

This is a good round-up of APIs you can use to decide if and how much JavaScript to load. I might look into using storage.estimate() in service workers to figure out how much gets pre-cached.
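A rough sketch of what that might look like inside a service worker (the cache name, the size threshold, and the list of files are all made up for illustration):

navigator.storage.estimate().then(({usage, quota}) => {
    // Only pre-cache the nice-to-have extras if there's plenty of headroom left.
    if (quota - usage > 50 * 1024 * 1024) {
        caches.open('precache-v1').then((cache) =>
            cache.addAll(['/path/to/styles.css', '/path/to/scripts.js'])
        );
    }
});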

Tuesday, December 15th, 2020

Cascading Style Sheets

There are three ways—that I know of—to associate styles with markup.

External CSS

This is probably the most common. Using a link element with a rel value of “stylesheet”, you point to a URL using the href attribute. That URL is a style sheet that is applied to the current document (the relationship of the linked resource is that it is a “stylesheet” for the current document).

<link rel="stylesheet" href="/path/to/styles.css">

In theory you could associate a style sheet with a document using an HTTP header, but I don’t think many browsers support this in practice.
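For the record, the header in question looks something like this:

Link: </path/to/styles.css>; rel="stylesheet"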

You can also pull in external style sheets using the @import declaration in CSS itself, as long as the @import rule is declared at the start, before any other styles.

@import url('/path/to/more-styles.css');

When you use link rel="stylesheet" to apply styles, it’s a blocking request: the browser will fetch the style sheet before rendering the HTML. It needs to know how the HTML elements will be painted to the screen so there’s no point rendering the HTML until the CSS is parsed.

Embedded CSS

You can also place CSS rules inside a style element directly in the document. This is usually in the head of the document.

<style>
element {
    property: value;
}
</style>

When you embed CSS in the head of a document like this, there is no network request like there would be with external style sheets so there’s no render-blocking behaviour.

You can put any CSS inside the style element, which means that you could use embedded CSS to load external CSS using an @import statement (as long as that @import statement appears right at the start).

<style>
@import url('/path/to/more-styles.css');
element {
    property: value;
}
</style>

But then you’re back to having a network request.

Inline CSS

Using the style attribute you can apply CSS rules directly to an element. This is a universal attribute. It can be used on any HTML element. That doesn’t necessarily mean that the styles will work, but your markup is never invalidated by the presence of the style attribute.

<element style="property: value">
</element>

Whereas external CSS and embedded CSS don’t have any effect on specificity, inline styles are positively radioactive with specificity. Any styles applied this way are almost certain to over-ride any external or embedded styles.
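To put it another way, here’s a made-up example where the inline style wins out over an ID selector:

<style>
#intro {
    color: green;
}
</style>
<p id="intro" style="color: red">This text will be red.</p>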

You can also apply styles using JavaScript and the Document Object Model.

element.style.property = 'value';

Using the DOM style object this way is equivalent to inline styles. The radioactive specificity applies here too.

Style declarations specified in external style sheets or embedded CSS follow the rules of the cascade. Values can be over-ridden depending on the order they appear in. Combined with the separate-but-related rules for specificity, this can be very powerful. But if you don’t understand how the cascade and specificity work then the results can be unexpected, leading to frustration. In that situation, inline styles look very appealing—there’s no cascade and everything has equal specificity. But using inline styles means foregoing a lot of power—you’d be ditching the C in CSS.

A common technique for web performance is to favour embedded CSS over external CSS in order to avoid the extra network request (at least for the first visit—there are clever techniques for caching an external style sheet once the HTML has already loaded). This is commonly referred to as inlining your CSS. But really it should be called embedding your CSS.
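One of those caching techniques, sketched roughly: request the external style sheet in a way that doesn’t block rendering and flip it on once it arrives (the onload handler is doing the work here).

<link rel="stylesheet" href="/path/to/styles.css" media="print" onload="this.media='all'">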

This language mix-up is not a hill I’m going to die on (that hill would be referring to blog posts as blogs) but I thought it was worth pointing out.

Monday, December 14th, 2020

Web Almanac 2020

I spent most of the weekend reading through this and I’ve still barely scratched the surface—a lot of work has gone into the analyses and write-ups!

The sections on accessibility and performance get grimmer each year, but the raw numbers on framework adoption are refreshingly perspective-setting.

Monday, November 30th, 2020

Clean advertising

Imagine if you were told that fossil fuels were the only way of extracting energy. It would be an absurd claim. Not only are other energy sources available—solar, wind, geothermal, nuclear—fossil fuels aren’t even the most efficient source of energy. To say that you can’t have energy without burning fossil fuels would be pitifully incorrect.

And yet when it comes to online advertising, we seem to have meekly accepted that you can’t have effective advertising without invasive tracking. But nothing could be further from the truth. Invasive tracking is to online advertising as fossil fuels are to energy production—an outmoded inefficient means of getting substandard results.

Before the onslaught of third party cookies and scripts, online advertising was contextual. If I searched for property insurance, I was likely to see an advertisement for property insurance. If I was reading an article about pet food, I was likely to be served an advertisement for pet food.

Simply put, contextual advertising ensured that the advertising that accompanied content could be relevant and timely. There was no big mystery about it: advertisers just needed to know what the content was about and they could serve up the appropriate advertisement. Nice and straightforward.

Too straightforward.

What if, instead of matching the advertisement to the content, we could match the advertisement to the person? Regardless of what they were searching for or reading, they’d be served advertisements that were relevant to them not just in that moment, but relevant to their lifestyles, thoughts and beliefs? Of course that would require building up dossiers of information about each person so that their profiles could be targeted and constantly updated. That’s where cross-site tracking comes in, with third-party cookies and scripts.

This is behavioural advertising. It has all but eliminated contextual advertising. It has become so pervasive that online advertising and behavioural advertising have become synonymous. Contextual advertising is seen as laughably primitive compared with the clairvoyant powers of behavioural advertising.

But there’s a problem with behavioural advertising. A big problem.

It doesn’t work.

First of all, it relies on mind-reading powers by the advertising brokers—Facebook, Google, and the other middlemen of ad tech. For all the apocryphal folk tales of spooky second-guessing in online advertising, it mostly remains rubbish.

Forget privacy: you’re terrible at targeting anyway:

None of this works. They are still trying to sell me car insurance for my subway ride.

Have you actually paid attention to what advertisements you’re served? Maciej did:

I saw a lot of ads for GEICO, a brand of car insurance that I already own.

I saw multiple ads for Red Lobster, a seafood restaurant chain in America. Red Lobster doesn’t have any branches in San Francisco, where I live.

Finally, I saw a ton of ads for Zipcar, which is a car sharing service. These really pissed me off, not because I have a problem with Zipcar, but because they showed me the algorithm wasn’t even trying. It’s one thing to get the targeting wrong, but the ad engine can’t even decide if I have a car or not! You just showed me five ads for car insurance.

And yet in the twisted logic of ad tech, all of this would be seen as evidence that they need to gather even more data with even more invasive tracking and surveillance.

It turns out that bizarre logic is at the very heart of behavioural advertising. I highly recommend reading the in-depth report from The Correspondent called The new dot com bubble is here: it’s called online advertising:

It’s about a market of a quarter of a trillion dollars governed by irrationality.

The benchmarks that advertising companies use – intended to measure the number of clicks, sales and downloads that occur after an ad is viewed – are fundamentally misleading. None of these benchmarks distinguish between the selection effect (clicks, purchases and downloads that are happening anyway) and the advertising effect (clicks, purchases and downloads that would not have happened without ads).

Suppose someone told you that they keep tigers out of their garden by turning on their kitchen light every evening. You might think their logic is flawed, but they’ve been turning on the kitchen light every evening for years and there hasn’t been a single tiger in the garden the whole time. That’s the logic used by ad tech companies to justify trackers.

Tracker-driven behavioural advertising is bad for users. The advertisements are irrelevant most of the time, and on the few occasions where the advertising hits the mark, it just feels creepy.

Tracker-driven behavioural advertising is bad for advertisers. They spend their hard-earned money on invasive ad tech that results in no more sales or brand recognition than if they had relied on good ol’ contextual advertising.

Tracker-driven behavioural advertising is very bad for the web. Megabytes of third-party JavaScript are injected at exactly the wrong moment to make for the worst possible performance. And if that doesn’t ruin the user experience enough, there are still invasive overlays and consent forms to click through (which, ironically, gets people mad at the legislation—like GDPR—instead of the underlying reason for these annoying overlays: unnecessary surveillance and tracking by the site you’re visiting).

Tracker-driven behavioural advertising is good for the middlemen doing the tracking. Facebook and Google are two of the biggest players here. But that doesn’t mean that their business models need to be permanently anchored to surveillance. The very monopolies that make them kings of behavioural advertising—the biggest social network and the biggest search engine—would also make them titans of contextual advertising. They could pivot from an invasive behavioural model of advertising to a privacy-respecting contextual advertising model.

The incumbents will almost certainly resist changing something so fundamental. It would be like expecting an energy company to change their focus from fossil fuels to renewables. It won’t happen quickly. But I think that it may eventually happen …if we demand it.

In the meantime, we can all play our part. Just as we can do our bit for the environment at an individual level by sorting our recycling and making green choices in our day to day lives, we can all do our bit for the web too.

The least we can do is block third-party cookies. Some browsers are now doing this by default. That’s good.

Blocking third-party JavaScript is a bit trickier. That requires a browser extension. Most of these extensions to block third-party tracking are called ad blockers. That’s a shame. The issue is not with advertising. The issue is with tracking.

Alas, because this software is labelled under ad blocking, it has led to the ludicrous situation of an ethical argument being made to allow surveillance and tracking! It goes like this: websites need advertising to survive; if you block the ads, then you are denying these sites revenue. That argument would make sense if we were talking about contextual advertising. But it makes no sense when it comes to behavioural advertising …unless you genuinely believe that online advertising has to be behavioural, which means that online advertising has to track you to be effective. Such a belief would be completely wrong. But that doesn’t stop it being widely held.

To argue that there is a moral argument against blocking trackers is ridiculous. If anything, there’s a moral argument to be made for installing anti-tracking software for yourself, your friends, and your family. Otherwise we are collectively giving up our privacy for a business model that doesn’t even work.

It’s a shame that advertisers will lose out if tracking-blocking software prevents their ads from loading. But that’s only going to happen in the case of behavioural advertising. Contextual advertising won’t be blocked. Contextual advertising is also more lightweight than behavioural advertising. Contextual advertising is far less creepy than behavioural advertising. And crucially, contextual advertising works.

That shouldn’t be a controversial claim: the idea that people would be interested in adverts that are related to the content they’re currently looking at. The greatest trick the ad tech industry has pulled is convincing the world that contextual relevance is somehow less effective than some secret algorithm fed with all our data that’s supposed to be able to practically read our minds and know us better than we know ourselves.

Y’know, if this mind-control ray really could give me timely relevant adverts, I might possibly consider paying the price with my privacy. But as it is, YouTube still hasn’t figured out that I’m not interested in Top Gear or football.

The next time someone is talking about the necessity of advertising on the web as a business model, ask for details. Do they mean contextual or behavioural advertising? They’ll probably laugh at you and say that behavioural advertising is the only thing that works. They’ll be wrong.

I know it’s hard to imagine a future without tracker-driven behavioural advertising. But there are no good business reasons for it to continue. It was once hard to imagine a future without oil or coal. But through collective action, legislation, and smart business decisions, we can make a cleaner future.

Monday, November 16th, 2020

The Core Web Vitals hype train

Goodhart’s Law applied to Google’s core web vitals:

If developers start to focus solely on Core Web Vitals because it is important for SEO, then some folks will undoubtedly try to game the system.

Personally, my beef with core web vitals is that they introduce even more unnecessary initialisms (see, for example, Harry’s recent post where he uses CWV metrics like LCP, FID, and CLS—alongside TTFB and SI—to look at PLPs, PDPs, and SRPs. I mean, WTF?).

Friday, November 13th, 2020

Chris Ferdinandi: The Lean Web – WordPress.tv

This video of Chris’s presentation is well worth watching:

The web in 2020 is a bloated and over-engineered mess! Many modern web development “best practices” are making the web worse. This thought-provoking talk shares ideas on how to fix the problem as it explores an alternate set of best practices.

Thursday, November 12th, 2020

Official Google Webmaster Central Blog: Timing for bringing page experience to Google Search

Good news: as of May 2021, page speed (or core web vitals, if you must) will be a ranking factor in Google Search.

Even better news: at the same time, Google AMP will lose its unfairly privileged position in the top stories carousel. Hopefully this marks the beginning of the end for Google’s failed experiment in forcing publishers to use their tech.

Wednesday, October 28th, 2020

Portals and giant carousels

I posted something recently that I think might be categorised as a “shitpost”:

Most single page apps are just giant carousels.

Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:

I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.

For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”

If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.

But I’ve never actually seen that happen in a software business.

A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain, like twitter.com. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.

It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.

The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.

Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:

  1. We need to create a single page app because our assets, like our JavaScript dependencies, are so large.
  2. Why are the JavaScript dependencies so large?
  3. We need all that JavaScript to create the single page app functionality.

To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a white blank page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding” which eliminates the herky-jerkiness.

So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.

Except…

If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.

This is the problem that Jake set out to address in his proposal for navigation transitions a few years back:

Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.

I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The problems that JavaScript frameworks are solving today should be seen as the R&D departments for web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)

I linked to Jake’s excellent proposal in my shitpost saying:

bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers

But then I added—and I almost didn’t—this:

(not portals)

Now you might be asking yourself what Paul said out loud:

Excuse my ignorance but… WTF are portals!?

I replied with a link to the portals proposal and what I thought was an example use case:

Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).

That was based on my reading of the proposal:

…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.

It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.

But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sound like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).

Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.

After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.

But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.

Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementors than authors, but in its current form, it doesn’t seem to address the use case of single page apps.

Kenji said:

we haven’t seen interest from SPA folks in portals so far.

I’m not surprised! He goes on:

Maybe, they are happy / benefits aren’t clear yet.

From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.

Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.

I guess the web I want includes giant carousels.

Sunday, October 18th, 2020

People Problems | CSS-Tricks

I’d maybe simplify this people problem a bit: the codebase is easy to change, but the incentives within a company are not. And yet it’s the incentives that drive what kind of code gets written — what is acceptable, what needs to get fixed, how people work together. In short, we cannot be expected to fix the code without fixing the organization, too.

Thursday, October 1st, 2020

Downloading from Google Fonts

If you’re using web fonts, there are good performance (and privacy) reasons for hosting your own font files. And fortunately, Google Fonts gives you that option. There’s a “Download family” button on every specimen page.

But if you go ahead and download a font family from Google Fonts, you’ll notice something a bit odd. The .zip file only contains .ttf files. You can serve those on the web, but it’s far from the best choice. Woff2 is far leaner in file size.

This means you need to manually convert the downloaded .ttf files into .woff or .woff2 files using something like Font Squirrel’s generator. That’s fine, but I’m curious as to why this step is necessary. Why doesn’t Google Fonts provide .woff or .woff2 files in the downloaded folder? After all, if you choose to use Google Fonts as a third-party hosting service for your fonts, it most definitely serves up the appropriate file formats.

I thought maybe it was something to do with the licensing. Maybe some licenses only allow for unmodified truetype files to be distributed? But I’ve looked at fonts with different licenses—some have Apache 2 licensing, some have Open Font licensing—and they’re all quite permissive and definitely allow for modification.

Maybe the thinking is that, if you’re hosting your own font files, then you know what you’re doing and you should be able to do your own file conversion and subsetting. But I’ve come across more than one website in the wild serving up .ttf files. And who can blame them? They want to host their own font files. They downloaded those files from Google Fonts. Why shouldn’t they assume that they’re good to go?

It’s all a bit strange. If anyone knows why Google Fonts only provides .ttf files for download, please let me know. In a pinch, I will also accept rampant speculation.

Trys also pointed out some weird default behaviour if you do let Google Fonts do the hosting for you. Specifically if it’s a variable font. Let’s say it’s a font with weight as a variable axis. You specify in advance which weights you’ll be using, and then it generates separate font files to serve for each different weight.

Doesn’t that defeat the whole point of using a variable font? I mean, I can see how it could result in smaller file sizes if you’re just using one or two weights, but isn’t half the fun of having a weight axis that you can go crazy with as many weights as you want and it’s all still one font file?
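For what it’s worth, if you host the variable font yourself, it really is one file and one rule, something like this (the font name and file path are made up):

@font-face {
    font-family: 'Example Variable';
    src: url('/fonts/example-variable.woff2') format('woff2-variations');
    font-weight: 100 900;
}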

Like I said, it’s all very strange.

AddyOsmani.com - Preload late-discovered Hero images faster

Did you know there’s an imagesrcset attribute you can put on link rel="preload" as="image" (along with an imagesizes attribute)?

I didn’t. (Until Amber pointed this out.)
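For reference, it looks something like this (the image paths and sizes are made up):

<link rel="preload" as="image" href="/images/hero-800.jpg" imagesrcset="/images/hero-400.jpg 400w, /images/hero-800.jpg 800w, /images/hero-1600.jpg 1600w" imagesizes="100vw">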

Thursday, September 24th, 2020

Performance and people

I was helping a client with a bit of a performance audit this week. I really, really enjoy this work. It’s such a nice opportunity to get my hands in the soil of a website, so to speak, and suggest changes that will have a measurable effect on the user’s experience.

Not only is web performance a user experience issue, it may well be the user experience issue. Page speed has a proven demonstrable direct effect on user experience (and revenue and customer satisfaction and whatever other metrics you’re using).

It struck me that there’s a continuum of performance challenges. On one end of the continuum, you’ve got technical issues. These can be solved with technical solutions. On the other end of the continuum, you’ve got human issues. These can be solved with discussions, agreement, empathy, and conversations (often dreaded or awkward).

I think that, as developers, we tend to gravitate towards the technical issues. That’s our safe space. But I suspect that bigger gains can be reaped by tackling the uncomfortable human issues.

This week, for example, I uncovered three performance issues. One was definitely technical. One was definitely human. One was halfway between.

The technical issue was with web fonts. It’s a lot of fun to dive into this aspect of web performance because quite often there’s some low-hanging fruit: a relatively simple technical fix that will boost the performance (or perceived performance) of a website. That might be through resource hints (using link rel=“preload” in the HTML) or adjusting the font loading (using font-display in the CSS) or even nerdier stuff like subsetting.

In this case, the issue was with the file format of the font itself. By switching to woff2, there were significant file size savings. And the great thing is that @font-face rules allow you to specify multiple file formats so you can still support older browsers that can’t handle woff2. A win all ‘round!
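In case it’s useful, the rule ends up looking something like this (the font name and file paths are made up):

@font-face {
    font-family: 'Example Sans';
    src: url('/fonts/example-sans.woff2') format('woff2'),
         url('/fonts/example-sans.woff') format('woff');
    font-display: swap;
}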

The performance issue that was right in the middle of the technical/human continuum was with images. At first glance it looked like a similar issue to the fonts. Some images were being served in the wrong formats. When I say “wrong”, I guess I mean inappropriate. A photographic image, for example, is probably going to be best served as a JPG rather than a PNG.

But unlike the fonts, the images weren’t in the direct control of the developers. These images were coming from a Content Management System. And while there’s a certain amount of processing you can do on the server, a human still makes the decision about what file format they’re uploading.

I’ve seen this happen at Clearleft. We launched an event site with lean performant code, but then someone uploaded an image that’s megabytes in size. The solution in that case wasn’t technical. We realised there was a knowledge gap around image file formats—which, let’s face it, is kind of a techy topic that most normal people shouldn’t be expected to know.

But it was extremely gratifying to see that people were genuinely interested in knowing a bit more about choosing the right format for the right image. I was able to provide a few rules of thumb and point to free software for converting images. It empowered those people to feel more confident using the Content Management System.

It was a similar situation with the client site I was looking at this week. Nobody is uploading oversized images in order to deliberately make the site slower. They probably don’t realise the difference that image formats can make. By having a discussion and giving them some pointers, they’ll have more knowledge and the site will be faster. Another win all ‘round!

At the other end of the continuum was an issue that wasn’t technical. From a technical point of view, there was just one teeny weeny little script. But that little script is Google Tag Manager which then calls many, many other scripts that are not so teeny weeny. Third party scripts …the bane of web performance!

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

Remember when I did a countdown of the top four web performance challenges? At the number one spot is other people’s JavaScript.

Now one technical solution would be to remove the Google Tag Manager script. But that’s probably not very practical—you’ll probably just piss off some other department. That said, if you can’t find out which department was responsible for adding the Google Tag Manager script in the first place, it might well be an option to remove it and then wait and see who complains. If no one notices it’s gone, job done!

More realistically, there’s someone who’s added that Google Tag Manager script for their own valid reasons. You’ll need to talk to them and understand their needs.

Again, as with images uploaded in a Content Management System, they may not be aware of the performance problems caused by third-party scripts. You could try throwing numbers at them, but I think you get better results by telling the story of performance.

Use tools like Request Map Generator to help them visualise the impact that third-party scripts are having. Talk to them. More importantly, listen to them. Find out why those scripts are being requested. What are the outcomes they’re working towards? Can you offer an alternative way of providing the data they need?

I think many of us developers are intimidated or apprehensive about approaching people to have those conversations. But it’s necessary. And in its own way, it can be as rewarding as tinkering with code. If the end result is a faster website, then the work is definitely worth doing—whether it’s technical work or people work.

Personally, I just really enjoy working on anything that will end up improving a website’s performance, and by extension, the user experience. If you fancy working with me on your site, you should get in touch with Clearleft.

Wednesday, September 9th, 2020

AVIF has landed - JakeArchibald.com

There’s a new image format on the browser block and it’s very performant indeed. Jake has all the details you didn’t ask for.

Monday, September 7th, 2020

Kokorobot — leanerweb

The problem is that most websites will adapt to the ever faster connections, which makes them gradually inaccessible for people with slower connections. Today, most websites are impossible to download with a dial-up connection, because they have become too corpulent.

This speaks to me:

Everything we do to make it harder to create a website or edit a web page, and harder to learn to code by viewing source, promotes that consumerist vision of the web.

Pretending that one needs a team of professionals to put simple articles online will become a self-fulfilling prophecy. Overcomplicating the web means lifting up the ladder that used to make it possible for people to teach themselves and surprise everyone with unexpected new ideas.

There’s a list of links at the end of this piece to help you reach this goal:

It is vital that the web stay participatory. That means not just making sites small enough so the whole world can visit them, but small enough so that people can learn to build their own, by example. Bloat makes the web inaccessible.

Friday, August 28th, 2020

Why you should hire a frontend developer - Technology in government

This is a really good description of the role of a front-end developer.

That’s front end, not full stack.

Thursday, August 20th, 2020

Service Workers | Go Make Things

Chris Ferdinandi blogs every day about the power of vanilla JavaScript. For over a week now, his daily posts have been about service workers. The cumulative result is this excellent collection of resources.

Wednesday, August 19th, 2020

radEventListener: a Tale of Client-side Framework Performance | CSS-Tricks

Excellent research by Jeremy Wagner comparing the performance impact of React, Preact, and vanilla JavaScript. The results are simultaneously shocking and entirely unsurprising.