Tags: tool


Sunday, January 3rd, 2021

My stack will outlive yours

My stack requires no maintenance, has perfect Lighthouse scores, will never have any security vulnerability, is based on open standards, is portable, has an instant dev loop, has no build step and… will outlive any other stack.

Wednesday, December 16th, 2020

npm ruin dev

This was originally published on CSS Tricks in December 2020 as part of a year-end round-up of responses to the question “What is one thing you learned about building websites this year?”

In 2020, I rediscovered the enjoyment of building a website with plain ol’ HTML, CSS, and JavaScript—no transpilin’, no compilin’, no build tools other than my hands on the keyboard.

Seeing as my personal brand could be summed up as “so late to the game that the stadium has been demolished”, I decided to start a podcast in 2020. It’s the podcast of my agency, Clearleft, and it has been given the soaringly imaginative title of The Clearleft Podcast. I’m really pleased with how the first season turned out. I’m also really pleased with the website I put together for it.

The website isn’t very big, though it will grow with time. I had a think about what the build process for the site should be and after literally seconds of debate, I settled on a build process of none. Zero. Nada.

This turned out to be enormously liberating. It felt very hands-on to write the actual HTML and CSS that will be delivered to end users, without any mediation. I felt like I was getting my hands into the soil of the site.

CSS has evolved so much in recent years—with features like calc() and custom properties—that you don’t have to use preprocessors like Sass. And vanilla JavaScript is powerful, fully-featured, and works across browsers without any compiling.
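To give a flavour of what that looks like in plain CSS, here’s a minimal sketch (the selector, property names, and values are purely illustrative) of the kind of thing that used to call for Sass variables and build-time arithmetic:

:root {
  --spacing: 1.5rem;
}

.card {
  padding: var(--spacing);
  /* calc() does in the browser what Sass arithmetic did at build time */
  margin-bottom: calc(var(--spacing) * 2);
}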

Don’t get me wrong—I totally understand why complicated pipelines are necessary for complicated websites. If you’re part of a large team, you probably need to have processes in place so that everyone can contribute to the codebase in a consistent way. The more complex that codebase is, the more technology you need to help you automate your work and catch errors before they go live.

But that set-up isn’t appropriate for every website. And all those tools and processes that are supposed to save time sometimes end up wasting time further down the road. Ever had to revisit a project after, say, six or twelve months? Maybe you just want to make one little change to the CSS. But you can’t because a dependency is broken. So you try to update it. But it relies on a different version of Node. Before you know it, you’re Bryan Cranston changing a light bulb. You should be tweaking one line of CSS but instead you’re battling entropy.

Whenever I’m tackling a problem in front-end development, I like to apply the principle of least power: choose the least powerful language suitable for a given purpose. A classic example would be using a simple HTML button element instead of trying to recreate all the native functionality of a button using a div with lashings of ARIA and JavaScript. This year, I realized that this same principle applies to build tools too.
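To make that button example concrete, here’s a hypothetical snippet. The native element gets focus, keyboard activation, and semantics for free; the div needs ARIA and scripting just to approximate them:

<!-- the least powerful option: everything works out of the box -->
<button type="button">Save</button>

<!-- this needs role, tabindex, and key-event JavaScript to catch up -->
<div role="button" tabindex="0">Save</div>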

Instead of reaching for an all-singing, all-dancing toolchain by default, I’m going to start with a boring baseline. If and when that becomes too painful or unwieldy, then I’ll throw in a task runner. But every time I add a dependency, I’ll be limiting the lifespan of the project.

My new year’s resolution for 2021 will be to go on a diet. No more weighty node_modules folders; just crispy and delicious HTML, CSS, and JavaScript.

Wednesday, December 9th, 2020

npm ruin dev | CSS-Tricks

Chris is gathering end-of-year thoughts from people in response to the question:

What is one thing you learned about building websites this year?

Here’s mine.

In 2020, I rediscovered the enjoyment of building a website with plain ol’ HTML, CSS, and JavaScript — no transpilin’, no compilin’, no build tools other than my hands on the keyboard.

Thursday, November 12th, 2020

An opinionated guide to accessibility testing /// Iain Bean

  1. First impressions
  2. The Tab key
  3. Automated testing tools
  4. Screen reader testing
  5. Next steps

Thursday, October 29th, 2020

Phantom Analyzer

A simple, real-time website scanner to see what invisible creepers are lurking in the shadows and collecting information about you.

Looks good for adactio.com, thesession.org, and huffduffer.com …but clearleft.com is letting the side down.

Monday, October 19th, 2020

Boring by default

More on battling entropy:

Ever needed to change “just a small thing” on an old page you built years ago? I recently had the pleasure, and the simple task of changing some colors in CSS led to a whole day of me wrangling with old deprecated Grunt tasks and trying to get the build task running.

The solution:

That’s why starting with HTML, CSS and JavaScript without the need to ever compile anything on your local machine is a good idea. Changing some colors on such a page would indeed only take minutes and not a whole day.

I like this mindset:

Be boring by default and enhance on the way.

Friday, October 16th, 2020

The (extremely) loud minority - Andy Bell

Dev perception:

It’s understandable to think that JavaScript frameworks and their communities are eating the web because places like Twitter are awash with very loud voices from said communities.

Always remember that although a subset of the JavaScript community can be very loud, they represent a paltry portion of the web as a whole.

Tuesday, October 13th, 2020

Modern JS is amazing. Modern JS is trash. | Go Make Things

My name is Jeremy Keith and I endorse this message:

I love the modern JS platform (the stuff the browser does for you), and hate modern JS tooling.

Monday, October 12th, 2020

Cheating Entropy with Native Web Technologies - Jim Nielsen’s Weblog

This post really highlights one of the biggest issues with the convoluted build tools used for “modern” web development. If you return to a project after any length of time, this is what awaits:

I find entropy staring me back in the face: library updates, breaking API changes, refactored mental models, and possible downright obsolescence. An incredible amount of effort will be required to make a simple change, test it, and get it live.

Always bet on HTML:

Take a moment and think about this super power: if you write vanilla HTML, CSS, and JS, all you have to do is put that code in a web browser and it runs. Edit a file, refresh the page, you’ve got a feedback cycle. As soon as you introduce tooling, as soon as you introduce an abstraction not native to the browser, you may have to invent the universe for a feedback cycle.

Maintainability matters—if not for you, then for future you.

The more I author code as it will be run by the browser, the easier it will be to maintain that code over time, despite its perceived inferior developer ergonomics (remember, developer experience encompasses both the present and the future, i.e. “how simple are the ergonomics to build this now and maintain it into the future?”). I don’t mind typing some extra characters now if it means I don’t have to learn/relearn, setup, configure, integrate, update, maintain, and inevitably troubleshoot a build tool or framework later.

Monday, September 21st, 2020

Kinopio

Cennydd asked for recommendations on Twitter a little while back:

Can anyone recommend an outlining app for macOS? I’m falling out with OmniOutliner. Not Notion, please.

This was my response:

The only outlining tool that makes sense for my brain is https://kinopio.club/

It’s more like a virtual crazy wall than a virtual Dewey decimal system.

I’ve written before about how I prepare a conference talk. The first step involves a sheet of A3 paper:

I used to do this mind-mapping step by opening a text file and dumping my thoughts into it. I told myself that they were in no particular order, but because a text file reads left to right and top to bottom, they are in an order, whether I intended it or not. By using a big sheet of paper, I can genuinely get things down in a disconnected way (and later, I can literally start drawing connections).

Kinopio is like a digital version of that A3 sheet of paper. It doesn’t force any kind of hierarchy on your raw ingredients. You can clump things together, join them up, break them apart, or just dump everything down in one go. That very much suits my approach to preparing something like a talk (or a book). The act of organising all the parts into a single narrative timeline is an important challenge, but it’s one that I like to defer to later. The first task is braindumping.

When I was preparing my talk for An Event Apart Online, I used Kinopio.club to get stuff out of my head. Here’s the initial brain dump. Here are the final slides. You can kind of see the general gist of the slidedeck in the initial brain dump, but I really like that I didn’t have to put anything into a sequential outline.

In some ways, Kinopio is like an anti-outlining tool. It’s scrappy and messy—which is exactly why it works so well for the early part of the process. If I use a tool that feels too high-fidelity too early on, I get a kind of impedance mismatch between the state of the project and the polish of the artifact.

I like that Kinopio feels quite personal. Unlike Google Docs or other more polished tools, the documents you make with this aren’t really for sharing. Still, I thought I’d share my scribblings anyway.

Wednesday, September 16th, 2020

When you browse Instagram and find former Australian Prime Minister Tony Abbott’s passport number

This was an absolute delight to read! Usually when you read security-related write-ups, the fun comes from the cleverness of the techniques …but this involved nothing cleverer than dev tools. In this instance, the fun is in the telling of the tale.

Monday, September 14th, 2020

The tangled webs we weave - daverupert.com

So my little mashup, which was supposed to be just 3 technologies ended up exposing me to ~20 different technologies and had me digging into nth-level dependency source code after midnight.

The technologies within technologies that Dave lists here are like the contents of an upturned bag of Scrabble pieces.

The “modern” web stack really is quite something—we’ve done an amazing job of taking relatively straightforward tasks and making them complicated, over-engineered, and guaranteed to be out of date in no time at all.

The plumbing and glue code are not my favorite parts of the job. And often, you don’t truly know the limitations of any given dependency until you’re five thousand lines of code into a project. Massive sunk costs and the promise of rapid application development can come screeching to a halt when you run out of short cuts.

Wednesday, August 19th, 2020

Make Your Own Dev Tool | Amber’s Website

I love bookmarklets! I use them every day (I’m using one right now to post this link). Amber does a great job explaining what they are and how you can make one. I really like the way she frames them as your own personal dev tools!

Wednesday, August 5th, 2020

In a Land Before Dev Tools | Amber’s Website

A great little history lesson from Amber—ah, Firebug!

Friday, July 24th, 2020

Custom Property Coverup | Amber’s Website

This is a great bit of detective work by Amber! It’s the puzzling case of The Browser Dev Tools and the Missing Computed Values from Custom Properties.

Who do I know working on dev tools for Chrome, Firefox, or Safari that can help Amber find an answer to this mystery?

Saturday, July 18th, 2020

Your blog doesn’t need a JavaScript framework /// Iain Bean

If the browser needs to parse 296kb of JavaScript to show a list of blog posts, that’s not Progressive Enhancement, it’s using the wrong tool for the job.

A good explanation of the hydration problem in tools like Gatsby.

JavaScript is a powerful language that can do some incredible things, but it’s incredibly easy to jump to using it too early in development, when you could be using HTML and CSS instead.

Wednesday, May 13th, 2020

Sass and clamp

CSS got some pretty nifty features recently. There’s the min() and max() functions. If you use them for, say, width, you can use one rule where previously you would’ve needed two (a width declaration followed by either min-width or max-width). But they can also be applied to font-size! That’s very nifty—we’ve never had min-font-size or max-font-size properties.
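As a quick sketch with arbitrary values, these two declarations:

width: 100%;
max-width: 30em;

…can collapse into a single rule:

width: min(30em, 100%);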

There’s also the clamp() function. That allows you to set a minimum size, a default size, and a maximum size. Again, it can be used for lengths, like width, or for font-size.

Over on thesession.org, I’ve had some media queries in place for a while now that would increase the font-size for larger screens. It’s nothing crucial, just a nice-to-have so that on wide screens, the font is bumped up accordingly. I realised I could replace all those media queries with one clamp() statement, thanks to the vw (viewport width) unit:

font-size: clamp(1rem, 1.333vw, 1.5rem);

By default, the font-size is 1.333vw (1.333% of the viewport width), but it will never get smaller than 1rem and it will never get larger than 1.5rem.

That works, but there’s a bit of an issue with using raw vw units like that. If someone is on a wide screen and they try to adjust the font size, nothing will happen. The viewport width doesn’t change when you bump the font size up or down.

The solution is to mix in some kind of unit that does respond to the font size being bumped up or down (like, say, the rem unit). Handily, clamp() allows you to combine units, just like calc(). So I can do this:

font-size: clamp(1rem, 0.5rem + 0.666vw, 1.5rem);

The result is much the same as my previous rule, but now—thanks to the presence of that 0.5rem value—the font size responds to being adjusted by the user.

You could use a full 1rem in that default value:

font-size: clamp(1rem, 1rem + 0.333vw, 1.5rem);

…but if you do that, the minimum size (1rem) will never be reached—the default value will always be larger. So in effect it’s no different than saying:

font-size: min(1rem + 0.333vw, 1.5rem);

I mentioned this to Chris just the other day.

Anyway, I got the result I wanted. I wanted the font size to stay at the browser default size (usually 16 pixels) until the screen was larger than around 1200 pixels. From there, the font size gets gradually bigger, until it hits one and a half times the browser default (which would be 24 pixels if the default size started at 16). I decided to apply it to the :root element (which is html) using percentages:

:root {
  font-size: clamp(100%, 50% + 0.666vw, 150%);
}

(My thinking goes like this: if we take a screen width of 1200 pixels, then 1vw would be 12 pixels: 1200 divided by 100. So for a font size of 16 pixels, that would be 1.333vw. But because I’m combining it with half of the default font size—50% of 16 pixels = 8 pixels—I need to cut the vw value in half as well: 50% of 1.333vw = 0.666vw.)

So I’ve got the CSS rule I want. I dropped it in to the top of my file and…

I got an error.

There was nothing wrong with my CSS. The problem was that I was dropping it into a Sass file (.scss).

Perhaps I am showing my age. Do people even use Sass any more? I hear that post-processors usurped Sass’s dominance (although no-one’s ever been able to explain to me why they’re different to pre-processors like Sass; they both process something you’ve written into something else). Or maybe everyone’s just writing their CSS in JS now. I hear that’s a thing.

The Session is a looooong-term project so I’m very hesitant to use any technology that won’t stand the test of time. When I added Sass into the mix, back in—I think—2012 or so, I wasn’t sure whether it was the right thing to do, from a long-term perspective. But it did offer some useful functionality so I went ahead and used it.

Now, eight years later, it was having a hard time dealing with the new clamp() function. Specifically, it didn’t like the values being calculated through the addition of multiple units. I think it was clashing with Sass’s in-built ability to add units together.

I started to ask myself whether I should still be using Sass. I looked at which features I was using…

Variables. Well, now we’ve got CSS custom properties, which are even more powerful than Sass variables because they can be updated in real time. Sass variables are like const. CSS custom properties are like let.
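A minimal sketch of that let-like behaviour (the selector and colour values are illustrative): reassign a custom property in a media query and everything using it updates in the browser, with no recompile:

:root {
  --accent: #0077cc;
}

@media (prefers-color-scheme: dark) {
  :root {
    /* reassigned at run time, like let */
    --accent: #66aaff;
  }
}

a {
  color: var(--accent);
}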

Mixins. These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() functions are handy though when it comes to colours.
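To sketch the calc() point (a hypothetical example, not code from the site), arithmetic that once lived in a mixin can sit right in the declaration:

.stack > * + * {
  /* one and a half times the base spacing, computed by the browser */
  margin-top: calc(var(--spacing, 1rem) * 1.5);
}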

Nesting. I’ve never been a fan. I know it can make the source files look tidier but I find it can sometimes obfuscate what your final selectors are going to look like. So this wasn’t something I was using much anyway.

Multiple files. Ah! This is the thing I would miss most. Having separate .scss files for separate interface elements is very handy!

But globbing a bunch of separate .scss files into one .css file isn’t really a Sass task. That’s what build tools are for. In fact, that’s what I was already doing with my JavaScript files; I write them as individual .js files that then get concatenated into one .js file using Grunt.

(Yes, this project uses Grunt. I told you I was showing my age. But, you know what? It works. Though seeing as I’m mostly using it for concatenation, I could probably replace it with a makefile. If I’m going to use old technology, I might as well go all the way.)
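For what it’s worth, a concatenation-only makefile can be tiny. Something like this hypothetical rule would cover it (bearing in mind that the recipe line must be indented with a tab):

scripts.js: $(wildcard src/js/*.js)
	cat $^ > $@

In Make’s shorthand, $^ is the list of prerequisite files and $@ is the target, so the cat only re-runs when a source file changes.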

I swapped out Sass variables for CSS custom properties, mixins for calc(), and removed what little nesting I was doing. Then I stripped the Sass parts out of my Grunt file and replaced them with some concatenation and minification tasks. All of this makes no difference to the actual website, but it means I’ve got one less dependency …and I can use clamp()!

Remember a little while back when I was making a dark mode for my site? I made this observation:

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop.

It feels like something similar has happened with tools like Sass. Sass was the hare. CSS is the tortoise. Sass blazed the trail, but now native CSS can achieve much the same result.

It’s like when we used to need something like jQuery to do DOM Scripting succinctly using CSS selectors. Then we got things like querySelector() in JavaScript so we no longer needed the trailblazer.

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

You could argue that this is what happened with Flash. It certainly happened with jQuery and Sass. I’m pretty sure we’ll see the same cycle play out with frameworks like React.

Tuesday, March 10th, 2020

Lighthouse bookmarklet

I use Firefox. You should too. It’s fast, secure, and more privacy-focused than the leading browser from the big G.

When it comes to web development, the CSS developer tooling in Firefox is second to none. But when it comes to JavaScript and network-related debugging (like service workers), Chrome’s tools are better than Firefox’s (for now). For example, Chrome has a tab in its developer tools that lets you run Lighthouse on the currently open tab.

Yesterday, I got the Calibre newsletter, which always has handy performance-related links from Karolina. She pointed to a Lighthouse extension for Firefox. “Excellent!”, I thought, and I immediately installed it. But I had some qualms about installing a plug-in from Google into a browser from Mozilla, particularly as the plug-in page says:

This is not a Recommended Extension. Make sure you trust it before installing

Well, I gave it a go. It turns out that all it actually does is redirect to the online version of Lighthouse. “Hang on”, I thought. “This could just be a bookmarklet!”

So I immediately uninstalled the browser extension and made this bookmarklet:

Lighthouse

Drag that up to your desktop browser’s bookmarks toolbar. Press it whenever you’re on a site that you want to test.
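If you’d rather roll your own, a bookmarklet is nothing more than a javascript: URL saved as a bookmark. Something along these lines would do it, assuming the online version of Lighthouse still lives at web.dev/measure and accepts the page address in a url parameter (worth double-checking before relying on it):

javascript:location.href='https://web.dev/measure/?url='+encodeURIComponent(location.href)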

Tuesday, March 3rd, 2020

Telling the story of performance

At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.

The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.

It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.

There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.

When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.

Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.

Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.

This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.

Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.

SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.

If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.

The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.

Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now. Crux.run balances that with the story of performance over time.

Measuring performance is important. Communicating the story of performance is equally important.

Selectors Explained

I can see this coming in very handy at Codebar—pop any CSS selector in here and get a plain English explanation of what it’s doing.