Tags: javascript

Tuesday, November 12th, 2019

Third party

The web turned 30 this year. When I was back at CERN to mark this anniversary, there was a lot of introspection and questioning the direction that the web has taken. Everyone I know that uses the web is in agreement that tracking and surveillance are out of control. It seems only right to question whether the web has lost its way.

But here’s the thing: the technologies that enable tracking and surveillance didn’t exist in the early years of the web—JavaScript and cookies.

Without cookies, the web was stateless. This was by design. Now, I totally understand why cookies—or something like cookies—were needed. Without some way of keeping track of state, there’s no good way for a website to “remember” what’s in your shopping cart, or whether you’ve authenticated yourself.

But why would cookies ever need to work across domains? Authentication, shopping carts and all that good stuff can happen on the same domain. Third-party cookies, on the other hand, seem custom made for tracking and frankly, not much else.

Browsers allow you to disable third-party cookies, though it’s not yet the default. If enough people do it—and complain about the sites that stop working when third-party cookies are disabled—then maybe it can become the default.

Firefox is taking steps in this direction, automatically disabling some third-party cookies—the ones set by known trackers. Safari is also taking steps to prevent cross-site tracking. It’s not too late to change the tide of third-party cookies.

Then there’s third-party JavaScript.

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default. But I guess the precedent had been set with images and style sheets: they could be embedded regardless of whether their domain names matched yours. Still, this is executable code we’re talking about here: that’s quite a footgun that the web has given site owners. And boy, oh boy, has it been used by the worst people to do the most damage.

Again, as with cookies, if we were to imagine what the web would be like if JavaScript was restricted by a same-domain policy, there are certainly things that would be trickier to do.

  • Embedding video, audio, and maps would get a lot finickier.
  • Analytics would need to be self-hosted. I don’t think that would bother any site owners. An analytics platform like Google Analytics that tracks people across domains is doing it for its own benefit rather than that of site owners.
  • Advertising wouldn’t be creepy and annoying. Instead of what’s so euphemistically called “personalisation”, advertisers would have to rely on serving relevant ads based on the content of the site rather than an invasive psychological profile of the user. (I honestly think that advertisers would benefit from this kind of targeting.)

It’s harder to imagine putting the genie back in the bottle when it comes to third-party JavaScript than it is with third-party cookies. All the same, I wish that browsers made it easier to experiment with it. Just as I can choose to accept all cookies, reject all cookies, or only accept same-origin cookies, I wish I could accept all JavaScript, reject all JavaScript, or only accept same-origin JavaScript.

As it is, browsers are making it harder and harder to exercise any control over JavaScript at all. So we reach for third-party tools. We don’t call them JavaScript managers though. We call them ad blockers. But honestly, most of the ad-blocker users I know—myself included—are not bothered by the advertising; we’re bothered by the tracking. We should really call them surveillance blockers.

If third-party JavaScript weren’t the norm, not only would it make the web more secure, it would make it way more performant. Read the chapter on third parties in this year’s newly-released Web Almanac. The figures are staggering.

93% of pages include at least one third-party resource, 76% of pages issue a request to an analytics domain, the median page requests content from at least 9 unique third-party domains that represent 35% of their total network activity, and the most active 10% of pages issue a whopping 175 third-party requests or more.

I don’t think all the web’s performance ills are due to third-party scripts; developers are doing a bang-up job of making their sites big and bloated with their own self-hosted frameworks and code. But as long as third-party JavaScript is allowed onto a site, there’s a limit to how much good developers can do to improve the performance of their sites.

I go to performance-related conferences and you know who I’ve never seen at those events? The people who write the JavaScript for third-party tracking scripts. Those developers are wielding an outsized influence on the health of the web.

I’m very happy to see the work being done by Mozilla and Apple to normalise the idea of rejecting third-party cookies. I’d love to see the rejection of third-party JavaScript normalised in the same way. I know that it would make my life as a developer harder. But that’s of lesser importance. It would be better for the web.

Monday, November 11th, 2019

JavaScript | 2019 | The Web Almanac by HTTP Archive

It’s time for a look at the state of the web when it comes to JavaScript usage. Here’s the report powered by data from HTTP Archive:

JavaScript is the most costly resource we send to browsers; having to be downloaded, parsed, compiled, and finally executed. Although browsers have significantly decreased the time it takes to parse and compile scripts, download and execution have become the most expensive stages when JavaScript is processed by a web page.

Sending smaller JavaScript bundles to the browser is the best way to reduce download times, and in turn improve page performance. But how much JavaScript do we really use?

When it comes to frameworks and UI libraries, there are some interesting numbers. Given the volume of chatter in the dev world, you’d be forgiven for thinking that React is used on the majority of websites today. The real number? 4.6% of websites. That’s less than the number of websites using CSS custom properties.

This is reminding me of what I wrote about dev perception.

Thursday, November 7th, 2019

What I’ve learned about accessibility in SPAs

Nolan writes up what he learned making accessibility improvements to a single page app. The two big takeaways involve letting the browser do the work for you:

Here’s the best piece of accessibility advice for newbies: if something is a button, make it a <button>. If something is an input, make it an <input>. Don’t try to reinvent everything from scratch using <div>s and <span>s.

And then there are all the issues that crop up when you take over the task of handling navigations:

  • You need to manage focus yourself.
  • You need to manage scroll position yourself.

For classic server-rendered pages, most browser engines give you this functionality for free. You don’t have to code anything. But in an SPA, since you’re overriding the normal navigation behavior, you have to handle the focus yourself.
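
For example, a minimal sketch of the kind of focus and scroll management an SPA router ends up doing itself might look like this (afterNavigate is a hypothetical hook; where it gets called depends on the router):

// A sketch of SPA focus/scroll management — afterNavigate() is a hypothetical
// hook you’d call once the router has swapped in the new content.
function afterNavigate() {
  // Move focus to the new view's main heading so assistive tech announces it
  const heading = document.querySelector('h1');
  if (heading) {
    heading.setAttribute('tabindex', '-1');
    heading.focus();
  }
  // Reset the scroll position that a full page load would have reset for you
  window.scrollTo(0, 0);
}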

Tuesday, November 5th, 2019

JavaScript isn’t always available and it’s not the user’s fault by Adam Silver

It’s not a matter of if your users don’t have JavaScript—it’s a matter of when and how often.

The answer to that is around 1% of the time.

If you had an application bug which occurred 1% of the time, you’d fix it. No team I’ve come across would put up with that level of reliability.

The same goes for JavaScript. It’s not about people who turn it off. It’s about the nature of the web itself.

Thursday, October 31st, 2019

Indy maps

Remember when I wrote about adding travel maps to my site at the recent Indie Web Camp Brighton? I must confess that the last line I wrote was an attempt to catch a fish from the river of the lazy web:

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

In the spirit of Cunningham’s Law, I was hoping that somebody was going to respond with “It’s totally possible to use Stamen’s watercolour tiles for static maps, dumbass—look!” (to which my response would have been “thank you very much!”).

Alas, no such response was forthcoming. The hoped-for schooling never forthcame.

Still, I couldn’t quite let go of the idea of using those lovely watercolour maps somewhere on my site. But I had decided that dynamic maps would have been overkill for my archive pages:

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles.

Then I had a thought. What if I keep the static maps on my archive pages, but make them clickable? Then, on the other end of that link, I can have the dynamic version. In other words, what if I had a separate URL just for the dynamic maps?

This seemed like a good plan to me, so while I was travelling by Eurostar—the only way to travel—back from the lovely city of Antwerp where I had been speaking at Full Stack Europe, I started hacking away on making the dynamic maps even more dynamic. After all, now that they were going to have their own pages, I could go all out with any fancy features I wanted.

I kept coming back to my original goal:

I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I found a plug-in for Leaflet.js that animates polylines—thanks, Iván! With a bit of wrangling, I was able to get it to animate between the lat/lon points of whichever archive section the map was in. Rather than have it play out automatically, I also added a control so that you can start and stop the animation. While I was at it, I decided to make that “play/pause” button do something else too. Ahem.

If you’d like to see the maps in action, click the “play” button on any of these maps:

You get the idea. It’s all very silly really. It’s right up there with the time I made my sparklines playable. But that’s kind of the point. It’s my website so I can do whatever I want with it, no matter how silly.

First of all, the research department for adactio.com (that’s me) came up with the idea. Then that had to be sold in to upper management (that’s me too). A team was spun up to handle design and development (consisting of me and me). Finally, the finished result went live thanks to the tireless efforts of the adactio.com ops group (that would be me). Any feedback should be directed at the marketing department (no idea who that is).

Tuesday, October 29th, 2019

Using ES6 modules for progressive enhancement | Blog | Decade City

It looks like modules could be a great way to serve modern JavaScript to modern browsers, and serve polyfills or older code to older browsers.
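
The pattern relies on the fact that browsers which understand type="module" skip scripts marked nomodule, while older browsers do the opposite. A minimal sketch (the file names are made up for illustration):

<script type="module" src="/js/modern.js"></script>
<script nomodule defer src="/js/legacy.js"></script>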

Friday, October 25th, 2019

Offline Page Descriptions | Erik Runyon

Here’s a nice example of showing pages offline. It’s subtly different from what I’m doing on my own site, which goes to show that there’s no one-size-fits-all recipe when it comes to offline strategies.

The difference between HTML, CSS, and JavaScript | Zell Liew

HTML lets you create the structure of a website.

CSS lets you make the website look nice.

JavaScript lets you change HTML and CSS. Because it lets you change HTML and CSS, it can do tons of things.
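
A tiny illustration of that: one listener toggling a class changes the HTML (the class attribute) and, via the style sheet, how the page looks. The selector and class name here are hypothetical:

// Toggling a class on the body — the style sheet decides what "dark-mode" means
document.querySelector('button').addEventListener('click', function () {
  document.body.classList.toggle('dark-mode');
});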

Monday, October 21st, 2019

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
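
The linked script isn’t reproduced here, but a rough sketch of that approach might look something like this (the CDN URLs, tile URL, and map container are assumptions for illustration, not the actual code):

// A sketch of the approach described above — not the script linked to.
window.addEventListener('load', function () {
  const geos = document.querySelectorAll('.h-geo');
  if (geos.length < 3) {
    return;
  }
  // Pull in Leaflet's style sheet and script only when they're needed
  const style = document.createElement('link');
  style.rel = 'stylesheet';
  style.href = 'https://unpkg.com/leaflet/dist/leaflet.css';
  document.head.appendChild(style);
  const script = document.createElement('script');
  script.src = 'https://unpkg.com/leaflet/dist/leaflet.js';
  script.onload = function () {
    // Gather the lat/lon pairs from the h-geo markup
    const points = Array.from(geos).map(function (geo) {
      const lat = geo.querySelector('.p-latitude');
      const lon = geo.querySelector('.p-longitude');
      // p-latitude/p-longitude may be data elements (value attribute) or plain text
      return [
        parseFloat(lat.getAttribute('value') || lat.textContent),
        parseFloat(lon.getAttribute('value') || lon.textContent)
      ];
    });
    // Assumes a container element with id="map" already exists on the page
    const map = L.map('map');
    L.tileLayer('https://stamen-tiles.a.ssl.fastly.net/watercolor/{z}/{x}/{y}.jpg').addTo(map);
    const line = L.polyline(points).addTo(map);
    map.fitBounds(line.getBounds());
  };
  document.head.appendChild(script);
});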

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.
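
For anyone who’d rather not go hunting for that code, here’s a minimal JavaScript sketch of what those eleven steps boil down to (an illustration, not the PHP code mentioned above). The format encodes each point as an offset from the previous one, which keeps the resulting string short:

// A sketch of the Encoded Polyline Algorithm Format.
// Each value is scaled, left-shifted (inverted if negative), broken into
// 5-bit chunks, and turned into printable ASCII characters.
function encodeNumber(value) {
  let v = value < 0 ? ~(value << 1) : (value << 1);
  let output = '';
  while (v >= 0x20) {
    output += String.fromCharCode((0x20 | (v & 0x1f)) + 63);
    v >>= 5;
  }
  return output + String.fromCharCode(v + 63);
}

function encodePolyline(points) {
  // points is an array of [latitude, longitude] pairs
  let previousLat = 0;
  let previousLon = 0;
  let encoded = '';
  for (const [lat, lon] of points) {
    const latE5 = Math.round(lat * 1e5);
    const lonE5 = Math.round(lon * 1e5);
    encoded += encodeNumber(latE5 - previousLat) + encodeNumber(lonE5 - previousLon);
    previousLat = latE5;
    previousLon = lonE5;
  }
  return encoded;
}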

I’ve put these maps on all of the archive pages that also have calendar heatmaps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

Wednesday, October 16th, 2019

IndieWeb Link Sharing | Max Böck - Frontend Web Developer

Max describes how he does bookmarking on his own site—he’s got a bookmarklet for sharing links, like I do. But he goes further with a smart use of the “share target” section in his web app manifest, as described by Aaron.

By the way, Max’s upcoming talk at the Web Clerks conference in Vienna sounds like it’s going to be unmissable!

Sunday, October 13th, 2019

The “P” in Progressive Enhancement stands for “Pragmatism” - Andy Bell

With a Progressive Enhancement mindset, support actually means support. We’re not trying to create an identical experience: we’re creating a viable experience instead.

Also with Progressive Enhancement, it’s incredibly likely that your IE11 user, or your user on a low-powered device, or even your user on a poor connection won’t notice that they’re experiencing a “minor” experience because it’ll just work for them. This is the magic, right there. Everyone’s a winner.

Tuesday, October 8th, 2019

You really don’t need all that JavaScript, I promise

The transcript of a fantastic talk by Stuart. The latter half is a demo of Portals, but in the early part of the talk, he absolutely nails the rise in popularity of complex front-end frameworks:

I think the reason people started inventing client-side frameworks is this: that you lose control when you load another page. You click on a link, you say to the browser: navigate to here. And that’s it; it’s now out of your hands and in the browser’s hands. And then the browser gives you back control when the new page loads.

Sunday, October 6th, 2019

How to be a more productive developer | Go Make Things

Like Michael Pollan’s food rules, but for JavaScript:

  1. Plan your scripts out on paper.
  2. Stop obsessing over tools.
  3. Focus on solving problems.
  4. Maintain a library of snippets that you can reuse.

Thursday, October 3rd, 2019

Blog service workers and the chicken and the egg

This is a great little technique from Remy: when a service worker is being installed, you make sure that the page(s) the user is first visiting get added to a cache.
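
One way to approach that, sketched below, is to look up the not-yet-controlled client pages during the install event and add their URLs to the cache. This is a sketch of the general idea rather than Remy’s exact code:

// A sketch: during install, cache the page(s) that triggered the service
// worker's registration, since they were loaded before it took control.
addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('pages').then(function (cache) {
      return clients.matchAll({ includeUncontrolled: true }).then(function (allClients) {
        return cache.addAll(allClients.map(function (client) {
          return client.url;
        }));
      });
    })
  );
});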

Wednesday, October 2nd, 2019

The perfect responsive menu (2019) | Polypane responsive browser

I don’t know about “perfect” but this pretty much matches how I go about implementing responsive navigation (but only if there are too many links to show—visible navigation is almost always preferable).

Saturday, September 21st, 2019

Going offline with microformats

For the offline page on my website, I’ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page—title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.

It all worked fine, but as soon as I read Remy’s post about the forehead-slappingly brilliant technique he’s using, I knew I’d be switching my code over. Instead of using localStorage—or any other browser API—to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you’ve stored, and get at whatever information you need:

I realised I didn’t need to store anything. HTML is the API.

Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency—localStorage—and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.

Many years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata—data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it’s invisible—out of sight and out of mind.

In fact, that’s always been at the heart of one of the core principles behind microformats. Instead of duplicating information—once as data and again as metadata—repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.

So if you have a person’s contact details on a web page, rather than repeating that information somewhere else—in the head of the document, say—you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that’s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.

Here on my website, I’ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say “this is the title”, “this is the content”, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else’s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.
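
As a rough illustration, a post marked up with h-entry looks something like this (a simplified, hypothetical example rather than the actual markup on this site):

<article class="h-entry">
  <h1 class="p-name">Going offline with microformats</h1>
  <time class="dt-published" datetime="2019-09-21T12:00:00+01:00">September 21st, 2019</time>
  <div class="e-content">
    <p>The content of the post…</p>
  </div>
</article>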

When I read Remy’s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn’t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.

The markup for my offline page looks like this:

<h1>Offline</h1>
<p>Sorry. It looks like the network connection isn’t working right now.</p>
<div id="history">
</div>

I’ll populate that “history” div with information from a cache called “pages” that I’ve created using the Cache API in my service worker.
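
For context, that “pages” cache gets filled by the service worker as people browse. A fetch handler along these lines would do it (a sketch, not the actual service worker script on this site):

// A sketch of the kind of fetch handler that keeps a cache called "pages"
// topped up with the HTML pages a visitor has seen.
addEventListener('fetch', function (event) {
  const request = event.request;
  const accept = request.headers.get('Accept') || '';
  if (request.method !== 'GET' || !accept.includes('text/html')) {
    return;
  }
  event.respondWith(
    fetch(request)
      .then(function (response) {
        // Put a copy of the successful response into the "pages" cache…
        const copy = response.clone();
        caches.open('pages').then(function (cache) {
          cache.put(request, copy);
        });
        return response;
      })
      .catch(function () {
        // …and fall back to whatever is cached when the network fails
        return caches.match(request);
      })
  );
});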

I’m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. “Open this cache, then get the keys of that cache, then loop through the pages, then…” All of those thens would lead to some serious indentation without async/await.
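
For comparison, here’s roughly what just the first few of those steps look like as chained thens (a sketch, purely to illustrate the indentation problem):

// The same opening steps without async/await — each step nests inside the last
caches.open('pages').then(function (cache) {
  cache.keys().then(function (keys) {
    keys.forEach(function (request) {
      cache.match(request).then(function (response) {
        response.text().then(function (html) {
          // …and we're already five levels deep
        });
      });
    });
  });
});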

Async functions don’t strictly have to be named, but giving this one a name makes it easier to talk about. I’m calling it listPages, just like Remy is doing, and I’m making the listPages function execute immediately:

(async function listPages() {
...
})();

Now for the code to go inside that immediately-invoked function.

I create an array called browsingHistory that I’ll populate with the data I’ll use for that “history” div.

const browsingHistory = [];

I’m going to be parsing web pages later on, so I’m going to need a DOM parser. I give it the imaginative name of …parser.

const parser = new DOMParser();

Time to open up my “pages” cache. This is the first await statement. When the cache is opened, this promise will resolve and I’ll have access to this cache using the variable …cache (again with the imaginative naming).

const cache = await caches.open('pages');

Now I get the keys of the cache—that’s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I’ll have a variable that’s got a list of all those pages. You’ll never guess what I’m calling the variable that stores the keys of the cache. That’s right …keys!

const keys = await cache.keys();

Time to get looping. I’m getting each request in the list of keys using a for/of loop:

for (const request of keys) {
...
}

Inside the loop, I pull the page out of the cache using the match() method of the Cache API. I’ll store what I get back in a variable called response. As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.

const response = await cache.match(request);

I’m not interested in the headers of the response. I’m specifically looking for the HTML itself. I can get at that using the text() method. Again, it’s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I’ll have a variable called html that contains the body of the response.

const html = await response.text();

Now I can use that DOM parser I created earlier. I’ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn’t asynchronous so there’s no need for the await keyword.

const dom = parser.parseFromString(html, 'text/html');

Now I’ve got a DOM, which I have creatively stored in a variable called …dom.

I can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value “h-entry”:

if (dom.querySelector('.h-entry h1.p-name')) {
...
}

In this particular case, I’m also checking to see if the h1 element of the page is the title of the h-entry. That’s so that index pages (like my home page) won’t get past this if statement.

Inside the if statement, I’m going to store the data I retrieve from the DOM. I’ll save the data into an object called …data!

const data = new Object;

Well, the first piece of data isn’t actually in the markup: it’s the URL of the page. I can get that from the request variable in my for loop.

data.url = request.url;

I’m going to store the timestamp for this h-entry. I can get that from the datetime attribute of the time element marked up with a class of dt-published.

data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));

While I’m at it, I’m going to grab the human-readable date from the innerText property of that same time.dt-published element.

data.published = dom.querySelector('.h-entry .dt-published').innerText;

The title of the h-entry is in the innerText of the element with a class of p-name.

data.title = dom.querySelector('.h-entry .p-name').innerText;

At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don’t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I’ll grab that now.

data.description = dom.querySelector('meta[name="description"]').getAttribute('content');

Alright. I’ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I’ll stick all of that data into my browsingHistory array.

browsingHistory.push(data);

My if statement and my for/of loop are finished at this point. Here’s how the whole loop looks:

for (const request of keys) {
  const response = await cache.match(request);
  const html = await response.text();
  const dom = parser.parseFromString(html, 'text/html');
  if (dom.querySelector('.h-entry h1.p-name')) {
    const data = new Object;
    data.url = request.url;
    data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
    data.published = dom.querySelector('.h-entry .dt-published').innerText;
    data.title = dom.querySelector('.h-entry .p-name').innerText;
    data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
    browsingHistory.push(data);
  }
}

That’s the data collection part of the code. Now I’m going to take all that yummy information and output it onto the page.

First of all, I want to make sure that the browsingHistory array isn’t empty. There’s no point going any further if it is.

if (browsingHistory.length) {
...
}

Within this if statement, I can do what I want with the data I’ve put into the browsingHistory array.

I’m going to arrange the data by date published. I’m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here’s how I sort the browsingHistory array according to the timestamp property of each item within it:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I’m going to concatenate some strings. This is the string of HTML text that will eventually be put into the “history” div. I’m storing the markup in a string called …markup (my imagination knows no bounds).

let markup = '<p>But you still have something to read:</p>';

I’m going to add a chunk of markup for each item of data.

browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

With my markup assembled, I can now insert it into the “history” part of my offline page. I’m using the handy insertAdjacentHTML() method to do this.

document.getElementById('history').insertAdjacentHTML('beforeend', markup);

Here’s what my finished JavaScript looks like:

<script>
(async function listPages() {
  const browsingHistory = [];
  const parser = new DOMParser();
  const cache = await caches.open('pages');
  const keys = await cache.keys();
  for (const request of keys) {
    const response = await cache.match(request);
    const html = await response.text();
    const dom = parser.parseFromString(html, 'text/html');
    if (dom.querySelector('.h-entry h1.p-name')) {
      const data = new Object;
      data.url = request.url;
      data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
      data.published = dom.querySelector('.h-entry .dt-published').innerText;
      data.title = dom.querySelector('.h-entry .p-name').innerText;
      data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
      browsingHistory.push(data);
    }
  }
  if (browsingHistory.length) {
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    let markup = '<p>But you still have something to read:</p>';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    document.getElementById('history').insertAdjacentHTML('beforeend', markup);
  }
})();
</script>

I’m pretty happy with that. It’s not too long but it’s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.

If you’ve got an offline strategy for your website, and you’re using h-entry to mark up your content, feel free to use that code.

If you don’t have an offline strategy for your website, there’s a book for that.

Monday, September 16th, 2019

The Book | The Lean Web

This is such a great little web book from Chris Ferdinandi that you can read online for free.

  1. Intro
  2. Modern Best Practices
  3. How did we get here?
  4. Lean Web Principles
  5. What now?

Friday, September 13th, 2019

5G Will Definitely Make the Web Slower, Maybe | Filament Group, Inc.

The Jevons Paradox in action:

Faster networks should fix our performance problems, but so far, they have had an interesting if unintentional impact on the web. This is because historically, faster network speed has enabled developers to deliver more code to users—in particular, more JavaScript code.

And because it’s JavaScript we’re talking about:

Even if folks are on a new fast network, they’re very likely choking on the code we’re sending, rendering the potential speed improvements of 5G moot.

The longer I spend in this field, the more convinced I am that web performance is not a technical problem; it’s a people problem.

Friday, September 6th, 2019

Offline listings

This is a brilliant technique by Remy!

If you’ve got a custom offline page that lists previously-visited pages (like I do on my site), you don’t have to choose between localStorage or IndexedDB—you can read the metadata straight from the HTML of the cached pages instead!

This seems forehead-smackingly obvious in hindsight. I’m totally stealing this.