Journal tags: indieweb


Feeds

A little while back, Marcus Herrmann wrote about making RSS more visible again with a /feeds page. Here’s his feeds page. Here’s Remy’s.

Seems like a good idea to me. I’ve made mine:

adactio.com/feeds

As well as linking to the usual RSS feeds (blog posts, links, notes), it’s also got an explanation of how you can subscribe to a customised RSS feed using tags.

Then, earlier today, I was chatting with Matt on Twitter and he asked:

btw do you share your blogroll anywhere?

So now I’ve added another URL:

adactio.com/feeds/subscriptions

That’s got a link to my OPML file, exported from my feed reader, and a list of the (current) RSS feeds that I’m subscribed to.

I like the idea of blogrolls making a comeback. And webrings.

Dark mode revisited

I added a dark mode to my website a while back. It was a fun thing to do during Indie Web Camp Amsterdam last year.

I tied the colour scheme to the operating system level. If you choose a dark mode in your OS, my website will adjust automatically thanks to the prefers-color-scheme: dark media query.

But I’ve seen notes from a few friends, not about my site specifically, but about how they like having an explicit toggle for dark mode (as well as the media query). Whenever I read those remarks, I’d think “I’m really not sure I’ve got time to deal with adding that kind of toggle to my site.”

But then I realised, “Jeremy, you absolute muffin! You’ve had a theme switcher on your website for almost two decades now!”

Doh! I had forgotten about that theme switcher. It dates back to the early days of CSS. I wanted my site to be a demonstration of how you could apply different styles to the same underlying markup (this was before the CSS Zen Garden came along). Those themes are very dated now, but if you like you can view my site with a Zeldman theme or a sci-fi theme.

To offer a dark-mode theme for my site, all I had to do was take the default stylesheet, pull out the custom properties from the prefers-color-scheme: dark media query, and done. It took less than five minutes.
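
In other words, the dark theme style sheet is nothing more than the custom property over-rides, served unconditionally. Something like this (a sketch—the colour values are the same ones from my prefers-color-scheme media query):

/* Theme: dark. The same custom properties that normally live
   inside the prefers-color-scheme: dark media query, minus
   the media query itself. */
:root {
  --background-color: #111416;
  --text-color: #ccc;
  --link-color: #f96;
}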

So if you want to view my site in dark mode, it’s one of the options in the “Customise” dropdown on every page of the website.

Reading

At the beginning of the year, Remy wrote about extracting Goodreads metadata so he could create his end-of-year reading list. More recently, Mark Llobrera wrote about how he created a visualisation of his reading history. In his case, he’s using JSON to store the information.

This kind of JSON storage is exactly what Tom Critchlow proposes in his post, Library JSON - A Proposal for a Decentralized Goodreads:

Thinking through building some kind of “web of books” I realized that we could use something similar to RSS to build a kind of decentralized GoodReads powered by indie sites and an underlying easy to parse format.

His proposal looks kind of similar to what Mark came up with. There’s a title, an author, an image, and some kind of date for when you started and/or finished reading the book.

Matt then points out that RSS gets close to the data format being suggested and asks how about using RSS?:

Rather than inventing a new format, my suggestion is that this is RSS plus an extension to deal with books. This is analogous to how the podcast feeds are specified: they are RSS plus custom tags.

Like Matt, I’m in favour of re-using existing wheels rather than inventing new ones, mostly to avoid a 927 situation.

But all of these proposals—whether JSON or RSS—involve the creation of a separate file, and yet the information is originally published in HTML. Along the lines of Matt’s idea, I could imagine extending the h-entry collection of class names to allow for books (or films, or other media). It already handles images (with u-photo). I think the missing fields are the date-related ones: when you start and finish reading. Those fields are present in a different microformat, h-event in the form of dt-start and dt-end. Maybe they could be combined:


<article class="h-entry h-event h-review">
<h1 class="p-name p-item">Book title</h1>
<img class="u-photo" src="image.jpg" alt="Book cover.">
<p class="p-summary h-card">Book author</p>
<time class="dt-start" datetime="YYYY-MM-DD">Start date</time>
<time class="dt-end" datetime="YYYY-MM-DD">End date</time>
<div class="e-content">Remarks</div>
<data class="p-rating" value="5">★★★★★</data>
<time class="dt-published" datetime="YYYY-MM-DDThh:mm">Date of this post</time>
</article>

That markup is simultaneously a post (h-entry) and an event (h-event) and you can even throw in h-card for the book author (as well as h-review if you like to rate the books you read). It can be converted to RSS and also converted to .ics for calendars—those parsers are already out there. It’s ready for aggregation and it’s ready for visualisation.
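
Run that markup through a microformats parser and you get a predictable JSON structure. Here’s a rough sketch of the parsed result (trimmed and hand-written, so treat it as illustrative of the microformats2 parsing format rather than real parser output):

{
  "items": [{
    "type": ["h-entry", "h-event", "h-review"],
    "properties": {
      "name": ["Book title"],
      "photo": ["image.jpg"],
      "start": ["YYYY-MM-DD"],
      "end": ["YYYY-MM-DD"],
      "rating": ["5"],
      "published": ["YYYY-MM-DDThh:mm"]
    }
  }]
}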

I publish very minimal reading posts here on adactio.com. What little data is there isn’t very structured—I don’t even separate the book title from the author. But maybe I’ll have a little play around with turning these h-entries into combined h-entry/event posts.

Outlet

We’re all hunkering down in our homes. That seems to be true of our online homes too.

People are sharing their day-to-day realities on their websites and I’m here for it. Like, I’m literally here for it. I can’t go anywhere.

On an episode of the Design Observer podcast, Jessica Helfand puts this into context:

During times of crisis, people want to make things. There’s a surge in the keeping of journals when there’s a war… it’s a response to the feeling of vulnerability, like corporeal vulnerability. My life is under attack. I am imprisoned in my house. I have to make something to say I was here, to say I mattered, to say this day happened… It’s like visual graphic reassurance.

It’s not just about crisis though. Scott Kelly talks about the value of keeping a journal during prolonged periods of repetition. And he should know—he spent a year in space:

NASA has been studying the effects of isolation on humans for decades, and one surprising finding they have made is the value of keeping a journal. Throughout my yearlong mission, I took the time to write about my experiences almost every day. If you find yourself just chronicling the days’ events (which, under the circumstances, might get repetitive) instead try describing what you are experiencing through your five senses or write about memories. Even if you don’t wind up writing a book based on your journal like I did, writing about your days will help put your experiences in perspective and let you look back later on what this unique time in history has meant.

That said, just stringing a coherent sentence together can seem like too much during The Situation. That’s okay. Your online home can also provide relief and distraction through tidying up. As Ethan puts it:

let a website be a worry stone

It can be comforting to get into the zone doing housekeeping on your website. How about a bit of a performance audit? Or maybe look into more fluid typography? Or perhaps now is the time to tinker about with that dark mode you’ve been planning?

Whatever you end up doing, my point is that your website is quite literally an outlet. While you’re stuck inside, your website is not just a place you can go to, it’s a place you can control, a place you can maintain, a place you can tidy up, a place you can expand. Most of all, it’s a place you can lose yourself in, even if it’s just for a little while.

Indie Web Camp London 2020

Do you have plans for the weekend of March 14th and 15th?

If you live anywhere near London, might I suggest that you sign up for Indie Web Camp.

Cheuk and Ana are putting it together with assistance from Calum. As always, there will be one day of BarCamp-style discussions, followed by a fun hands-on day of making.

If you’re wondering whether this is for you, ask yourself if any of these situations apply:

  • You don’t have your own website yet, but you want one.
  • You have your own website, but you need some help with it.
  • You have some ideas about the independent web.
  • You have your own website but you never seem to find the time to update it.
  • You’d like to help other people with their websites.

If you recognise yourself in any one of those scenarios, then you should definitely come along to Indie Web Camp London 2020!

2019 in numbers

I posted to adactio.com 1,600 times in 2019.

In amongst those notes were:

If you like, you can watch all that activity plotted on a map.


Away from this website in 2019:

Words I wrote in 2019

I wrote just over one hundred blog posts in 2019. That’s even more than I wrote in 2018, which I’m very happy with.

Here are eight posts from throughout the year that I think are a good representative sample. I like how these turned out.

I hope that I’ll write as many blog posts in 2020.

I’m pretty sure that I will also continue to refer to them as blog posts, not blogs. I may be the last holdout of this nomenclature in 2020. I never planned to die on this hill, but here we are.

Actually, seeing as this is technically my journal rather than my blog, I’ll just call them journal entries.

Here’s to another year of journal entries.

On this day

I’m in San Francisco to speak at An Event Apart, which kicks off tomorrow. But I arrived a few days early so that I could attend Indie Web Camp SF.

Yesterday was the discussion day. Most of the attendees were seasoned indie web campers, so quite a few of the discussions went deep on some of the building blocks. It was a good opportunity to step back and reappraise technology decisions.

Today is the day for making, tinkering, fiddling, and hacking. I had a few different ideas of what to do, mostly around showing additional context on my blog posts. I could, for instance, show related posts—other blog posts (or links) that have similar tags attached to them.

But I decided that a nice straightforward addition would be to show a kind of “on this day” context. After all, I’ve been writing blog posts here for eighteen years now; chances are that if I write a blog post on any given day, there will be something in the archives from that same day in previous years.

So that’s what I’ve done. I’ll be demoing it shortly here at Indie Web Camp, but you can see it in action now. If you look at the page for this blog post, you should see a section at the end with the heading “Previously on this day”. There you’ll see links to other posts I’ve written on December 8th in years gone by.

It’s quite a mixed bag. There’s a post about when I used to have a webcam from sixteen years ago. There’s a report from the Flash On The Beach conference from thirteen years ago (I wrote that post while I was in Berlin). And five years ago, I was writing about markup patterns for web components.

I don’t know if anyone other than me will find this feature interesting (but as it’s my website, I don’t really care). Personally, I find it fascinating to see how my writing has changed, both in terms of subject matter and tone.

Needless to say, the further back in time you go, the more chance there is that the links in my blog posts will no longer work. That’s a real shame. But then it’s a pleasant surprise when I find something that I linked to that is still online after all this time. And I can take comfort from the fact that if anyone has ever linked to anything I’ve written on my website, then those links still work.

Indy maps

Remember when I wrote about adding travel maps to my site at the recent Indie Web Camp Brighton? I must confess that the last line I wrote was an attempt to catch a fish from the river of the lazy web:

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

In the spirit of Cunningham’s Law, I was hoping that somebody was going to respond with “It’s totally possible to use Stamen’s watercolour tiles for static maps, dumbass—look!” (to which my response would have been “thank you very much!”).

Alas, no such response was forthcoming. The hoped-for schooling never forthcame.

Still, I couldn’t quite let go of the idea of using those lovely watercolour maps somewhere on my site. But I had decided that dynamic maps would have been overkill for my archive pages:

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles.

Then I had a thought. What if I keep the static maps on my archive pages, but make them clickable? Then, on the other end of that link, I can have the dynamic version. In other words, what if I had a separate URL just for the dynamic maps?

This seemed like a good plan to me, so while I was travelling by Eurostar—the only way to travel—back from the lovely city of Antwerp, where I had been speaking at Full Stack Europe, I started hacking away at making the dynamic maps even more dynamic. After all, now that they were going to have their own pages, I could go all out with any fancy features I wanted.

I kept coming back to my original goal:

I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I found a plug-in for Leaflet.js that animates polylines—thanks, Iván! With a bit of wrangling, I was able to get it to animate between the lat/lon points of whichever archive section the map was in. Rather than have it play out automatically, I also added a control so that you can start and stop the animation. While I was at it, I decided to make that “play/pause” button do something else too. Ahem.
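
The wiring is roughly like this (a sketch that assumes Iván’s Leaflet.Polyline.SnakeAnim plugin, an existing Leaflet map and array of points, and a hypothetical play button):

// The plugin adds a snakeIn() method to Leaflet polylines;
// snakingSpeed controls how quickly the line gets drawn.
const route = L.polyline(points, {snakingSpeed: 200}).addTo(map);
document.querySelector('button.play').addEventListener('click', function () {
  route.snakeIn();
});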

If you’d like to see the maps in action, click the “play” button on any of these maps:

You get the idea. It’s all very silly really. It’s right up there with the time I made my sparklines playable. But that’s kind of the point. It’s my website so I can do whatever I want with it, no matter how silly.

First of all, the research department for adactio.com (that’s me) came up with the idea. Then that had to be sold in to upper management (that’s me too). A team was spun up to handle design and development (consisting of me and me). Finally, the finished result went live thanks to the tireless efforts of the adactio.com ops group (that would be me). Any feedback should be directed at the marketing department (no idea who that is).

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.
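
For reference, those eleven steps translate into JavaScript something like this (a sketch written with the benefit of hindsight—not code I had to hand at the time):

// Encode a single signed value (steps 1 to 11).
function encodeSigned(value) {
  // Steps 3 to 5: left-shift one bit, inverting if negative.
  let chunk = value < 0 ? ~(value << 1) : (value << 1);
  let encoded = '';
  // Steps 6 to 11: 5-bit chunks in reverse order, OR with 0x20
  // while more chunks follow, add 63, convert to ASCII.
  while (chunk >= 0x20) {
    encoded += String.fromCharCode((0x20 | (chunk & 0x1f)) + 63);
    chunk >>>= 5;
  }
  return encoded + String.fromCharCode(chunk + 63);
}

// Step 2 is the multiplication by 1e5. Each point is encoded
// as a delta from the previous point.
function encodePolyline(points) {
  let prevLat = 0;
  let prevLon = 0;
  let output = '';
  for (const [lat, lon] of points) {
    const latE5 = Math.round(lat * 1e5);
    const lonE5 = Math.round(lon * 1e5);
    output += encodeSigned(latE5 - prevLat) + encodeSigned(lonE5 - prevLon);
    prevLat = latE5;
    prevLon = lonE5;
  }
  return output;
}

Google’s documentation gives [[38.5, -120.2], [40.7, -120.95], [43.252, -126.453]] as a worked example; it should encode to _p~iF~ps|U_ulLnnqC_mqNvxq`@.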

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
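
The gist of it looks something like this (a simplified sketch—the real code has more checks, and the element ID and URLs here are stand-ins):

window.addEventListener('load', function () {
  // Gather the geotagged posts: h-geo marks up coordinates
  // with p-latitude and p-longitude.
  const geos = document.querySelectorAll('.h-geo');
  if (geos.length < 3) return;
  const points = Array.from(geos, function (geo) {
    return [
      parseFloat(geo.querySelector('.p-latitude').textContent),
      parseFloat(geo.querySelector('.p-longitude').textContent)
    ];
  });
  // Lazy-load Leaflet's style sheet and script.
  const styles = document.createElement('link');
  styles.rel = 'stylesheet';
  styles.href = 'https://unpkg.com/leaflet/dist/leaflet.css';
  document.head.appendChild(styles);
  const script = document.createElement('script');
  script.src = 'https://unpkg.com/leaflet/dist/leaflet.js';
  script.onload = function () {
    // Assumes an empty div with an ID of "map" on the page.
    const map = L.map('map').fitBounds(points);
    L.tileLayer('https://stamen-tiles.{s}.ssl.fastly.net/watercolor/{z}/{x}/{y}.jpg', {
      attribution: 'Map tiles by Stamen Design'
    }).addTo(map);
    L.polyline(points).addTo(map);
  };
  document.head.appendChild(script);
});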

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.
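
The resulting URLs look something like this (a sketch of Mapbox’s Static Images API—the style ID, dimensions, and token are placeholders):

// "path-2+b52" overlays a 2px line in the colour #b52;
// the encoded polyline itself also needs to be URL-encoded.
const encoded = encodeURIComponent(encodePolyline(points));
const src = 'https://api.mapbox.com/styles/v1/mapbox/light-v10/static/'
  + 'path-2+b52(' + encoded + ')/auto/600x300'
  + '?access_token=MY_MAPBOX_TOKEN';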

I’ve put these maps on any of the archive pages that also have calendar heatmaps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

Something for the weekend

Your weekends are valuable. Spend them wisely. I have some suggestions on how you might spend next weekend, October 19th and 20th, depending on where you are in the world.

If you’re in the Bay Area, or anywhere near San Francisco, I highly recommend that you go to Science Hack Day—two days of science, hacking, and fun. This will be the last one in San Francisco so don’t miss your chance.

If you’re in the south of England, or anywhere near Brighton, come along to Indie Web Camp. Saturday will feature discussions on owning your data. Sunday will be a day of doing. I’ve written about previous Indie Web Camps before, and I really can’t recommend it highly enough!

Do me a favour and register for a spot—it’s free—so I’ve got some idea of numbers. Looking forward to seeing you there!

Dark mode

I had a very productive time at Indie Web Camp Amsterdam. The format really lends itself to getting the most out of a weekend—one day of discussions followed by one day of hands-on making and doing. You should definitely come along to Indie Web Camp Brighton on October 19th and 20th to experience it for yourself.

By the end of the “doing” day, I had something fun to demo—a dark mode for my website.

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Applying the dark mode styles is pretty straightforward in theory. You put the styles inside this media query:

@media (prefers-color-scheme: dark) {
...
}

Rather than over-riding every instance of a colour in my style sheet, I decided I’d do a little bit of refactoring first and switch to using CSS custom properties (or variables, if you will).

:root {
  --background-color: #fff;
  --text-color: #333;
  --link-color: #b52;
}
body {
  background-color: var(--background-color);
  color: var(--text-color);
}
a {
  color: var(--link-color);
}

Then I can over-ride the custom properties without having to touch the already-declared styles:

@media (prefers-color-scheme: dark) {
  :root {
  --background-color: #111416;
    --text-color: #ccc;
    --link-color: #f96;
  }
}

All in all, I have about a dozen custom properties for colours—variations for text, backgrounds, and interface elements like links and buttons.

By using custom properties and the prefers-color-scheme media query, I was 90% of the way there. But the devil is in the details.

I have SVGs of sparklines on my homepage. The SVG has a hard-coded colour value in the stroke attribute of the path element that draws the sparkline. Fortunately, this can be over-ridden in the style sheet:

svg.activity-sparkline path {
  stroke: var(--text-color);
}

The real challenge came with the images I use in the headers of my pages. They’re JPEGs with white corners on one side and white gradients on the other.


I could make them PNGs to get transparency, but the file size would shoot up—they’re photographic images (with a little bit of scan-line treatment) so JPEGs (or WEBPs) are the better format. Then I realised I could use CSS to recreate the two effects:

  1. For the cut-out triangle in the top corner, there’s clip-path.
  2. For the gradient, there’s …gradients!

background-image: linear-gradient(
  to right,
  transparent 50%,
  var(--background-color) 100%
);

Oh, and I noticed that when I applied the clip-path for the corners, it had no effect in Safari. It turns out that after half a decade of support, it still only exists with -webkit prefix. That’s just ridiculous. At this point we should be burning vendor prefixes with fire. I can’t believe that Apple still ships standardised CSS properties that only work with a prefix.
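
So my style sheet ended up carrying both versions (the selector and polygon values here are illustrative, not my actual ones):

/* The cut-out triangle in the top corner. Safari still
   requires the -webkit- prefix. */
.page-header img {
  -webkit-clip-path: polygon(2em 0, 100% 0, 100% 100%, 0 100%, 0 2em);
  clip-path: polygon(2em 0, 100% 0, 100% 100%, 0 100%, 0 2em);
}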

In order to apply the CSS clip-path and gradient, I needed to save out the images again, this time without the effects baked in. I found the original Photoshop file I used to export the images. But I don’t have a copy of Photoshop any more. I haven’t had a copy of Photoshop since Adobe switched to their Mafia model of pricing. A quick bit of searching turned up Photopea, which is pretty much an entire recreation of Photoshop in the browser. I was able to open my old PSD file and re-export my images.


Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop. I could probably do those raster scan lines with CSS if I were smart enough.


This is what I demo’d at the end of Indie Web Camp Amsterdam, and I was pleased with the results. But fate had an extra bit of good timing in store for me.

The very next day at the View Source conference, Melanie Richards gave a fantastic talk called The Tailored Web: Effectively Honoring Visual Preferences (seriously, conference organisers, you want this talk on your line-up). It was packed with great insights and advice on implementing dark mode, like this little gem for adjusting images:

@media (prefers-color-scheme: dark) {
  img {
    filter: brightness(.8) contrast(1.2);
  }
}

Melanie also pointed out that you can indicate the presence of dark mode styles to browsers, although the mechanism is yet to shake out. You can do it in CSS:

:root {
  color-scheme: light dark;
}

But you can also do it in HTML, with a meta element—the exact name of this meta value is part of what’s still shaking out:
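
<meta name="color-scheme" content="light dark">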

That allows browsers to swap out replaced content: interface elements like form fields and dropdowns.

Oh, and one other addition I added after the fact was swapping out map imagery by using the picture element to point to darker map tiles:
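
(What follows is a sketch rather than my actual markup—the image URLs are placeholders.)

<picture>
  <source media="(prefers-color-scheme: dark)" srcset="/images/map-dark.png">
  <img src="/images/map-light.png" alt="A map">
</picture>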



So now I’ve got a dark mode for my website. Admittedly, it’s for just one of the eight style sheets. I’ve decided that, while I’ll update my default styles at every opportunity, I’m going to preserve the other skins as they are, like the historical museum pieces they are.

If you’re on the latest version of iOS, go ahead and toggle the light and dark options in your system preferences to flip between this site’s colour schemes.

Travel talk

It’s been a busy two weeks of travelling and speaking. Last week I spoke at Finch Conf in Edinburgh, Code Motion in Madrid, and Generate CSS in London. This week I was at Indie Web Camp, View Source, and Fronteers, all in Amsterdam.

The Edinburgh-Madrid-London whirlwind wasn’t ideal. I gave the opening talk at Finch Conf, then immediately jumped in a taxi to get to the airport to fly to Madrid, so I missed all the excellent talks. I had FOMO for a conference I actually spoke at.

I did get to spend some time at Code Motion in Madrid, but that was a waste of time. It was one of those multi-track events where the trade show floor is prioritised over the talks (and the speakers don’t get paid). I gave my talk to a mostly empty room—the classic multi-track experience. On the plus side, I had a wonderful time with Jessica exploring Madrid’s many tapas delights. The food and drink made up for the sub-par conference.

I flew back from Madrid to the UK, and immediately went straight to London to deliver the closing talk of Generate CSS. So once again, I didn’t get to see any of the other talks. That’s a real shame—it sounds like they were all excellent.

The day after Generate though, I took the Eurostar to Amsterdam. That’s where I’ve been ever since. There were just as many events as in the previous week, but because they were all in Amsterdam, I could savour them properly, instead of spending half my time travelling.

Indie Web Camp Amsterdam was excellent, although I missed out on the afternoon discussions on the first day because I popped over to the Mozilla Tech Speakers event happening at the same time. I was there to offer feedback on lightning talks. I really, really enjoyed it.

I’d really like to do more of this kind of thing. There aren’t many activities I feel qualified to give advice on, but public speaking is an exception. I’ve got plenty of experience that I’m eager to share with up-and-coming speakers. Also, I got to see some really great lightning talks!

Then it was time for View Source. There was a mix of talks, panels, and breakout conversation corners. I saw some fantastic talks by people I hadn’t seen speak before: Melanie Richards, Ali Spittel, Sharell Bryant, and Tejas Kumar. I gave the closing keynote, which was warmly received—that’s always very gratifying.

After one day of rest, it was time for Fronteers. This was where Remy and I gave the joint talk we’ve been working on.

Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.

I’m happy to say that it went off without a hitch. Remy definitely had the tougher task—he did a live demo. Needless to say, he did it flawlessly. It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

I’ve got some more speaking engagements ahead of me. Most of them are in Europe so I’m going to do my utmost to travel to them by train. Flying is usually more convenient but it’s terrible for my carbon footprint. I’m feeling pretty guilty about that Madrid trip; I need to make amends.

I’ll be travelling to France next week for Paris Web. Taking the Eurostar is a no-brainer for that one. Straight after that Jessica and I will be going to Frankfurt for the book fair. Taking the train from Paris to Frankfurt will be nice and straightforward.

I’ll be back in Brighton for Indie Web Camp on the weekend of October 19th and 20th—you should come!—and then I’ll be heading off to Antwerp for Full Stack Europe. Anywhere in Belgium is easily reachable by train so that’ll be another Eurostar journey.

After that, it gets a little trickier. I’ll be going to Berlin for Beyond Tellerrand but I’m not sure I can make it work by train. Same goes for Web Clerks in Vienna. Cities that far east are tough to get to by train in a reasonable amount of time (although I realise that, compared to many others, I have the luxury of spending time travelling by train).

Then there are the places that I can only get to by plane. There’s the United States. I’ll be speaking at An Event Apart in San Francisco in December. A flight is unavoidable. Last time we went to the States, Jessica and I travelled by ocean liner. But that isn’t any better for the environment, given the low-grade fuel burned by ships.

And then there’s Ireland. I make trips back there to see my mother, but there’s no alternative to flying or taking a ferry—neither are ideal for the environment. At least I can offset the carbon from my flights; the travel equivalent to putting coins in the swear jar.

Don’t get me wrong—I’m not moaning about the amount of travel involved in going to conferences and workshops. It’s fantastic that I get to go to new and interesting places. That’s something I hope I never take for granted. But I can’t ignore the environmental damage I’m doing. I’ll be making more of an effort to travel by train to Europe’s many excellent web events. While I’m at it, I can ask Paul for his trainspotter expertise.

Going offline with microformats

For the offline page on my website, I’ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page—title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.

It all worked fine, but as soon as I read Remy’s post about the forehead-slappingly brilliant technique he’s using, I knew I’d be switching my code over. Instead of using localStorage—or any other browser API—to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you’ve stored, and get at whatever information you need:

I realised I didn’t need to store anything. HTML is the API.

Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency—localStorage—and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.

Many years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata—data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it’s invisible—out of sight and out of mind.

In fact, that’s always been at the heart of one of the core principles behind microformats. Instead of duplicating information—once as data and again as metadata—repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.

So if you have a person’s contact details on a web page, rather than repeating that information somewhere else—in the head of the document, say—you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that’s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.
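
For example, contact details that are already visible on a page could be marked up like this (a hypothetical snippet—the name, URL, and email address are made up):

<p class="h-card">
  <a class="p-name u-url" href="https://example.com/">Jane Doe</a>,
  <a class="u-email" href="mailto:jane@example.com">jane@example.com</a>
</p>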

Here on my website, I’ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say “this is the title”, “this is the content”, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else’s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.

When I read Remy’s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn’t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.

The markup for my offline page looks like this:

<h1>Offline</h1>
<p>Sorry. It looks like the network connection isn’t working right now.</p>
<div id="history">
</div>

I’ll populate that “history” div with information from a cache called “pages” that I’ve created using the Cache API in my service worker.

I’m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. “Open this cache, then get the keys of that cache, then loop through the pages, then…” All of those thens would lead to some serious indentation without async/await.

This async function doesn’t strictly need a name—an anonymous function would work just as well—but naming it makes the code easier to talk about. I’m calling this one listPages, just like Remy is doing. I’m making the listPages function execute immediately:

(async function listPages() {
...
})();

Now for the code to go inside that immediately-invoked function.

I create an array called browsingHistory that I’ll populate with the data I’ll use for that “history” div.

const browsingHistory = [];

I’m going to be parsing web pages later on, so I’m going to need a DOM parser. I give it the imaginative name of …parser.

const parser = new DOMParser();

Time to open up my “pages” cache. This is the first await statement. When the cache is opened, this promise will resolve and I’ll have access to this cache using the variable …cache (again with the imaginative naming).

const cache = await caches.open('pages');

Now I get the keys of the cache—that’s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I’ll have a variable that’s got a list of all those pages. You’ll never guess what I’m calling the variable that stores the keys of the cache. That’s right …keys!

const keys = await cache.keys();

Time to get looping. I’m getting each request in the list of keys using a for/of loop:

for (const request of keys) {
...
}

Inside the loop, I pull the page out of the cache using the match() method of the Cache API. I’ll store what I get back in a variable called response. As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.

const response = await cache.match(request);

I’m not interested in the headers of the response. I’m specifically looking for the HTML itself. I can get at that using the text() method. Again, it’s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I’ll have a variable called html that contains the body of the response.

const html = await response.text();

Now I can use that DOM parser I created earlier. I’ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn’t asynchronous so there’s no need for the await keyword.

const dom = parser.parseFromString(html, 'text/html');

Now I’ve got a DOM, which I have creatively stored in a variable called …dom.

I can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value “h-entry”:

if (dom.querySelector('.h-entry h1.p-name')) {
...
}

In this particular case, I’m also checking to see if the h1 element of the page is the title of the h-entry. That’s so that index pages (like my home page) won’t get past this if statement.

Inside the if statement, I’m going to store the data I retrieve from the DOM. I’ll save the data into an object called …data!

const data = new Object;

Well, the first piece of data isn’t actually in the markup: it’s the URL of the page. I can get that from the request variable in my for loop.

data.url = request.url;

I’m going to store the timestamp for this h-entry. I can get that from the datetime attribute of the time element marked up with a class of dt-published.

data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));

While I’m at it, I’m going to grab the human-readable date from the innerText property of that same time.dt-published element.

data.published = dom.querySelector('.h-entry .dt-published').innerText;

The title of the h-entry is in the innerText of the element with a class of p-name.

data.title = dom.querySelector('.h-entry .p-name').innerText;

At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don’t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I’ll grab that now.

data.description = dom.querySelector('meta[name="description"]').getAttribute('content');

Alright. I’ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I’ll stick all of that data into my browsingHistory array.

browsingHistory.push(data);

My if statement and my for/of loop are finished at this point. Here’s how the whole loop looks:

for (const request of keys) {
  const response = await cache.match(request);
  const html = await response.text();
  const dom = parser.parseFromString(html, 'text/html');
  if (dom.querySelector('.h-entry h1.p-name')) {
    const data = new Object;
    data.url = request.url;
    data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
    data.published = dom.querySelector('.h-entry .dt-published').innerText;
    data.title = dom.querySelector('.h-entry .p-name').innerText;
    data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
    browsingHistory.push(data);
  }
}

That’s the data collection part of the code. Now I’m going to take all that yummy information and output it onto the page.

First of all, I want to make sure that the browsingHistory array isn’t empty. There’s no point going any further if it is.

if (browsingHistory.length) {
...
}

Within this if statement, I can do what I want with the data I’ve put into the browsingHistory array.

I’m going to arrange the data by date published. I’m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here’s how I sort the browsingHistory array according to the timestamp property of each item within it:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I’m going to concatenate some strings. This is the string of HTML text that will eventually be put into the “history” div. I’m storing the markup in a string called …markup (my imagination knows no bounds).

let markup = '<p>But you still have something to read:</p>';

I’m going to add a chunk of markup for each item of data.

browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

With my markup assembled, I can now insert it into the “history” part of my offline page. I’m using the handy insertAdjacentHTML() method to do this.

document.getElementById('history').insertAdjacentHTML('beforeend', markup);

Here’s what my finished JavaScript looks like:

<script>
(async function listPages() {
  const browsingHistory = [];
  const parser = new DOMParser();
  const cache = await caches.open('pages');
  const keys = await cache.keys();
  for (const request of keys) {
    const response = await cache.match(request);
    const html = await response.text();
    const dom = parser.parseFromString(html, 'text/html');
    if (dom.querySelector('.h-entry h1.p-name')) {
      const data = new Object;
      data.url = request.url;
      data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
      data.published = dom.querySelector('.h-entry .dt-published').innerText;
      data.title = dom.querySelector('.h-entry .p-name').innerText;
      data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
      browsingHistory.push(data);
    }
  }
  if (browsingHistory.length) {
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    let markup = '<p>But you still have something to read:</p>';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    document.getElementById('history').insertAdjacentHTML('beforeend', markup);
  }
})();
</script>

I’m pretty happy with that. It’s not too long but it’s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.

If you’ve got an offline strategy for your website, and you’re using h-entry to mark up your content, feel free to use that code.

If you don’t have an offline strategy for your website, there’s a book for that.

Register for Indie Web Camp Brighton 2019

Back at the end of May, I wrote:

We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

I hope you’ve got those dates marked in your calendar. Now it’s time for the next step: register for the event. Registration is free, but we need to know numbers in advance, so if you’re planning to come, please grab yourself a ticket there.

It’s going to be a lot of fun!

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

Check out the previous Brighton Indie Web Camps:

See you at 68 Middle Street on Saturday, October 19th for Indie Web Camp Brighton 2019!

Discrete replies

Earlier this year, at Indie Web Camp Düsseldorf, I got replies working on my own site. That is to say, I can host a reply on my site to something on another site.

The classic example is Twitter. In fact, if you look at all my replies, most of them are responding to tweets (I also syndicate these replies to Twitter so they show up there just like regular tweet replies).

I’m really, really glad I got replies working. I’ve been using this functionality quite a bit, and it feels really good to own my content this way.

At the time, I wrote:

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

I decided not to include them on my home page feed after all. You’ll still see them if you go to the notes section of my site, but I decided that they were overwhelming my home page a bit. They also don’t show up in my RSS feed.

I’m really happy that I’m hosting my replies, and that I’ve got URLs for all of them, but I don’t think I want to give them the same priority as blog posts, links, and regular notes.

Indie web events in Brighton

Homebrew Website Club is a regular gathering of people getting together to tinker on their own websites. It’s a play on the original Homebrew Computer Club from the ’70s. It shares a similar spirit of sharing and collaboration.

Homebrew Website Clubs happen at various locations: London, San Francisco, Portland, Nuremberg, and more. They’re usually on every second Wednesday.

I started running Homebrew Website Club Brighton a while back. I tried the “every second Wednesday” thing, but it was tricky to make that work. People found it hard to keep track of which Wednesdays were Homebrew days and which weren’t. And if you missed one, then it would potentially be weeks between attending.

So I’ve made it a weekly gathering. On Thursdays. That’s mostly because Thursdays work for me: that’s one of the evenings when Jessica has her ballet class, so it’s the perfect time for me to spend a while in the company of fellow website owners.

If you’re in Brighton and you have your own website (or you want to have your own website), you should come along. It’s every Thursday from 6pm to 7:30pm ‘round at the Clearleft studio on 68 Middle Street. Add it to your calendar.

There might be a Thursday when I’m not around, but it’s highly likely that Homebrew Website Club Brighton will happen anyway because either Trys, Benjamin or Cassie will be here.

(I’m at Homebrew Website Club Brighton right now, writing this. Remy is here too, working on some very cool webmention stuff.)

There’s something else you should add to your calendar. We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

It’s been a while since we’ve had an Indie Web Camp in Brighton. You can catch up on the Brighton Indie Web Camps we had in 2014, 2015, and 2016. Since then I’ve been to Indie Web Camps in Berlin, Nuremberg, and Düsseldorf, but it’s going to be really nice to bring it back home.


The event will be free to attend, but I’ll set up an official ticket page on Ti.to to keep track of who’s coming. I’ll let you know when that’s up and ready. In the meantime, you can register your interest in attending on the 2019 Indie Webcamp Brighton page on the Indie Web wiki.

2018 in numbers

I posted to adactio.com 1,387 times in 2018.

In amongst those notes were:

In my blog posts, the top tags were:

  1. frontend and development (42 posts)
  2. serviceworkers (27 posts)
  3. design (20 posts)
  4. writing and publishing (19 posts)
  5. javascript (18 posts)

In my links, the top tags were:

  1. development (305 links)
  2. frontend (289 links)
  3. design (178 links)
  4. css (110 links)
  5. javascript (106 links)

When I wasn’t updating this site:

But these are just numbers. To get some real end-of-year thoughts, read posts by Remy, Andy, Ana, or Bill Gates.

Words I wrote in 2018

I wrote just shy of a hundred blog posts in 2018. That’s an increase from 2017. I’m happy about that.

Here are some posts that turned out okay…

A lot of my writing in 2018 was on technical topics—front-end development, service workers, and so on—but I should really make more of an effort to write about a wider range of topics. I always like it when Zeldman writes about his glamorous life. Maybe in 2019 I’ll spend more time letting you know what I had for lunch.

I really enjoy writing words on this website. If I go too long between blog posts, I start to feel antsy. The only relief is to move my fingers up and down on the keyboard and publish something. Sounds like a bit of an addiction, doesn’t it? Well, as habits go, this is probably one of my healthier ones.

Thanks for reading my words in 2018. I didn’t write them for you—I wrote them for me—but it’s always nice when they resonate with others. I’ll keep on writing my brains out in 2019.