Tags: indieweb

Wednesday, October 16th, 2019

IndieWeb Link Sharing | Max Böck - Frontend Web Developer

Max describes how he does bookmarking on his own site—he’s got a bookmarklet for sharing links, like I do. But he goes further with a smart use of the “share target” section in his web app manifest, as described by Aaron.
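If you haven’t seen it in action before, the share target is declared in the web app manifest itself. A minimal sketch (the action URL and parameter names here are illustrative, not necessarily Max’s actual setup) might look like this:

{
  "name": "My site",
  "share_target": {
    "action": "/share/",
    "method": "GET",
    "params": {
      "title": "title",
      "text": "text",
      "url": "url"
    }
  }
}

When someone shares a link to the installed site, the browser opens the action URL with those values in the query string.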

By the way, Max’s upcoming talk at the Web Clerks conference in Vienna sounds like it’s going to be unmissable!

Tuesday, October 15th, 2019

Manton Reece - Saying goodbye to Facebook cross-posting

Facebook and even Instagram are at odds with the principles of the open web.

Related: Aaron is playing whack-a-mole with Instagram because he provides a service to let users export their own photographs to their own websites.

Monday, October 14th, 2019

Something for the weekend

Your weekends are valuable. Spend them wisely. I have some suggestions on how you might spend next weekend, October 19th and 20th, depending on where you are in the world.

If you’re in the Bay Area, or anywhere near San Francisco, I highly recommend that you go to Science Hack Day—two days of science, hacking, and fun. This will be the last one in San Francisco, so don’t miss your chance.

If you’re in the south of England, or anywhere near Brighton, come along to Indie Web Camp. Saturday will feature discussions on owning your data. Sunday will be a day of doing. I’ve written about previous Indie Web Camps before, and I really can’t recommend it highly enough!

Do me a favour and register for a spot—it’s free—so I’ve got some idea of numbers. Looking forward to seeing you there!

Ne vous laissez plus déPOSSEder de vos contenus !

I saw Nicolas give this great talk at Paris Web on site deaths, the indie web, and publishing on your own site. That talk was in French, but these slides are (mostly) in English—I was able to follow along surprisingly easily!

Sunday, October 6th, 2019

Dark mode

I had a very productive time at Indie Web Camp Amsterdam. The format really lends itself to getting the most out of a weekend—one day of discussions followed by one day of hands-on making and doing. You should definitely come along to Indie Web Camp Brighton on October 19th and 20th to experience it for yourself.

By the end of the “doing” day, I had something fun to demo—a dark mode for my website.

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Applying the dark mode styles is pretty straightforward in theory. You put the styles inside this media query:

@media (prefers-color-scheme: dark) {
...
}

Rather than over-riding every instance of a colour in my style sheet, I decided I’d do a little bit of refactoring first and switch to using CSS custom properties (or variables, if you will).

:root {
  --background-color: #fff;
  --text-color: #333;
  --link-color: #b52;
}
body {
  background-color: var(--background-color);
  color: var(--text-color);
}
a {
  color: var(--link-color);
}

Then I can over-ride the custom properties without having to touch the already-declared styles:

@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #111416;
    --text-color: #ccc;
    --link-color: #f96;
  }
}

All in all, I have about a dozen custom properties for colours—variations for text, backgrounds, and interface elements like links and buttons.

By using custom properties and the prefers-color-scheme media query, I was 90% of the way there. But the devil is in the details.

I have SVGs of sparklines on my homepage. The SVG has a hard-coded colour value in the stroke attribute of the path element that draws the sparkline. Fortunately, this can be over-ridden in the style sheet:

svg.activity-sparkline path {
  stroke: var(--text-color);
}
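For context, the markup being overridden is something like this (a simplified sketch with made-up numbers):

<svg class="activity-sparkline" viewBox="0 0 200 20">
  <path d="M0,10 L50,5 L100,15 L150,8 L200,10" stroke="#333" fill="none"/>
</svg>

Because stroke is a presentation attribute, any matching rule in the style sheet wins over it.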

The real challenge came with the images I use in the headers of my pages. They’re JPEGs with white corners on one side and white gradients on the other.

[Image: header images]

I could make them PNGs to get transparency, but the file size would shoot up—they’re photographic images (with a little bit of scan-line treatment) so JPEGs (or WebPs) are the better format. Then I realised I could use CSS to recreate the two effects:

  1. For the cut-out triangle in the top corner, there’s clip-path.
  2. For the gradient, there’s …gradients!

background-image: linear-gradient(
  to right,
  transparent 50%,
  var(--background-color) 100%
);

Oh, and I noticed that when I applied the clip-path for the corners, it had no effect in Safari. It turns out that after half a decade of support, it still only exists with the -webkit- prefix. That’s just ridiculous. At this point we should be burning vendor prefixes with fire. I can’t believe that Apple still ships standardised CSS properties that only work with a prefix.
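For what it’s worth, the corner effect itself only takes a few lines. Here’s a sketch (the percentages and the class name are made up), with the declaration doubled up for Safari:

.header-image {
  /* slice off the top right corner; the exact points are illustrative */
  -webkit-clip-path: polygon(0 0, 90% 0, 100% 15%, 100% 100%, 0 100%);
  clip-path: polygon(0 0, 90% 0, 100% 15%, 100% 100%, 0 100%);
}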

In order to apply the CSS clip-path and gradient, I needed to save out the images again, this time without the effects baked in. I found the original Photoshop file I used to export the images. But I don’t have a copy of Photoshop any more. I haven’t had a copy of Photoshop since Adobe switched to their Mafia model of pricing. A quick bit of searching turned up Photopea, which is pretty much an entire recreation of Photoshop in the browser. I was able to open my old PSD file and re-export my images.

[Images: LEGO clone trooper, Brighton bandstand, scaffolding, Tokyo, Florence]

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop. I could probably do those raster scan lines with CSS if I were smart enough.
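Just to sketch the idea (this is not what my site actually does), a repeating gradient overlay can fake scan lines:

/* hypothetical scan-line overlay; the selector and sizes are made up */
.header-image-wrapper {
  position: relative;
}
.header-image-wrapper::after {
  content: '';
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  background-image: repeating-linear-gradient(
    to bottom,
    transparent 0,
    transparent 2px,
    rgba(0, 0, 0, 0.1) 2px,
    rgba(0, 0, 0, 0.1) 3px
  );
}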

[Image: dark mode]

This is what I demo’d at the end of Indie Web Camp Amsterdam, and I was pleased with the results. But fate had an extra bit of good timing in store for me.

The very next day at the View Source conference, Melanie Richards gave a fantastic talk called The Tailored Web: Effectively Honoring Visual Preferences (seriously, conference organisers, you want this talk on your line-up). It was packed with great insights and advice on implementing dark mode, like this little gem for adjusting images:

@media (prefers-color-scheme: dark) {
  img {
    filter: brightness(.8) contrast(1.2);
  }
}

Melanie also pointed out that you can indicate the presence of dark mode styles to browsers, although the mechanism is yet to shake out. You can do it in CSS:

:root {
  color-scheme: light dark;
}

But you can also do it in HTML:

<meta name="supported-color-schemes" content="light dark">

That allows browsers to swap out replaced content: interface elements like form fields and dropdowns.

Oh, and one other addition I made after the fact was swapping out map imagery by using the picture element to point to darker map tiles:

<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://api.mapbox.com/styles/v1/mapbox/dark-v10/static...">
  <img src="https://api.mapbox.com/styles/v1/mapbox/outdoors-v10/static..." alt="map">
</picture>

[Images: light map, dark map]

So now I’ve got a dark mode for my website. Admittedly, it’s for just one of the eight style sheets. I’ve decided that, while I’ll update my default styles at every opportunity, I’m going to preserve the other skins as the historical museum pieces they are.

If you’re on the latest version of iOS, go ahead and toggle the light and dark options in your system preferences to flip between this site’s colour schemes.

Thursday, October 3rd, 2019

Travel talk

It’s been a busy two weeks of travelling and speaking. Last week I spoke at Finch Conf in Edinburgh, Code Motion in Madrid, and Generate CSS in London. This week I was at Indie Web Camp, View Source, and Fronteers, all in Amsterdam.

The Edinburgh-Madrid-London whirlwind wasn’t ideal. I gave the opening talk at Finch Conf, then immediately jumped in a taxi to get to the airport to fly to Madrid, so I missed all the excellent talks. I had FOMO for a conference I actually spoke at.

I did get to spend some time at Code Motion in Madrid, but that was a waste of time. It was one of those multi-track events where the trade show floor is prioritised over the talks (and the speakers don’t get paid). I gave my talk to a mostly empty room—the classic multi-track experience. On the plus side, I had a wonderful time with Jessica exploring Madrid’s many tapas delights. The food and drink made up for the sub-par conference.

I flew back from Madrid to the UK, and immediately went straight to London to deliver the closing talk of Generate CSS. So once again, I didn’t get to see any of the other talks. That’s a real shame—it sounds like they were all excellent.

The day after Generate though, I took the Eurostar to Amsterdam. That’s where I’ve been ever since. There were just as many events as in the previous week, but because they were all in Amsterdam, I could savour them properly, instead of spending half my time travelling.

Indie Web Camp Amsterdam was excellent, although I missed out on the afternoon discussions on the first day because I popped over to the Mozilla Tech Speakers event happening at the same time. I was there to offer feedback on lightning talks. I really, really enjoyed it.

I’d really like to do more of this kind of thing. There aren’t many activities I feel qualified to give advice on, but public speaking is an exception. I’ve got plenty of experience that I’m eager to share with up-and-coming speakers. Also, I got to see some really great lightning talks!

Then it was time for View Source. There was a mix of talks, panels, and breakout conversation corners. I saw some fantastic talks by people I hadn’t seen speak before: Melanie Richards, Ali Spittel, Sharell Bryant, and Tejas Kumar. I gave the closing keynote, which was warmly received—that’s always very gratifying.

After one day of rest, it was time for Fronteers. This was where Remy and I gave the joint talk we’d been working on.

Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.

I’m happy to say that it went off without a hitch. Remy definitely had the tougher task—he did a live demo. Needless to say, he did it flawlessly. It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

I’ve got some more speaking engagements ahead of me. Most of them are in Europe so I’m going to do my utmost to travel to them by train. Flying is usually more convenient but it’s terrible for my carbon footprint. I’m feeling pretty guilty about that Madrid trip; I need to make amends.

I’ll be travelling to France next week for Paris Web. Taking the Eurostar is a no-brainer for that one. Straight after that Jessica and I will be going to Frankfurt for the book fair. Taking the train from Paris to Frankfurt will be nice and straightforward.

I’ll be back in Brighton for Indie Web Camp on the weekend of October 19th and 20th—you should come!—and then I’ll be heading off to Antwerp for Full Stack Fest. Anywhere in Belgium is easily reachable by train so that’ll be another Eurostar journey.

After that, it gets a little trickier. I’ll be going to Berlin for Beyond Tellerrand but I’m not sure I can make it work by train. Same goes for Web Clerks in Vienna. Cities that far east are tough to get to by train in a reasonable amount of time (although I realise that, compared to many others, I have the luxury of spending time travelling by train).

Then there are the places that I can only get to by plane. There’s the United States. I’ll be speaking at An Event Apart in San Francisco in December. A flight is unavoidable. Last time we went to the States, Jessica and I travelled by ocean liner. But that isn’t any better for the environment, given the low-grade fuel burned by ships.

And then there’s Ireland. I make trips back there to see my mother, but there’s no alternative to flying or taking a ferry—neither is ideal for the environment. At least I can offset the carbon from my flights—the travel equivalent of putting coins in the swear jar.

Don’t get me wrong—I’m not moaning about the amount of travel involved in going to conferences and workshops. It’s fantastic that I get to go to new and interesting places. That’s something I hope I never take for granted. But I can’t ignore the environmental damage I’m doing. I’ll be making more of an effort to travel by train to Europe’s many excellent web events. While I’m at it, I can ask Paul for his trainspotter expertise.

Wednesday, October 2nd, 2019

Brighton Bloggers 2019 meet-up – orbific.com

Some reminiscing at a recent Homebrew Website Club prompted James to organise a Brighton bloggers meetup …ten years on from the last one!

Mark your calendar: October 21st.

While you’re marking your calendar, be sure to put in the dates for Indie Web Camp Brighton: October 19th and 20th. It would be lovely to see some Brighton bloggers there!

Saturday, September 21st, 2019

Going offline with microformats

For the offline page on my website, I’ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page—title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.

It all worked fine, but as soon as I read Remy’s post about the forehead-slappingly brilliant technique he’s using, I knew I’d be switching my code over. Instead of using localStorage—or any other browser API—to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you’ve stored, and get at whatever information you need:

I realised I didn’t need to store anything. HTML is the API.

Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency—localStorage—and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.

Many years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata—data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it’s invisible—out of sight and out of mind.

In fact, that’s always been at the heart of one of the core principles behind microformats. Instead of duplicating information—once as data and again as metadata—repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.

So if you have a person’s contact details on a web page, rather than repeating that information somewhere else—in the head of the document, say—you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that’s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.
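A minimal example might look something like this (the person is fictional, but the class names are genuine h-card properties):

<div class="h-card">
  <a class="p-name u-url" href="https://example.com/">Jane Doe</a>
  <a class="u-email" href="mailto:jane@example.com">jane@example.com</a>
</div>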

Here on my website, I’ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say “this is the title”, “this is the content”, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else’s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.
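Stripped right down, a post marked up with h-entry looks something like this (a simplified sketch, not my actual markup):

<article class="h-entry">
  <h1 class="p-name">Going offline with microformats</h1>
  <time class="dt-published" datetime="2019-09-21T12:00:00+01:00">Saturday, September 21st, 2019</time>
  <div class="e-content">
    <p>The content of the post goes here.</p>
  </div>
</article>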

When I read Remy’s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn’t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.

The markup for my offline page looks like this:

<h1>Offline</h1>
<p>Sorry. It looks like the network connection isn’t working right now.</p>
<div id="history">
</div>

I’ll populate that “history” div with information from a cache called “pages” that I’ve created using the Cache API in my service worker.
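In case you’re wondering how that cache gets filled in the first place: the service worker stores a copy of each HTML page as it’s fetched. A minimal sketch of that part (not my actual service worker code) might look like this:

// In the service worker: keep a copy of every page in a cache called "pages".
addEventListener('fetch', event => {
  const request = event.request;
  // Only bother with requests for HTML pages.
  if ((request.headers.get('Accept') || '').includes('text/html')) {
    event.respondWith(
      fetch(request)
      .then(response => {
        // Stash a clone of the response for offline use.
        const copy = response.clone();
        caches.open('pages')
        .then(cache => cache.put(request, copy));
        return response;
      })
      .catch(() => caches.match(request))
    );
  }
});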

I’m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. “Open this cache, then get the keys of that cache, then loop through the pages, then…” All of those thens would lead to some serious indentation without async/await.

Async functions don’t have to be named, but a name makes the code easier to follow, so I’m calling this one listPages, just like Remy is doing. I’m making the listPages function execute immediately:

(async function listPages() {
...
})();

Now for the code to go inside that immediately-invoked function.

I create an array called browsingHistory that I’ll populate with the data I’ll use for that “history” div.

const browsingHistory = [];

I’m going to be parsing web pages later on, so I’m going to need a DOM parser. I give it the imaginative name of …parser.

const parser = new DOMParser();

Time to open up my “pages” cache. This is the first await statement. When the cache is opened, this promise will resolve and I’ll have access to this cache using the variable …cache (again with the imaginative naming).

const cache = await caches.open('pages');

Now I get the keys of the cache—that’s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I’ll have a variable that’s got a list of all those pages. You’ll never guess what I’m calling the variable that stores the keys of the cache. That’s right …keys!

const keys = await cache.keys();

Time to get looping. I’m getting each request in the list of keys using a for/of loop:

for (const request of keys) {
...
}

Inside the loop, I pull the page out of the cache using the match() method of the Cache API. I’ll store what I get back in a variable called response. As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.

const response = await cache.match(request);

I’m not interested in the headers of the response. I’m specifically looking for the HTML itself. I can get at that using the text() method. Again, it’s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I’ll have a variable called html that contains the body of the response.

const html = await response.text();

Now I can use that DOM parser I created earlier. I’ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn’t asynchronous so there’s no need for the await keyword.

const dom = parser.parseFromString(html, 'text/html');

Now I’ve got a DOM, which I have creatively stored in a variable called …dom.

I can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value “h-entry”:

if (dom.querySelector('.h-entry h1.p-name')) {
...
}

In this particular case, I’m also checking to see if the h1 element of the page is the title of the h-entry. That’s so that index pages (like my home page) won’t get past this if statement.

Inside the if statement, I’m going to store the data I retrieve from the DOM. I’ll save the data into an object called …data!

const data = new Object;

Well, the first piece of data isn’t actually in the markup: it’s the URL of the page. I can get that from the request variable in my for loop.

data.url = request.url;

I’m going to store the timestamp for this h-entry. I can get that from the datetime attribute of the time element marked up with a class of dt-published.

data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));

While I’m at it, I’m going to grab the human-readable date from the innerText property of that same time.dt-published element.

data.published = dom.querySelector('.h-entry .dt-published').innerText;

The title of the h-entry is in the innerText of the element with a class of p-name.

data.title = dom.querySelector('.h-entry .p-name').innerText;

At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don’t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I’ll grab that now.

data.description = dom.querySelector('meta[name="description"]').getAttribute('content');

Alright. I’ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I’ll stick all of that data into my browsingHistory array.

browsingHistory.push(data);

My if statement and my for/of loop are finished at this point. Here’s how the whole loop looks:

for (const request of keys) {
  const response = await cache.match(request);
  const html = await response.text();
  const dom = parser.parseFromString(html, 'text/html');
  if (dom.querySelector('.h-entry h1.p-name')) {
    const data = new Object;
    data.url = request.url;
    data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
    data.published = dom.querySelector('.h-entry .dt-published').innerText;
    data.title = dom.querySelector('.h-entry .p-name').innerText;
    data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
    browsingHistory.push(data);
  }
}

That’s the data collection part of the code. Now I’m going to take all that yummy information and output it onto the page.

First of all, I want to make sure that the browsingHistory array isn’t empty. There’s no point going any further if it is.

if (browsingHistory.length) {
...
}

Within this if statement, I can do what I want with the data I’ve put into the browsingHistory array.

I’m going to arrange the data by date published. I’m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here’s how I sort the browsingHistory array according to the timestamp property of each item within it:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I’m going to concatenate some strings. This is the string of HTML text that will eventually be put into the “history” div. I’m storing the markup in a string called …markup (my imagination knows no bounds).

let markup = '<p>But you still have something to read:</p>';

I’m going to add a chunk of markup for each item of data.

browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

With my markup assembled, I can now insert it into the “history” part of my offline page. I’m using the handy insertAdjacentHTML() method to do this.

document.getElementById('history').insertAdjacentHTML('beforeend', markup);

Here’s what my finished JavaScript looks like:

<script>
(async function listPages() {
  const browsingHistory = [];
  const parser = new DOMParser();
  const cache = await caches.open('pages');
  const keys = await cache.keys();
  for (const request of keys) {
    const response = await cache.match(request);
    const html = await response.text();
    const dom = parser.parseFromString(html, 'text/html');
    if (dom.querySelector('.h-entry h1.p-name')) {
      const data = new Object;
      data.url = request.url;
      data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
      data.published = dom.querySelector('.h-entry .dt-published').innerText;
      data.title = dom.querySelector('.h-entry .p-name').innerText;
      data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
      browsingHistory.push(data);
    }
  }
  if (browsingHistory.length) {
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    let markup = '<p>But you still have something to read:</p>';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    document.getElementById('history').insertAdjacentHTML('beforeend', markup);
  }
})();
</script>

I’m pretty happy with that. It’s not too long but it’s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.

If you’ve got an offline strategy for your website, and you’re using h-entry to mark up your content, feel free to use that code.

If you don’t have an offline strategy for your website, there’s a book for that.

Wednesday, September 18th, 2019

A love letter to my website - DESK Magazine

We choose whether our work stays alive on the internet. As long as we keep our hosting active, our site remains online. Compare that to social media platforms that go public one day and bankrupt the next, shutting down their app and your content along with it.

Your content is yours.

But the real truth is that as long as we’re putting our work in someone else’s hands, we forfeit our ownership over it. When we create our own website, we own it – at least to the extent that the internet, beautiful in its amorphous existence, can be owned.

Monday, September 16th, 2019

I’m Taking Ownership of My Tweets—zachleat.com

I fully expect my personal website to outlive Twitter and as such have decided to take full ownership of the content I’ve posted there. In true IndieWeb fashion, I’m taking ownership of my data.

Thursday, September 5th, 2019

Why I Have a Website and You Should Too · Jamie Tanna | Software (Quality) Engineer

I know a number of people who blog as a way to express themselves, for expression’s sake, rather than for anyone else wanting to read it. It’s a great way to have a place to “scream into the void” and share your thoughts.

Sunday, September 1st, 2019

Why These Social Networks Failed So Badly

Ignore the clickbaity headline and have a read of Whitney Kimball’s obituaries of Friendster, MySpace, Bebo, OpenSocial, ConnectU, Tribe.net, Path, Yik Yak, Ello, Orkut, Google+, and Vine.

I’m sure your content on Facebook, Twitter, and Instagram is perfectly safe.

Thursday, August 29th, 2019

comment parade

A way for you to comment (anonymously, if you wish) on any post that accepts webmentions. So you can use this to respond to posts on adactio.com if you want.

Saturday, August 24th, 2019

Consume less, create more

Editing is hard because you realize how bad you are. But editing is easy because we’re all better at criticizing than we are at creating.

Relatable:

My essay was garbage. But it was my garbage.

This essay is most definitely not garbage. I like it very much.

Thursday, August 22nd, 2019

Why We All Need a Personal Website – Plus Practical Tips for How to Build One - Adobe 99U

The best time to make a personal website is 20 years ago. The second best time to make a personal website is now.

Chris offers some illustrated advice:

  • Define the purpose of your site
  • Organize your content
  • Look for inspiration
  • Own your own domain name
  • Build your website

Friday, August 9th, 2019

Register for Indie Web Camp Brighton 2019

Back at the end of May, I wrote:

We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

I hope you’ve got those dates marked in your calendar. Now it’s time for the next step: register for the event. Registration is free, but we need to know numbers in advance, so if you’re planning to come, please grab yourself a ticket there.

It’s going to be a lot of fun!

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

Check out the previous Brighton Indie Web Camps.

See you at 68 Middle Street on Saturday, October 19th for Indie Web Camp Brighton 2019!

Thursday, August 8th, 2019

Discrete replies

Earlier this year, at Indie Web Camp Düsseldorf, I got replies working on my own site. That is to say, I can host a reply on my site to something on another site.

The classic example is Twitter. In fact, if you look at all my replies, most of them are responding to tweets (I also syndicate these replies to Twitter so they show up there just like regular tweet replies).

I’m really, really glad I got replies working. I’ve been using this functionality quite a bit, and it feels really good to own my content this way.

At the time, I wrote:

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

I decided not to include them on my home page feed after all. You’ll still see them if you go to the notes section of my site, but I decided that they were overwhelming my home page a bit. They also don’t show up in my RSS feed.

I’m really happy that I’m hosting my replies, and that I’ve got URLs for all of them, but I don’t think I want to give them the same priority as blog posts, links, and regular notes.

Friday, July 19th, 2019

Simon Collison | Timeline

I’ve shaped this timeline over five months. It might look simple, but it most definitely was not. I liken it to chipping away at a block of marble, or the slow process of evolving a painting, or constructing a poem; endless edits, questions, doubling back, doubts. It was so good to have something meaty to get stuck into, but sometimes it was awful, and many times I considered throwing it away. Overall it was challenging, fun, and worth the effort.

Simon describes the process of curating the lovely timeline on his personal homepage.

My timeline is just like me, and just like my life: unfinished, and far from perfect.

Monday, July 15th, 2019

How to run a small social network site for your friends

This is a great how-to from Darius Kazemi!

The main reason to run a small social network site is that you can create an online environment tailored to the needs of your community in a way that a big corporation like Facebook or Twitter never could. Yes, you can always start a Facebook Group for your community and moderate that how you like, but only within certain bounds set by Facebook. If you (or your community) run the whole site, then you are ultimately the boss of what goes on. It is harder work than letting Facebook or Twitter or Slack or Basecamp or whoever else take care of everything, but I believe it’s worth it.

There’s a lot of good advice for community management and the whole thing is a lesson in writing excellent documentation.

Tuesday, July 2nd, 2019

Bridgy for Webmentions with Brotli—zachleat.com

This is good to know! Because of a bug in Google App Engine, Brid.gy won’t work for sites using Brotli compression on HTML.