Since Magnolia went down, taking everyone’s bookmarks with it, I’ve been through a mild cycle.

  1. Denial. “It can’t be that all the data is gone. They’ll recover it.”
  2. Anger. “I want my freaking bookmarks!”
  3. Bargaining. “Isn’t there something I can do? Maybe there’s some API hacking that would help.”
  4. Depression. “Why do I bother contributing to any social websites? Our data is doomed in the end.”
  5. Acceptance. “C’est le Web.”

I also experienced déjà vu at every stage. The only difference between the end of Pownce and the end of Magnolia was that just one of those pieces of plug-pulling was planned. From the perspective of the people running those services, that’s a huge difference. From my perspective as an avid user of both services, it felt the same.

Actually, things turned out okay for my Magnolia data in the end. I was able to recover all my bookmarks …and it wasn’t down to any API hacking either. My bookmarks were saved by two messy, scrappy, plucky little technologies: RSS and microformats.

Google Reader caches RSS feeds aggressively. As long as one person has ever subscribed to the RSS feed of your Magnolia links, you should be able to retrieve your links using Google’s Feed API—though for the life of me, I cannot understand why Google insists on marketing all these APIs as “Ajax” APIs, hiding server-side documentation under “Flash and other Non-Javascript Environments”.
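As a rough sketch of that recovery route: the endpoint below is the old Google AJAX Feed API’s `feed/load` method, and passing `num=-1` asks for every entry Google has cached rather than just the latest few. The helper names are mine, not Google’s:

```python
import urllib.parse

# Base URL of the Google AJAX Feed API's feed/load method.
FEED_API = "https://ajax.googleapis.com/ajax/services/feed/load"

def feed_api_url(feed_url, num=-1):
    """Build a request for Google's cached copy of a feed.
    num=-1 asks for every entry Google has cached."""
    query = urllib.parse.urlencode({"v": "1.0", "q": feed_url, "num": num})
    return FEED_API + "?" + query

def extract_links(api_response):
    """Pull (title, link) pairs out of a decoded JSON response."""
    entries = api_response["responseData"]["feed"]["entries"]
    return [(entry["title"], entry["link"]) for entry in entries]
```

Request the URL, decode the JSON, and `extract_links` hands back your bookmarks as title/URL pairs ready for re-import.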

If that doesn’t work, there’s always the regular HTML as archived by Google and the Internet Archive. Magnolia’s pages were marked up with the xFolk microformat. Using tools like Glenn’s UfXtract, this structured data can be converted into JSON or some other importable format. As Chris put it, microformats are the vinyl of the web.
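Here’s a toy stand-in for a real extractor like UfXtract, assuming the usual xFolk pattern: the bookmark link carries `class="taggedlink"` and the tags carry `rel="tag"`:

```python
import json
from html.parser import HTMLParser

class XFolkExtractor(HTMLParser):
    """Collect xFolk bookmarks: links marked class="taggedlink",
    with any rel="tag" links attached to the most recent bookmark."""

    def __init__(self):
        super().__init__()
        self.bookmarks = []
        self._current = None
        self._capture_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "taggedlink" in attrs.get("class", "").split():
            self._current = {"url": attrs.get("href"), "title": "", "tags": []}
            self.bookmarks.append(self._current)
            self._capture_title = True
        elif tag == "a" and attrs.get("rel") == "tag" and self._current:
            # Take the last path segment of the tag link as the tag name.
            tag_href = attrs.get("href", "").rstrip("/")
            self._current["tags"].append(tag_href.rsplit("/", 1)[-1])

    def handle_data(self, data):
        if self._capture_title:
            self._current["title"] += data

    def handle_endtag(self, tag):
        if tag == "a":
            self._capture_title = False

def xfolk_to_json(html):
    """Scrape an xFolk-marked-up page into importable JSON."""
    parser = XFolkExtractor()
    parser.feed(html)
    return json.dumps(parser.bookmarks, indent=2)
```

Point it at an archived copy of a links page and out comes JSON; it is crude, but when the original database is gone, crude is enough.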

Magnolia’s bookmark recovery page uses a mixture of RSS and xFolk extraction tricks. I was able to recover my bookmarks and import them into Delicious.

But what’s the point of that? Swapping one third-party service for another. Well, believe me, I did a lot of soul searching before putting my links back in another silo. Really, I should be keeping my links here on adactio.com, maybe pinging Delicious or some other social bookmarking site as a back-up …what would Steven Pemberton do?

In the end, I decided to keep using Delicious partly out of convenience, but mostly because I can export my bookmarks quite easily; either through the API or as a hulking great hideous HTML bookmarks file (have you ever looked at the markup of those files that browsers import/export? Yeesh!)

But the mere presence of backup options isn’t enough. After all, Magnolia had a better API than Delicious but that didn’t help when the server came a crashin’. If I’m going to put data into a third-party site, I’m going to have to be self-disciplined and diligent about backing up regularly, just as I do with local data. So I’m getting myself into the habit of running a little PHP script every weekend that will extract all my bookmarks for safekeeping.
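My script is PHP, but the weekly routine amounts to very little. Here’s the same idea sketched in Python against the old Delicious v1 API, whose `posts/all` method returns every bookmark as one XML document over HTTP Basic auth; `backup_filename` is an illustrative helper of mine:

```python
import base64
import datetime
import pathlib
import urllib.request

# The Delicious v1 API: posts/all returns every bookmark as
# a single XML document, authenticated with HTTP Basic auth.
POSTS_ALL = "https://api.del.icio.us/v1/posts/all"

def backup_filename(date=None, directory="."):
    """A dated filename, e.g. delicious-2009-02-07.xml, so that
    weekly runs never overwrite each other."""
    date = date or datetime.date.today()
    return str(pathlib.Path(directory) / f"delicious-{date.isoformat()}.xml")

def backup_bookmarks(username, password, directory="."):
    """Fetch every bookmark and write the XML to a dated file."""
    request = urllib.request.Request(POSTS_ALL)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request.add_header("Authorization", "Basic " + token)
    xml = urllib.request.urlopen(request).read()
    path = backup_filename(directory=directory)
    pathlib.Path(path).write_bytes(xml)
    return path
```

Drop that in a weekly cron job and the silo can crash whenever it likes.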

That’s my links taken care of. What about other data stores?

  • Twitter. This PHP script should take care of backing up all my inane utterances.
  • Flickr. I still have all the photos I’ve uploaded to Flickr so the photos themselves will be saved should anything happen to the site. But it would be a shame to lose the metadata that the pictures have accumulated. I should probably investigate how much metadata is maintained by backup services like QOOP.
  • Dopplr. Well, the data about my trips isn’t really the important part of Dopplr; it’s the ancillary stuff like coincidences that makes it so handy. Still, with a little bit of hacking on the Dopplr API I could probably whip an export script together. Update: Tom writes to tell me that trips can be exported in the form of an .ics file.
  • Again, like Dopplr, I’m not sure how valuable the data is outside the social context of the site. But again, like Dopplr, a bit of hacking on the API might yield a reusable export script.
  • Ffffound. I don’t use it to store anything useful or valuable. That’s what tools like are for. Update: Hacker extraordinaire Paul Mison has whipped up a Ruby script to scrape ffffound and he points me in the direction of ddddownload.
  • Facebook. It could fall off the face of the planet for all I care. I’ve never put any data into the site. I only keep a profile there as a communication hub for otherwise unconnected old friends.
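The Twitter backup above boils down to walking a paged timeline until a page comes back empty. A sketch, assuming the old unauthenticated v1-style `user_timeline` JSON endpoint (the function names are mine; the paging loop itself works with any API that pages this way):

```python
import json
import urllib.request

def fetch_all_pages(fetch_page, max_pages=100):
    """Keep requesting numbered pages until one comes back empty."""
    items = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page)
        if not batch:
            break
        items.extend(batch)
    return items

def twitter_page_fetcher(screen_name):
    """Make a page fetcher for the old v1-style user_timeline endpoint."""
    def fetch(page):
        url = ("http://twitter.com/statuses/user_timeline.json"
               f"?screen_name={screen_name}&count=200&page={page}")
        with urllib.request.urlopen(url) as response:
            return json.load(response)
    return fetch
```

Because the paging logic is separated from the HTTP call, the same `fetch_all_pages` loop can be pointed at any of the services on the list that page their APIs.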

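As for the Dopplr .ics file: once you have it, pulling the trips back out is straightforward. A minimal VEVENT parser, written from the iCalendar format by hand; a proper iCalendar library would be more robust:

```python
def parse_trips(ics_text):
    """Pull (summary, start, end) triples out of the VEVENT
    blocks of an iCalendar export."""
    trips = []
    current = None
    for raw in ics_text.splitlines():
        line = raw.strip()
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT" and current is not None:
            trips.append((current.get("SUMMARY"),
                          current.get("DTSTART"),
                          current.get("DTEND")))
            current = None
        elif current is not None and ":" in line:
            # Properties look like DTSTART;VALUE=DATE:20090213 --
            # keep the name before any ;parameters, value after the colon.
            key, value = line.split(":", 1)
            current[key.split(";", 1)[0]] = value
    return trips
```

That gives you the raw trips even if the coincidences, which were the whole point, can’t come with them.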
As for my own sites—adactio, DOM Scripting, Principia Gastronomica, Salter Cane and of course The Session and Huffduffer—I’ve got local copies which are regularly backed up to an external hard drive and I’m doing database dumps once a week, which probably isn’t often enough. I worry sometimes that I’m not nearly as paranoid as I should be.
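The weekly database dump is just mysqldump piped into a dated, gzipped file. A sketch, assuming mysqldump reads its credentials from ~/.my.cnf and that the helper names are mine:

```python
import datetime
import gzip
import subprocess

def dump_path(name, date=None, directory="."):
    """Dated archive name, e.g. ./thesession-2009-02-07.sql.gz."""
    date = date or datetime.date.today()
    return f"{directory}/{name}-{date.isoformat()}.sql.gz"

def dump_database(name, directory="."):
    """Run mysqldump (credentials assumed in ~/.my.cnf) and gzip
    the output into a dated file, one per weekly run."""
    dump = subprocess.run(["mysqldump", name], check=True,
                          capture_output=True).stdout
    path = dump_path(name, directory=directory)
    with gzip.open(path, "wb") as backup:
        backup.write(dump)
    return path
```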

What happened to Magnolia was a real shame but, to put a positive spin on it, it’s been a learning experience not just for me, but for Larry too.

Have you published a response to this?


Pelle Wessman

I’ve been meaning to at least set up a PESOS flow to, maybe this will finally make me do it 🤭 (Just need to finish my port from Jekyll to Eleventy first)


