Tags: mapping

Webmentions at Indie Web Camp Berlin

I was in Berlin for most of last week, and every day was packed with activity.

By the time I got back to Brighton, my brain was full …just in time for FF Conf.

All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.

Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.

What I like to do on the second day is try to complete two tasks: one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.

In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).

Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.

The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
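Here’s a minimal sketch of that logic in TypeScript. The storage helpers are hypothetical stand-ins for whatever persistence layer an endpoint actually uses; the status-code handling is the part the spec pins down:

```typescript
// Sketch of deletion handling for a webmention endpoint.
// deleteWebmention and verifyAndStore are hypothetical helpers.

async function processWebmention(source: string, target: string): Promise<void> {
  const response = await fetch(source, { redirect: "follow" });

  if (response.status === 410) {
    // 410 Gone: the source has been deliberately removed, so any
    // stored webmentions for this source/target pair go too.
    await deleteWebmention(source, target);
    return;
  }

  if (!response.ok) {
    // A 404 (or any other error) might be temporary, so it must not
    // trigger a deletion. Leave the stored webmention alone.
    return;
  }

  // Otherwise verify the link and store (or update) the webmention.
  const html = await response.text();
  await verifyAndStore(source, target, html);
}

// Placeholder implementations, just to keep the sketch self-contained.
async function deleteWebmention(source: string, target: string): Promise<void> {}
async function verifyAndStore(source: string, target: string, html: string): Promise<void> {}
```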

Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”

For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.

I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.

As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.
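Foiling it comes down to re-verification: every time a webmention arrives from a source URL, re-fetch that URL and check whether it still links to the target. A naive sketch (a real endpoint would parse the HTML and compare normalised href values rather than doing a substring check):

```typescript
// Re-verify that a source document still links to the target. If it
// doesn't, any stored webmention for this pair should be deleted.

async function sourceStillLinksToTarget(source: string, target: string): Promise<boolean> {
  const response = await fetch(source, { redirect: "follow" });
  if (!response.ok) {
    return false;
  }
  const html = await response.text();
  // Naive substring check, for illustration only.
  return html.includes(target);
}
```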

Still, I wanted to foil this particular situation, so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.

Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.

We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.

Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:

  1. By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
  2. If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.

That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.

Just sayin’.

Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:

  • Does it include a link to a licence?
  • Is there a restrictive robots.txt file?
  • Are there meta declarations that say noindex?

Each one of these could help to infer whether or not I should be publishing a webmention. I quickly realised that what we’re talking about here is an algorithm.
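To make that concrete, here’s one possible shape for such an algorithm, sketched in TypeScript. Every signal and decision here is an assumption of mine, not something the discussion settled on:

```typescript
// Signals gathered from the linking post and its site. How each one is
// detected (parsing robots.txt, meta tags, rel="license") is left out.

interface WebmentionSignals {
  hasLicenceLink: boolean;   // e.g. a rel="license" link in the source
  robotsDisallowed: boolean; // a restrictive robots.txt rule
  hasNoindexMeta: boolean;   // <meta name="robots" content="noindex">
}

function shouldPublishWebmention(signals: WebmentionSignals): boolean {
  // Treat each signal as a reason to hold back from republishing.
  if (signals.robotsDisallowed || signals.hasNoindexMeta) {
    return false;
  }
  // A licence link could permit or forbid republication; a fuller
  // implementation would inspect the licence itself. Err on the side
  // of caution and withhold publication for now.
  if (signals.hasLicenceLink) {
    return false;
  }
  return true;
}
```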

Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.

100 words 057

It’s UX London week. That’s always a crazy busy time at Clearleft. But it’s also an opportunity. We have this sneaky tactic of kidnapping a speaker from UX London and making them give a workshop just for us. We did it a few years ago with Dave Gray and we got a fantastic few days of sketching out of it.

This time we grabbed Jeff Patton. He spent this afternoon locked in the auditorium at 68 Middle Street teaching us all about user story mapping. ‘Twas most enlightening and really helped validate some of the stuff we’ve been doing lately.

Wait. They don’t love you like I love you.

There’s been a lot of map-related activity on the BBC recently. The series of documentaries called The Beauty of Maps was all too short.

Meanwhile, Radio 4 ran a ten-part series entitled On The Map. They would have disappeared down the Beeb’s memory hole but Brian did a little bit of digital preservation and passed them on to me. I’ve put them on Huffduffer. You can subscribe to a podcast of all ten episodes or listen to them individually:

  1. The Map Makers
  2. Mapping the Metropolis
  3. Motoring Maps
  4. Social Mapping
  5. The Lie of the Land
  6. World View
  7. Off The Map
  8. Whose Map is it Anyway?
  9. Digital Maps
  10. Maps of the Mind

If you’re in London any time before September 19th, be sure to check out the Magnificent Maps exhibition at The British Library. It’s free and it’s excellent.

Brighton, mapped

Today I travelled from home to work, from work to band practice, from band practice to an educational celebration: OpenStreetMap Brighton 1.0.

Ever since the mapping workshop after dConstruct 2006, Mikel and others have been out and about improving the mapping data for Brighton from the ground up. While a map can never be truly finished—it is, after all, a representation of a changing, evolving place—the data is now remarkably complete.

There’s a natural tendency for us to think in our own domains of experience, so I usually only see the potential for OpenStreetMap data in web applications and mashups. But the launch event showed some wonderful use-cases in the real world: local councils, public transport… these are organisations that would otherwise have to pay very large sums (of taxpayers’ money) to the Ordnance Survey just to display a map.

OpenStreetMap is one of those applications of technology, like Wikipedia or BarCamp, that fills me with hope. On paper, the concepts sound crazy. In reality, they don’t just compete with commercial services, they surpass them.

I really need to get myself a GPS device.

Mashing up with microformats

Back in March, during South by Southwest, Tantek asked me if I’d like to sit in on his microformats panel alongside Chris Messina and Norm! The audio recording of the panel is now available through the conference podcast.

I’ve taken the liberty of having the recording transcribed (using castingWords.com) and I’ve posted a tidied-up version of the transcript to the articles section: Microformats: Evolving the Web. You can listen along through the articles RSS feed, which doubles up as a podcast.

I’ve also posted the transcript on the microformats wiki so that others can edit it if they catch any glaring mistakes in the transcription.

During the panel I talked about Adactio Austin, a fairly trivial use of microformats but one that I’ve been building upon. I’d like to provide some cut’n’paste JavaScript that would allow people to get some added value from using microformats. Supposing you have a bunch of locations marked up in hCard with geotags, you could drop in a script and have a map appear showing those locations.
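As a rough sketch of how that script might work, here’s some TypeScript that gathers coordinates from classic hCard markup with the geo pattern (using present-day DOM APIs, and glossing over some of the abbr-pattern edge cases):

```typescript
// Collect coordinates from hCards that include geo data. Plotting the
// resulting points is left to whichever mapping API gets dropped in.

interface GeoPoint {
  name: string;
  latitude: number;
  longitude: number;
}

function collectGeotaggedHCards(doc: Document): GeoPoint[] {
  const points: GeoPoint[] = [];
  doc.querySelectorAll(".vcard").forEach((card) => {
    // The geo pattern often uses <abbr title="..."> for the machine-
    // readable value, falling back to the element's text content.
    const read = (selector: string): string | null => {
      const el = card.querySelector(selector);
      return el?.getAttribute("title") ?? el?.textContent ?? null;
    };
    const lat = read(".geo .latitude");
    const lon = read(".geo .longitude");
    const name = read(".fn") ?? "Unnamed";
    if (lat && lon) {
      points.push({
        name: name.trim(),
        latitude: parseFloat(lat),
        longitude: parseFloat(lon),
      });
    }
  });
  return points;
}
```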

Perhaps the geotagging won’t even be necessary. Google added a geocoder to their mapping API two weeks ago. The UK, alas, is not yet supported (probably because the Post Office won’t let go of its monopoly that easily… Postman Pat, your money-grabbing days are numbered).

Unfortunately, Google Maps isn’t very suited to the cut’n’paste idea: you have to register a different API key for each domain where you want to use the mapping API.

The Yahoo maps API is less draconian about registration but its lack of detailed UK maps makes it a non-starter for me.

Maybe I should step away from maps and concentrate on events instead. It probably wouldn’t be too hard to write a script to create a calendar based on any hCalendar data found in a document. Perhaps I’ll investigate the calendar widget from Yahoo.
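A quick sketch of that idea, with the same caveats as before (real hCalendar parsing has more cases to handle: the abbr design pattern, time zones, and so on):

```typescript
// Pull events out of any hCalendar markup found in the document.

interface CalendarEvent {
  summary: string;
  start: Date;
}

function collectHCalendarEvents(doc: Document): CalendarEvent[] {
  const events: CalendarEvent[] = [];
  doc.querySelectorAll(".vevent").forEach((el) => {
    const summary = el.querySelector(".summary")?.textContent?.trim();
    const dtstart = el.querySelector(".dtstart");
    // Prefer the machine-readable title attribute (abbr pattern),
    // falling back to the visible text.
    const when = dtstart?.getAttribute("title") ?? dtstart?.textContent;
    if (summary && when) {
      events.push({ summary, start: new Date(when) });
    }
  });
  return events;
}
```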

Ultimately I’d like to create something like Chris’s Mapendar idea. If only there were enough hours in the day.