Going offline at Indie Web Camp Düsseldorf

I’ve just come back from a ten-day trip to Germany. The trip kicked off with Indie Web Camp Düsseldorf over the course of a weekend.

IndieWebCamp Düsseldorf 2017

Once again the wonderful people at Sipgate hosted us in their beautiful building, and once again Aaron and I helped facilitate the two days.

IndieWebCamp Düsseldorf 2017

Saturday was the BarCamp-like discussion day. Plenty of interesting topics were covered. I led a session on service workers, and that’s also what I decided to work on for the second day—that’s when the talking is done and we get down to making.

I like what Ethan is doing on his offline page. He shows a list of pages that have been cached, but instead of just listing URLs, he shows a title and description for each page.

I’ve already got a separate cache for pages that gets added to as the user browses around my site. I needed to figure out a way to store the metadata for those pages so that I could then display it on the offline page. I came up with a workable solution, and interestingly, it involved no changes to the service worker script at all.

When you visit any blog post, I put metadata about the page into localStorage (after first checking that there’s an active service worker):

if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  window.addEventListener('load', function() {
    var data = {
      "title": "A minority report on artificial intelligence",
      "description": "Revisiting Spielberg’s films after a decade and a half.",
      "published": "May 7th, 2017",
      "timestamp": "1494171049"
    };
    localStorage.setItem(
      window.location.href,
      JSON.stringify(data)
    );
  });
}

In my case, I’m outputting the metadata from the server, but you could just as easily grab some from the DOM like this:

var data = {
  "title": document.querySelector("title").innerText,
  "description": document.querySelector("meta[name='description']").getAttribute("content")
}

Meanwhile in my service worker, when you visit that same page, it gets added to a cache called “pages”. Both localStorage and the cache API are using URLs as keys. I take advantage of that on my offline page.
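
The service worker code that does that caching is beyond the scope of this post, but the general pattern looks something like this—a simplified sketch rather than my actual fetch handler:

self.addEventListener('fetch', event => {
  const request = event.request;
  // only interested in pages, not assets
  if (request.headers.get('Accept').includes('text/html')) {
    event.respondWith(
      fetch(request)
      .then( response => {
        // stash a copy of the page in the "pages" cache
        const copy = response.clone();
        caches.open('pages')
        .then( cache => cache.put(request, copy) );
        return response;
      })
      .catch( () => caches.match(request) ) // fall back to the cache when offline
    );
  }
});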

The nice thing about writing JavaScript on my offline page is that I know the page will only be seen by modern browsers that support service workers, so I can use all sorts of fancy features from ES6, or whatever we’re calling it now.

I start by looping through the keys of the “pages” cache (that’s right—the cache API isn’t just for service workers; you can access it from any script). Then I check to see if there is a corresponding localStorage key with the same string (a URL). If there is, I pull the metadata out of local storage and add it to an array called browsingHistory:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then( keys => {
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });

Then I sort the list of pages in reverse chronological order:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I loop through each page in the browsing history list and construct a link to each URL, complete with title and description:

let markup = '';
browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

Finally I dump the constructed markup into a waiting div in the page with an ID of “history”:

let container = document.getElementById('history');
container.insertAdjacentHTML('beforeend', markup);

All those steps need to be wrapped inside the then clause attached to caches.open("pages") because the cache API is asynchronous.
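
Put together, the whole thing looks something like this (a sketch assembled from the snippets above):

caches.open('pages')
.then( cache => cache.keys() )
.then( keys => {
  const browsingHistory = [];
  // gather the metadata for each cached page
  keys.forEach( request => {
    let data = JSON.parse(localStorage.getItem(request.url));
    if (data) {
      data['url'] = request.url;
      browsingHistory.push(data);
    }
  });
  // newest first
  browsingHistory.sort( (a,b) => {
    return b.timestamp - a.timestamp;
  });
  let markup = '';
  browsingHistory.forEach( data => {
    markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
  });
  document.getElementById('history').insertAdjacentHTML('beforeend', markup);
});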

There you have it. Now if you’re browsing adactio.com and your network connection drops (or my server goes offline), you can choose from a list of pages you’ve previously visited.

The current situation isn’t ideal though. I’ve got a clean-up operation in my service worker to limit the number of items stored in my “pages” cache. The cache never gets bigger than 35 items. But there’s no corresponding clean-up of metadata stored in localStorage. So there could be a lot more bits of metadata in local storage than there are pages in the cache. It’s not harmful, but it’s a bit wasteful.
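
For what it’s worth, that clean-up is a small recursive function along these lines (a simplified sketch, not the exact code):

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then( keys => {
      if (keys.length > maxItems) {
        // delete the oldest entry, then check again
        cache.delete(keys[0])
        .then( () => trimCache(cacheName, maxItems) );
      }
    });
  });
}

trimCache('pages', 35);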

I can’t do a clean-up of localStorage from my service worker because service workers can’t access localStorage. There’s a very good reason for that: the localStorage API is synchronous, and everything that happens in a service worker needs to be asynchronous.

Service workers can access indexedDB: it’s asynchronous. I could use indexedDB instead of localStorage, but I’m not a masochist. My best bet would be to use the localForage library, which wraps indexedDB in the simple syntax of localStorage.
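
The appeal is that my code would barely change: localForage mirrors the localStorage API but returns promises. A quick sketch, assuming the localforage script has been loaded:

// objects can be stored directly—no need to JSON.stringify
localforage.setItem(window.location.href, data)
.then( () => localforage.getItem(window.location.href) )
.then( storedData => {
  console.log(storedData);
});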

Maybe I’ll do that at the next Homebrew Website Club here in Brighton.

In AMP we trust

AMP Conf was one of those deep dive events, with two days dedicated to one single technology: AMP.

Except AMP isn’t really one technology, is it? And therein lies the confusion. This was at the heart of the panel I was on. When we talk about AMP, we could be talking about one of three things:

  1. The AMP format. A bunch of web components. For instance, instead of using an img element on an AMP page, you use an amp-img element instead.
  2. The AMP rules. There’s one JavaScript file, hosted on Google’s servers, that turns those web components from spans into working elements. No other JavaScript is allowed. All your styles must be in a style element instead of an external file, and there’s a limit on what you can do with those styles.
  3. The AMP cache. The source of most confusion—and even downright enmity—this is what’s behind the fact that when you launch an AMP result from Google search, you don’t go to another website. You see Google’s cached copy of the page instead of the original.

The first piece of AMP—the format—is kind of like a collection of marginal gains. Where the img element might have some performance issues, the amp-img element optimises for perceived performance. But if you just used the AMP web components, it wouldn’t be enough to make your site blazingly fast.
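
For example, a responsive image in AMP markup looks something like this (the src and dimensions here are placeholders):

<amp-img src="/images/photo.jpg" width="600" height="400" layout="responsive" alt="A photo"></amp-img>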

The second part of AMP—the rules—is where the speed gains start to really show. You can’t have an external style sheet, and crucially, you can’t have any third-party scripts other than the AMP script itself. This is key to making AMP pages super fast. It’s not so much about what AMP does; it’s more about what it doesn’t allow. If you never used a single AMP component, but stuck to AMP’s rules disallowing external styles and scripts, you could easily make a page that’s even faster than what AMP can do.

At AMP Conf, Natalia pointed out that The Guardian’s non-AMP pages beat out the AMP pages for performance. So why even have AMP pages? Well, that’s down to the third, most contentious, part of the AMP puzzle.

The AMP cache turns the user experience of visiting an AMP page from fast to instant. While you’re still on the search results page, Google will pre-render an AMP page in the background. Not pre-fetch, pre-render. That’s why it opens so damn fast. It’s also what causes the most confusion for end users.

From my unscientific polling, the behaviour of AMP results confuses the hell out of people. The fact that the page opens instantly isn’t the problem—far from it. It’s the fact that you don’t actually go to another page. Technically, you’re still on Google. An analogous mental model would be an RSS reader, or an email client: you don’t go to an item or an email; you view it in situ.

Well, that mental model would be fine if it were consistent. But in Google search, only some results will behave that way (the AMP pages) and others will behave just like regular links to other websites. No wonder people are confused! Some search results take them away and some search results keep them on Google …even though the page looks like a different website.

The price that we pay for the instantly-opening AMP pages from the Google cache is the URL. Because we’re looking at Google’s pre-rendered copy instead of the original URL, the address bar is not pointing to the site the browser claims to be showing. Everything in the body of the browser looks like an article from The Guardian, but if I look at the URL (which is what security people have been telling us for years is important to avoid being phished), then I’ll see a domain that is not The Guardian’s.

But wait! Couldn’t Google pre-render the page at its original URL?

Yes, they could. But they won’t.

This was a point that Paul kept coming back to: trust. There’s no way that Google can trust that someone else’s URL will play by the AMP rules (no external scripts, only loading embedded content via web components, limited styles, etc.). They can only trust the copies that they themselves are serving up from their cache.

By the way, there was a joint AMP/search panel at AMP Conf with representatives from both teams. As you can imagine, there were many questions for the search team, most of which were Glomar’d. But one thing that the search people said time and again was that Google was not hosting our AMP pages. Now I don’t know if they were trying to make some fine-grained semantic distinction there, but that’s an outright falsehood. If I click on a link, and the URL I get taken to is a Google property, then I am looking at a page hosted by Google. Yes, it might be a copy of a document that started life somewhere else, but if Google are serving something from their cache, they are hosting it.

This is one of the reasons why AMP feels like such a bait’n’switch to me. When it first came along, it felt like a direct competitor to Facebook’s Instant Articles and Apple News. But the big difference, we were told, was that you get to host your own content. That appealed to me much more than having Facebook or Apple host the articles. But now it turns out that Google do host the articles.

This will be the point at which Googlers will say no, no, no, you can totally host your own AMP pages …but you won’t get the benefits of pre-rendering. But without the pre-rendering, what’s the point of even having AMP pages?

Well, there is one non-cache reason to use AMP and it’s a political reason. Beleaguered developers working for publishers of big bloated web pages have a hard time arguing with their boss when they’re told to add another crappy JavaScript tracking script or bloated library to their pages. But when they’re making AMP pages, they can easily refuse, pointing out that the AMP rules don’t allow it. Google plays the bad cop for us, and it’s a very valuable role. Sarah pointed this out on the panel we were on, and she was spot on.

Alright, but what about The Guardian? They’ve already got fast pages, but they still have to create separate AMP pages if they want to get the pre-rendering benefits when they show up in Google search results. Sorry, says Google, but it’s the only way we can trust that the pre-rendered page will be truly fast.

So here’s the impasse we’re at. Google have provided a list of best practices for making fast web pages, but the only way they can truly verify that a page is sticking to those best practices is by hosting their own copy, URLs be damned.

This was the crux of Paul’s argument when he was on the Shop Talk Show podcast (it’s a really good episode—I was genuinely reassured to hear that Paul is not gung-ho about drinking the AMP Kool Aid; he has genuine concerns about the potential downsides for the web).

Initially, I accepted this argument that Google just can’t trust the rest of the web. But the more I talked to people at AMP Conf—and I had some really, really good discussions with people away from the stage—the more I began to question it.

Here’s the thing: the regular Google search can’t guarantee that any web page is actually 100% the right result to return for a search. Instead there’s a lot of fuzziness involved: based on the content, the markup, and the number of trusted sources linking to this, it looks like it should be a good result. In other words, Google search trusts websites to—by and large—do the right thing. Sometimes websites abuse that trust and try to game the system with sneaky tricks. Google responds with penalties when that happens.

Why can’t it be the same for AMP pages? Let me host my own AMP pages (maybe even host my own AMP script) and then when the Googlebot crawls those pages—the same as it crawls any other pages—that’s when it can verify that the AMP page is abiding by the rules. If I do something sneaky and trick Google into flagging a page as fast when it actually isn’t, then take my pre-rendering reward away from me.

To be fair, Google has very, very strict rules about what and how to pre-render the AMP results it’s caching. I can see how allowing even the potential for a false positive would have a negative impact on the user experience of Google search. But c’mon, there are already false positives in regular search results—fake news, spam blogs. Googlers are smart people. They can solve—or at least mitigate—these problems.

Google says it can’t trust our self-hosted AMP pages enough to pre-render them. But they ask for a lot of trust from us. We’re supposed to trust Google to cache and host copies of our pages. We’re supposed to trust Google to provide some mechanism to users to get at the original canonical URL. I’d like to see trust work both ways.

The road to Indie Web Camp LA

After An Event Apart San Francisco, which was—as always—excellent, it was time for me to get to the next event: Indie Web Camp Los Angeles. But I wasn’t going alone. Tantek was going too, and seeing as he has a car—a convertible, even—what better way to travel from San Francisco to LA than on the Pacific Coast Highway?

It was great—travelling through the land of Steinbeck and Guthrie at the speed of Kerouac and Springsteen. We stopped for the night at Pismo Beach and then continued on, rolling into Santa Monica at sunset.

Photos from the road trip: Half Moon Bay, Pomponio Beach, Salinas, Pismo Beach, Santa Barbara, Malibu, and sunset in Santa Monica.

The weekend was spent in the usual Indie Web Camp fashion: a day of BarCamp-style discussions, followed by a day of hacking on our personal websites.

I decided to follow on from what I did at the Brighton Indie Web Camp. There, I made a combined tag view—a way of seeing, for example, everything tagged with “indieweb” instead of just journal entries tagged with “indieweb” or links tagged with “indieweb”. I wanted to do the same thing with my archives. I have separate archives for my journal, my links, and my notes. What I wanted was a combined view.

After some hacking, I got it working. So now you can see combined archives by year, month, and day (I managed to add a sparkline to the month view as well).

I did face a bit of a conundrum. Both my home page stream and my tag pages show posts in reverse chronological order, with the newest posts at the top. I’ve decided to replicate that for the archive view, but I’m not sure if that’s the right decision. Maybe the list of years should begin with 2001 and end with 2016, instead of the other way around. And maybe when you’re looking at a month of posts, you should see the first posts in that month at the top.

Anyway, I’ll live with it in reverse chronological order for a while and see how it feels. I’m just glad I managed to get it done—I’ve been meaning to do it for quite a while. Once again, I’m amazed by how much gets accomplished when you’re in the same physical space as other helpful, motivated people all working on improving their indie web presence, little by little.

Indie Web Camp Brighton 2016

Indie Web Camp Brighton 2016 is done and dusted. It’s hard to believe that it’s already in its fifth(!) year. As with previous years, it was a lot of fun.

IndieWebCamp Brighton 2016

The first day—the discussions day—covered a lot of topics. I led a session on service workers, where we brainstormed offline and caching strategies for personal websites.

There was a design session looking at alternatives to simply presenting everything in a stream. Some great ideas came out of that. And there was a session all about bookmarking and linking. That one really got my brain whirring with ideas for the second day—the making/coding day.

I’ve learned from previous Indie Web Camps that a good strategy for the second day is to have two tasks to tackle: one that’s really easy (so you’ve at least got that to demo at the end), and one that’s more ambitious. This time, I put together a list of potential goals, and then ordered them by difficulty. By the end of the day, I managed to get a few of them done.

First off, I added a small bit of code to my bookmarking flow, so that any time I link to something, I send a ping to the Internet Archive to grab a copy of that URL. So here’s a link I bookmarked to one of Remy’s blog posts, and here it is in the Wayback Machine—see how the date of storage matches the date of my link.

The code to do that was pretty straightforward. I needed to hit this endpoint:

http://web.archive.org/save/{url}
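
The request itself could hardly be simpler—something like this sketch, where the bookmarked URL is a placeholder:

// ping the Wayback Machine's save endpoint with the URL appended to the path
const bookmarkedUrl = 'https://example.com/some/page';
fetch('http://web.archive.org/save/' + bookmarkedUrl);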

I also updated my bookmarklet for posting links so that, if I’ve highlighted any text on the page I’m linking to, that text is automatically pasted into the description.

I tweaked my webmentions a bit so that if I receive a webmention that has a type of bookmark-of, that is displayed differently to a comment, or a like, or a share. Here’s an example of Aaron bookmarking one of my articles.

The more ambitious plan was to create an over-arching /tags area for my site. I already have tag-based navigation for my journal and my links.

But until this weekend, I didn’t have the combined view.

I didn’t get around to adding pagination. That’s something I should definitely add, because some of those pages get veeeeery long. But I did spend some time adding sparklines. They can be quite revealing, especially on topics that were hot ten years ago but have faded over time, or topics that have become more and more popular with each year.

All in all, a very productive weekend.

European tour

I’m recovering from an illness that laid me low a few weeks back. I had a nasty bout of man-flu which then led to a chest infection for added coughing action. I’m much better now, but alas, this illness meant I had to cancel my trip to Chicago for An Event Apart. I felt very bad about that. Not only was I reneging on a commitment, but I also missed out on an opportunity to revisit a beautiful city. But it was for the best. If I had gone, I would have spent nine hours in an airborne metal tube breathing recycled air, and then stayed in a hotel room with that special kind of air conditioning that hotels have that always seems to give me the sniffles.

Anyway, no point regretting a trip that didn’t happen—time to look forward to my next trip. I’m about to embark on a little mini tour of some lovely European cities:

  • Tomorrow I travel to Stockholm for Nordic.js. I’ve never been to Stockholm. In fact I’ve only set foot in Sweden on a day trip to Malmö to hang out with Emil. I’m looking forward to exploring all that Stockholm has to offer.
  • On Saturday I’ll go straight from Stockholm to Berlin for the View Source event organised by Mozilla. Looks like I’ll be staying in the east, which isn’t a part of the city I’m familiar with. Should be fun.
  • Alas, I’ll have to miss out on the final day of View Source, but with good reason. I’ll be heading from Berlin to Bologna for the excellent From The Front conference. Ah, I remember being at the very first one five years ago! I’ve made it back every second year since—I don’t need much of an excuse to go to Bologna, one of my favourite places …mostly because of the food.

The only downside to leaving town for this whirlwind tour is that there won’t be a Brighton Homebrew Website Club tomorrow. I feel bad about that—I had to cancel the one two weeks ago because I was too sick for it.

But on the plus side, when I get back, it won’t be long until Indie Web Camp Brighton on Saturday, September 24th and Sunday, September 25th. If you haven’t been to an Indie Web Camp before, you should really come along—it’s for anyone who has their own website, or wants to have their own website. If you have been to an Indie Web Camp before, you don’t need me to convince you to come along; you already know how good it is.

Sign up for Indie Web Camp Brighton here. It’s free and it’s a lot of fun.

The importance of owning your data is getting more awareness. To grow it and help people get started, we’re meeting for a BarCamp-like collaboration in Brighton for two days of brainstorming, working, teaching, and helping.

Save the dates for Indie Web Camp Brighton 2016

September 24th and 25th—those are the dates you should put in your diary. That’s when this year’s Indie Web Camp Brighton is happening.

Once again it’ll be at 68 Middle Street, home to Clearleft. You can register for free now, and then add your name to the list of participants on the wiki.

If you haven’t been to an Indie Web Camp before, it’s a very straightforward proposition. The idea is that you should have your own website. That’s it. Everything else is predicated on that. So while there’ll be plenty of discussions, demos, and designs, they’re all in service to that fundamental premise.

The first day of an Indie Web Camp is like a BarCamp. We make a schedule grid at the start of the day and people organise topics by room and time slot. It sounds chaotic. It is chaotic. But it works surprisingly well. The discussions can be about technologies, or interfaces, or ideas, or just about anything really.

The second day is for making. After the discussions from the previous day, most people will have a clear idea at this point for something they might want to do. It might involve adding some new technology to their website, or making some design changes, or helping build a tool. For people starting from scratch, this is the perfect time for them to build and launch a basic website.

At the end of the second day, everyone demos what they’ve done. I’m always amazed by how much people can accomplish in just one weekend. There’s something about having other people around to help you that makes it super productive.

You might be thinking “but I’m not a coder!” Don’t worry—there’ll be plenty of coders there so you can get their help on whatever you might decide to do. If you’re a designer, your skills will be in high demand by those coders. It’s that mish-mash of people that makes it such a fun gathering.

Last year’s Indie Web Camp Brighton was lots of fun. Let’s make Indie Web Camp Brighton 2016 even better!

Indie Web Camp Brighton group photo

Backdoor Service Workers

When I was moderating that panel at the Progressive Web App Dev Summit, I brought up this point about twenty minutes in:

Alex, in your talk yesterday you were showing the AMP demo there with the Washington Post. You click through and there’s the Washington Post AMP thing, and it was able to install the Service Worker with that custom element. But I was looking at the URL bar …and that wasn’t the Washington Post. It was on the CDN from AMP. So I talked to Paul Backaus from the AMP team, and he explained that it’s an iframe, and using an iframe you can install a Service Worker from somewhere else.

Alex and Emily explained that, duh, that’s the way iframes work. It makes sense when you think about it—an iframe is pretty much the same as any other browser window. Still, it feels like it might violate the principle of least surprise.

Let’s say you followed my tongue-in-cheek advice to build a progressive web app store. Your homepage might have the latest 10 or 20 progressive web apps. You could also include 10 or 20 iframes so that those sites are “pre-installed” for the person viewing your page.

Enough theory. Here’s a practical example…

Suppose you’ve never visited the website for my book, html5forwebdesigners.com (if you have visited it, and you want to play along with this experiment, go to your browser settings and delete anything stored by that domain).

You happen to visit my website adactio.com. There’s a little blurb buried down on the home page that says “Read my book” with a link through to html5forwebdesigners.com. I’ve added this markup after the link:

<iframe src="https://html5forwebdesigners.com/iframe.html" style="width: 0; height: 0; border: 0">
</iframe>

That hidden iframe pulls in an empty page with a script element:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>HTML5 For Web Designers</title>
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>
</head>
</html>

That registers the Service Worker on my book’s site which then proceeds to install all the assets it needs to render the entire site offline.

There you have it. Without ever visiting the domain html5forwebdesigners.com, the site has been pre-loaded onto your device because you visited the domain adactio.com.

A few caveats:

  1. I had to relax the Content Security Policy for html5forwebdesigners.com to allow the iframe to be embedded on adactio.com:

    Header always set Access-Control-Allow-Origin "https://adactio.com"
    
  2. If your browser has “Block third-party cookies and site data” selected in its settings, the iframe-invoked Service Worker won’t install:

    Uncaught (in promise) DOMException: Failed to register a ServiceWorker: The user denied permission to use Service Worker.
    

The example I’ve put together here is relatively harmless. But it’s possible to imagine more extreme scenarios. Imagine there’s a publishing company that has 50 websites for 50 different publications. Each one of them could have an empty page waiting to be embedded via iframe from the other 49 sites. You only need to visit one page on one of those 50 sites to have 50 Service Workers spun up and caching assets in the background.

There’s the potential here for a tragedy of the commons. I hope we’ll be sensible about how we use this power.

Just don’t tell the advertising industry about this.

Amsterdam Brighton Amsterdam

I’m about to have a crazy few days that will see me bouncing between Brighton and Amsterdam.

It starts tomorrow. I’m flying to Amsterdam in the morning and speaking at this Icons event in the afternoon about digital preservation and long-term thinking.

Then, the next morning, I’ll be opening up the inaugural HTML Special, which is a new addition to the CSS Day conference. Each talk on Thursday will cover one HTML element. I am honoured to be speaking about the A element. Here’s the talk description:

The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Enquire within upon everything.

I’ve been working all out to get this talk done and I finally wrapped it up today. Right now, I feel pretty happy with it, but I bet I’ll change that opinion in the next 48 hours. I’m pretty sure that this will be one of those talks that people will either love or hate, kind of like my 2008 dConstruct talk, The System Of The World.

After CSS Day, I’ll be heading back to Brighton on Saturday, June 18th to play a Salter Cane gig in The Greys pub. If you’re around, you should definitely come along—not only is it free, but there will be some excellent support courtesy of Jon London, and Lucas and King.

Then, the next morning, I’ll be speaking at DrupalCamp Brighton, opening up day two of the event. I won’t be able to stick around long afterwards though, because I need to skidaddle to the airport to go back to Amsterdam!

Google are having their Progressive Web App Dev Summit there on Monday and Tuesday. I’ll be moderating a panel on the second day, so I’ll need to pay close attention to all the talks. I’ll be grilling representatives from Google, Samsung, Opera, Microsoft, and Mozilla. Considering my recent rants about some very bad decisions on the part of Google’s Chrome team, it’s very brave of them to ask me to be there, much less moderate a panel in public.

You can still register for the event, by the way. Like the Salter Cane gig, it’s free. But if you can’t make it along, I’d still like to know what you think I should be asking the panelists about.

Got a burning question for browser/device makers? Write it down, post it somewhere on the web with a link back to this post, and then send me a webmention (there’s a form for you to paste in the URL at the bottom of this post).

Indie Web Camp Düsseldorf

Indie Web Camp Düsseldorf took place last weekend and it was—no surprise—really excellent.

It felt really good to have one in Germany again so soon after the last one in Nuremberg. Lots of familiar faces showed up as well as plenty of newcomers.

I’m blown away by how much gets done in two short days, especially from people who start the weekend without a personal website and end it with something to call their own. Like Julie’s new site for example (and once again she took loads of great photos).

My own bit of hacking was quite different to what I got up to in Nuremberg. At that event, I was concentrating on the interface, adding sparklines and a bio to my home page. This time round I concentrated more on the plumbing. I finally updated some of the code that handles webmentions. I first got it working a few years back at an Indie Web Camp here in Brighton, but I hadn’t really updated the code in a while. I’m much happier with the way it’s working now.

I also updated the way I’m syndicating my notes to Twitter, specifically how I send photos. Previously I was using the API method statuses/update_with_media.

When I was at the Mobile @Scale event at Facebook’s London office a while back, Henna Kermani gave a talk about the new way that Twitter handles file uploads. There’s a whole new part of the API for handling that. When she got off stage, I mentioned to her that I was still using the old API method and asked how long it would be until it was switched off. She looked at me incredulously and said “It’s still working‽ I thought it had been turned off already!”

That’s why I spent most of my time at Indie Web Camp Düsseldorf updating my PHP. Switching over to the TwitterOAuth library made it a bit less painful—thanks to Bea for helping me out there.

When it came time to demo, I didn’t have much to show. On the surface, my site looked no different. But I feel pretty good about finally getting around to changing the wiring under the hood.

Besides, there were plenty of other great demos. There was even some more sparklining. Check out this fantastic visualisation of the Indie Web Camp IRC logs made by Kevin …who wasn’t even in Düsseldorf; he participated remotely.

If you get the chance to attend an Indie Web Camp I highly, highly recommend it. In the meantime you can start working on your personal site. Here’s a quick primer I wrote a while back on indie web building blocks. Have fun!

Full Meaning Ampersand

In the space of one week, Brighton played host to three excellent conferences:

  1. FF Conf on Friday, November 6th,
  2. Meaning on Thursday, November 12th, and
  3. Ampersand on Friday, November 13th.

I made it to two of the three—alas, I couldn’t make it to Meaning this year because it clashed with Richard’s superb workshop on Responsive Web Typography.

FF Conf and Ampersand were both superb. Despite having very different subject matter, the two events have a lot in common. They’re both affordable, one-day, single-track, focused gatherings.

Both events really benefit from having a mastermind overseeing the line-up: Remy in the case of FF Conf, and Richard in the case of Ampersand. That really paid off. Both events were superbly curated, with a diverse mix of speakers and topics.

It was really interesting to see both conferences break out of the boundary of what happens inside web browsers. At FF Conf, we were treated to talks on linguistics and inclusivity. At Ampersand, we enjoyed talks on physiology and culture. But of course we also had the really deep dives into the minutest details of JavaScript, SVG, typography, and layout.

Videos will be available from FF Conf, and audio will be available from Ampersand. Be sure to check them out once they’re released.

AMPed up

Apple has Apple News. Facebook has Instant Articles. Now Google has AMP: Accelerated Mobile Pages.

The big players sure are going to a lot of effort to reinvent RSS.

That may sound like a flippant remark, but it’s not too far from the truth. In the case of Apple News, its current incarnation appears to be quite literally an RSS reader, at least until the unveiling of the forthcoming Apple News Format.

Google’s AMP project looks a little bit different to the offerings from Facebook and Apple. Rather than creating a proprietary format from scratch, it mandates a subset of HTML …with some proprietary elements thrown in (or, to use the more diplomatic parlance of the extensible web, custom elements).

The idea is that alongside the regular HTML version of your document, you provide a corresponding AMP HTML version. Because the AMP HTML version will be leaner and meaner, user agents can then grab the AMP HTML version and present that to the end user for a faster browsing experience.

So if an RSS feed is an alternate representation of a homepage or a listing of articles, then an AMP document is an alternate representation of a single article.

Now, my own personal take on providing alternate representations of documents is “Sure. Why not?” Here on adactio.com I provide RSS feeds. On The Session I provide RSS, JSON, and XML. And on Huffduffer I provide RSS, Atom, JSON, and XSPF, adding:

If you would like to see another format supported, share your idea.

Also, each individual item on Huffduffer has a corresponding oEmbed version (and, in theory, an RDF version)—an alternate representation of that item …in principle, not that different from AMP. The big difference with AMP is that it’s using HTML (of sorts) for its format.

All of this sounds pretty reasonable: provide an alternate representation of your canonical HTML pages so that user-agents (Twitter, Google, browsers) can render a faster-loading version …much like an RSS reader.

So should you start providing AMP versions of your pages? My initial reaction is “Sure. Why not?”

The AMP Project website comes with a list of frequently asked questions, which of course, nobody has asked. My own list of invented frequently asked questions might look a little different.

Will this kill advertising?

We live in hope.

Alas, AMP pages will still be able to carry advertising, but in a restricted form. No more scripts that track your movement across the web …unless the script is from an authorised provider, like say, Google.

But it looks like the worst performance offenders won’t be able to get their grubby little scripts into AMP pages. This is a good thing.

Won’t this kill journalism?

Of all the horrid myths currently in circulation, the two that piss me off the most are:

  1. Journalism requires advertising to survive.
  2. Advertising requires invasive JavaScript.

Put the two together and you get the gist of most of the chicken-littling articles currently in circulation: “Journalism requires invasive JavaScript to survive.”

I could argue against the first claim, but let’s leave that for another day. Let’s suppose for now that, sure, journalism requires advertising to survive. Fine.

It’s that second point that is fundamentally wrong. The idea that the current state of advertising is the only way of advertising is incredibly short-sighted and misguided. Invasive JavaScript is not a requirement for showing me an ad. Setting a cookie is not a requirement for showing me an ad. Knowing where I live, who my friends are, what my income level is, and where I’ve been on the web …none of these are requirements for showing me an ad.

It is entirely possible to advertise to me and treat me with respect at the same time. The Deck already does this.

And you know what? Ad networks had their chance. They had their chance to treat us with respect with the Do Not Track initiative. We asked them to respect our wishes. They told us to get screwed.

Now those same ad providers are crying because we’re installing ad blockers. They can get screwed.

Anyway.

It is entirely possible to advertise within AMP pages …just not using blocking JavaScript.

For a nicely nuanced take on what AMP could mean for journalism, see Joshua Benton’s article on Nieman Lab—Get AMP’d: Here’s what publishers need to know about Google’s new plan to speed up your website.

Why not just make faster web pages?

Excellent question!

For a site like adactio.com, the difference between the regular HTML version of an article and the corresponding AMP version of the same article is pretty small. It’s a shame that I can’t just say “Hey, the current version of the article is the AMP version”, but that would require that I only use a subset of HTML and that I add some required guff to my page (including an unnecessary JavaScript file).

But for most of the news sites out there, the difference between their regular HTML pages and the corresponding AMP versions will be pretty significant. That’s because the regular HTML versions are bloated with third-party scripts, oversized assets, and cruft around the actual content.

Now it is in theory possible for these news sites to get rid of all those things, and I sincerely hope that they will. But that’s a big political struggle. I am rooting for developers—like the good folks at VOX—who have to battle against bosses who honestly think that journalism requires invasive JavaScript. Best of luck.

Along comes Google saying “If you want to play in our sandbox, you’re going to have to abide by our rules.” Those rules include performance best practices (for the most part—I take issue with some of the requirements, and I’ll go into that in more detail in a moment).

Now when the boss says “Slap a three megabyte JavaScript library on it so we can show a carousel”, the developers can only respond with “Google says No.”

When the boss says “Slap a ton of third-party trackers on it so we can monetise those eyeballs”, the developers can only respond with “Google says No.”

Google have used their influence like this before and it has brought them accusations of monopolistic abuse. Some people got very upset when they began labelling (and later ranking) mobile-friendly pages. Personally, I’ve got no issue with that.

In this particular case, Google aren’t mandating what you can and can’t do on your regular HTML pages; only what you can and can’t do on the corresponding AMP page.

Which brings up another question…

Will the AMP web kill the open web?

If we all start creating AMP versions of our pages, and those pages are faster than our regular HTML versions, won’t everyone just see the AMP versions without ever seeing the “full” versions?

Tim articulates a legitimate concern:

This promise of improved distribution for pages using AMP HTML shifts the incentive. AMP isn’t encouraging better performance on the web; AMP is encouraging the use of their specific tool to build a version of a web page. It doesn’t feel like something helping the open web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the web.

That troubles me. Using a very specific tool to build a tailored version of my page in order to “reach everyone” doesn’t fit any definition of the “open web” that I’ve ever heard.

Fair point. But I also remember that a lot of people were upset by RSS. They didn’t like that users could go for months at a time without visiting the actual website, and yet they were reading every article. They were reading every article in non-browser user agents in a format that wasn’t HTML. On paper that sounds like the antithesis of the open web, but in practice there was always something very webby about RSS, and RSS feed readers—it put the power back in the hands of the end users.

Some people chose not to play ball. They only put snippets in their RSS feeds, not the full articles. Maybe some publishers will do the same with the AMP versions of their articles: “To read more, click here…”

But I remember what generally tended to happen to the publishers who refused to put the full content in their RSS feeds. We unsubscribed.

Still, I share the concern that any one company—whether it’s Facebook, Apple, or Google—should wield so much power over how we publish on the web. I don’t think you have to be a conspiracy theorist to view the AMP project as an attempt to replace the existing web with an alternate web, more tightly controlled by Google (albeit a faster, more performant, tightly-controlled web).

My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

I’ve been playing around with the AMP HTML spec. It has some issues. The good news is that it’s open source and the project owners seem receptive to feedback.

JavaScript

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web. I’m happy to leave them at the door (although weirdly, web fonts—another big performance killer—are allowed in).

But then there’s a bit of an about-face. In order to have a valid AMP HTML page, you must include a piece of third-party JavaScript. In this case, the third party is Google and the JavaScript file is what handles the loading of assets.
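
At the time of writing, that required script element looks like this:

<script async src="https://cdn.ampproject.org/v0.js"></script>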

This seems a bit strange to me; on the one hand claiming that third-party JavaScript is bad for performance and on the other, requiring some third-party JavaScript. As Justin says:

For me this is loading one thing too many… the AMP JS library. Surely the document itself is going to be faster than loading a library to try and make it load faster.

On the plus side, this third-party JavaScript is loaded asynchronously. It seems to mostly be there to handle the rendering of embedded content: images, videos, audio, etc.

Embedded content

If you want audio, video, or images on your page, you must use propriet… custom elements like amp-audio, amp-video, and amp-img. In the case of images, I can see how this is a way of getting around the browser’s lookahead pre-parser (although responsive images also solve this problem). In the case of audio and video, the standard audio and video elements already come with a way of specifying preloading behaviour using the preload attribute. Very odd.

Justin again:

I’m not sure if this is solving anything at the moment that we’re not already fixing with something like responsive images.

To use amp-img for images within the flow of a document, you’ll need to specify the dimensions of the image. This makes sense from a rendering point of view—knowing the width and height ahead of time avoids repaints and reflows. Alas, in many of the cases here on adactio.com, I don’t know the dimensions of the images I’m including. So any of my AMP HTML pages that include images will be invalid.

Overall, the way that AMP HTML handles embedded content looks like a whole lot of wheel reinvention. I like the idea of providing custom elements as an option for authors. I hate the idea of making them a requirement.

Metadata

If you want to provide metadata about your document, AMP HTML currently requires the use of Google’s Schema.org vocabulary. This has a big whiff of vendor lock-in to it. I’ve flagged this up as an issue and Aaron is pushing a change so hopefully this will be resolved soon.

Accessibility

In its initial release, the AMP HTML spec came with some nasty surprises for accessibility. The biggest is probably the requirement to include this in your viewport meta element:

maximum-scale=1,user-scalable=no
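
In context, the whole viewport meta element would look something like this (the other values are also mandated by the spec, as far as I can tell):

<meta name="viewport" content="width=device-width,minimum-scale=1,maximum-scale=1,user-scalable=no">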

Yowzers! That’s some slap in the face to decent web developers everywhere. Fortunately this has been flagged up and I’m hoping it will be fixed soon.

If it doesn’t get fixed, it’s quite a non-starter. It beggars belief that Google would mandate to authors that they must make their pages inaccessible to pinch/zoom. I would hope that many developers would rebel against such a draconian injunction. If that happens, it’ll be interesting to see what becomes of those theoretically badly-formed AMP HTML documents. Technically, they will fail validation, but for very good reason. Will those accessible documents be rejected?

Please get involved on this issue if this is important to you (hint: this should be important to you).

There are a few smaller issues. Initially the :focus pseudo-class was disallowed in author CSS, but that’s being fixed.

Currently AMP HTML documents must have this line:

<style>body {opacity: 0}</style><noscript><style>body {opacity: 1}</style></noscript>

shudders

That’s a horrible conflation of JavaScript availability and CSS. It’s being fixed though, and soon all the opacity jiggery-pokery will only happen via JavaScript, which will be a big improvement: it should either all happen in CSS or all happen in JavaScript, but not the current mixture of the two.

Discovery

The AMP HTML version of your page is not the canonical version. You can specify where the real HTML version of your document is by using rel="canonical". Great!

But how do you link from your canonical page out to the AMP HTML version? Currently you’re supposed to use rel="amphtml". No, they haven’t checked the registry. Again. I’ll go in and add it.
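
Concretely, the two documents would point at one another like this (the URLs are placeholders):

<!-- in the head of the AMP version -->
<link rel="canonical" href="https://example.com/article/">

<!-- in the head of the regular version -->
<link rel="amphtml" href="https://example.com/article/amp/">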

In the meantime, I’m also requesting that the amphtml value can be combined with the alternate value, seeing as rel values can be space separated:

rel="alternate amphtml" type="text/html"

See? Not that different to RSS:

rel="alterate" type="application/rss+xml"

POSSE

When I publish something on adactio.com in HTML, it already gets syndicated to different places. This is the Indie Web idea of POSSE: Publish (on your) Own Site, Syndicate Elsewhere. As well as providing RSS feeds, I’ve also got Twitter bots that syndicate to Twitter. An If This, Then That script pushes posts to Facebook. And if I publish a photo, it goes to Flickr. Now that Medium is finally providing a publishing API, I’ll probably start syndicating articles there as well. The more, the merrier.

From that perspective, providing AMP HTML pages feels like just one more syndication option. If it were the only option, and I felt compelled to provide AMP versions of my content, I’d be very concerned. But for now, I’ll give it a whirl and see how it goes.

Here’s a bit of PHP I’m using to convert a regular piece of HTML into AMP HTML—it’s horrible code; it uses regular expressions on HTML which, as we all know, will summon the Elder Gods.

Indie Web Camp Brighton 2015

Indie Web Camp Brighton 2015 is a wrap, and what a fun weekend it turned out to be.

I was really pleased with the turnout; not just the number of people who came along—many of them from very far afield—but also the range of skill levels and backgrounds represented. What a lovely bunch!

Indie Web Camp Brighton group photo

We kicked off the first day with a show’n’tell: people demoed their sites, showed their posting interfaces, and talked about what they’d like to improve. That sparked plenty of ideas for the afternoon discussions. But in between we had a nice long lunch break—it was a lovely sunny day in Brighton so we took full advantage of the sun, the street food, and the ice cream.

We wrapped up the first day around 5pm and I immediately dashed off to start loading in and sound checking for a Salter Cane gig that evening. That turned out to be a lot of fun—the audience were great—but I was completely knackered by the end of the day.

The weather on Sunday was far gloomier, but that was okay—we spent the whole day indoors anyway, coding and hacking away at stuff. Quite a few people were adding h-entry and h-card to their sites so I helped them out whenever I could. Meanwhile I was working on trying to get an SMS interface to my site working using the Twilio API.

The actual coding part went pretty quickly, but then I hit a wall. Whenever Twilio tried to reach a URL on my site, it would time out with a 504 error. I couldn’t figure out what was going on. On a hunch, I tried sending it to a subdomain that wasn’t being served over HTTPS. That worked fine. Now, I can’t imagine that Twilio is actually unable to work with secure endpoints, so it must be something to do with the way that I’ve enabled HTTPS on my domain. Anyway, the HTTP subdomain solution worked, and eleven minutes before demo time I finally had something to show.

We finished the day and the event with the quickfire demos. As always, there was some really impressive stuff—it’s quite amazing how much can get done in such a short space of time. Then we tidied up and headed across the street to the pub for a well-deserved pint.

All in all, a great weekend.

Indie Web Camp Brighton 2015

Indie Web Camp Brighton is happening again. It will be on the weekend of July 11th and 12th (coinciding with the big US Indie Web Camp in Portland at the same time) and it will once again be at 68 Middle Street.

You should come.

If you haven’t been to an Indie Web Camp before, you should definitely come. The event is always inspiring and productive in equal measure. The first day consists of BarCamp-style talks and discussions. The second day is filled with heads-down work, made all the more productive by the presence of other people working on similar issues who are more than happy to help out.

There are two kinds of people who should come to Indie Web Camp Brighton:

  1. Someone who has their own website and is looking to make it better, and
  2. Someone who wants their own website.

That’s basically it. There’ll be nitty-gritty discussions and implementations of formats and tools to help out, but basically it’s all about having a place on the web to call your own.

At Indie Web Camp Germany a few weeks ago—which was excellent—there was a really nice emergent thread on building blocks: microformats, webmention, micropub, and all that nerdy stuff.

At the same time, there was a really great thread on interface design. How do we make writing on our own websites as nice as writing on Medium?

I can imagine a similar two-pronged approach emerging at Indie Web Camp Brighton. That’s why I’d love to see just as many designers as developers showing up.

So basically, whether you’re in the world of UX, design, or development, and whether you’ve already got your own website or you’d like to have your own website …you should come.

You can sign up to attend here.

Once you’ve done that, if you’ve got your own website, you can log in to the Indie Web Camp wiki using your domain and add yourself to the list of participants (if you don’t have your own website or can’t log in, I can add you to the list).

It’s going to be a lot of fun, and I guarantee it’s also going to be highly productive—hope to see you there!

Small independent pieces, loosely joined

It was fascinating at Indie Web Camp Germany to see how much could be accomplished by taking some pre-existing small things and loosely joining them.

For example, there are already webmention and micropub plug-ins for quite a few CMSs. If you’re using WordPress or Jekyll, you can get pretty far pretty quickly by making use of what people have already provided. And after that Indie Web Camp, you can add Drupal and Kirby to the list of CMSs with readily-available components.

I was somewhat surprised—and very pleased—that people made use of some little PHP snippets that I had posted as gists. I deliberately posted them as gists to show how minimal and barebones the code could be—no need for a whole project, or installers, or dockering the node to yeoman the gulp, or whatever it is the cool kids do these days.

This modular approach also worked well for interface elements. Glenn and Aaron worked on separate projects to create small JavaScript enhancements for posting interfaces. Assemble enough of these enhancements together and before you know it, you’ve got something approaching Medium.

By the end of the second day, I was amazed to see how much progress people had made. Like Johannes says:

I was pretty impressed by how much people got done. At the final demo session, everyone had something he or she had done to update their website – although I’m pretty sure that the end of this event will not be the end of their efforts to try and own their stuff online.

It was quite inspiring. In fact, I think I’ve been inspired to have an Indie Web Camp in Brighton. I’m thinking we could have it at the same time as Indie Web Camp Portland, which is on July 11th and 12th.

Save the dates.

100 words 049

The second day of Indie Web Camp Germany was really productive. It was amazing to see how much could be accomplished in just one day of collaborative hacking—people were posting to Twitter from their own site, sending webmentions, and creating their own micropub endpoints.

I made a little improvement to the links section of my site. Now every time I link to something, I check to see if it accepts webmentions and if it does, I ping it to let you know that I’ve linked to it.

I’ve posted the code as a gist. Feel free to use it.

100 words 048

Today was the first day of Indie Web Camp Germany here in Düsseldorf. The environment couldn’t have been better—the swank sipgate building has plenty of room, fantastic food, and ridiculously friendly people on hand to make sure that everything goes smoothly.

Day one is the discussion day. The topics fortuitously formed a great narrative starting with the simple building blocks of microformats, leading into webmentions, then authentication, and finally micropub and posting interfaces.

My brain is full after talking through these technologies in increasing order of complexity. Enough talking. Now I’m ready to start coding. Bring on day two.

Indie web building blocks

I was back in Nürnberg last week for the second border:none. Joschi tried an interesting format for this year’s event. The first day was a small conference-like gathering with an interesting mix of speakers, but the second day was much more collaborative, with people working together in “creator units”—part workshop, part round-table discussion.

I teamed up with Aaron to lead the session on all things indie web. It turned out to be a lot of fun. Throughout the day, we introduced the little building blocks, one by one. By the end of the day, it was amazing to see how much progress people made by taking this layered approach of small pieces, loosely stacked.

rel=me

The first step is: do you have a domain name?

Okay, next step: are you linking from that domain to other profiles of you on the web? Twitter, Instagram, Github, Dribbble, whatever. If so, here’s the first bit of hands-on work: add rel="me" to those links.

<a rel="me" href="https://twitter.com/adactio">Twitter</a>
<a rel="me" href="https://github.com/adactio">Github</a>
<a rel="me" href="https://www.flickr.com/people/adactio">Flickr</a>

If you don’t have any profiles on other sites, you can still mark up your telephone number or email address with rel="me". You might want to do this in a link element in the head of your HTML.

<link rel="me" href="mailto:jeremy@adactio.com" />
<link rel="me" href="sms:+447792069292" />

IndieAuth

As soon as you’ve done that, you can make use of IndieAuth. This is a technique that demonstrates a recurring theme in indie web building blocks: take advantage of the strengths of existing third-party sites. In this case, IndieAuth piggybacks on top of the fact that many third-party sites have some kind of authentication mechanism, usually through OAuth. The fact that you’re “claiming” a profile on a third-party site using rel="me"—and the third-party profile in turn links back to your site—means that we can use all the smart work that went into their authentication flow.

You can see IndieAuth in action by logging into the Indie Web Camp wiki. It’s pretty nifty.

If you’ve used rel="me" to link to a profile on something like Twitter, Github, or Flickr, you can authenticate with their OAuth flow. If you’ve used rel="me" for your email address or phone number, you can authenticate by email or SMS.
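
The heart of the flow is a simple check: does the profile you’ve claimed link back to your site with rel="me"? Here’s a rough sketch of that check in PHP (just an illustration of the idea, not the actual indieauth.com code):

function links_back_to($profileUrl, $myUrl) {
  $html = @file_get_contents($profileUrl);
  if ($html === false) {
    return false;
  }
  $doc = new DOMDocument();
  @$doc->loadHTML($html); // real-world markup is messy; ignore the warnings
  foreach (array('a', 'link') as $tag) {
    foreach ($doc->getElementsByTagName($tag) as $element) {
      $rels = preg_split('/\s+/', $element->getAttribute('rel'));
      if (in_array('me', $rels) && rtrim($element->getAttribute('href'), '/') === rtrim($myUrl, '/')) {
        return true; // the profile links back: the claim checks out
      }
    }
  }
  return false;
}

If that check passes for, say, your Twitter profile, then Twitter’s OAuth flow can be trusted to authenticate you as the owner of your domain.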

h-entry

Next question: are you publishing stuff on your site? If so, mark it up using h-entry. This involves adding a few classes to your existing markup.

<article class="h-entry">
  <div class="e-content">
    <p>Having fun with @aaronpk, helping @border_none attendees mark up their sites with rel="me" links, h-entry classes, and webmention endpoints.</p>
  </div>
  <time class="dt-published" datetime="2014-10-18 08:42:37">8:42am</time>
</article>

Now, the reason for doing this isn’t some theoretical benefit from search engines or browsers; it’s simply to make the content you’re publishing machine-parsable (which will come in handy in the next steps).
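
To see what machine-parsability buys you, feed that markup to a microformats parser. Here’s a quick illustration using the open-source php-mf2 library (installed with Composer as mf2/mf2):

require 'vendor/autoload.php';

$html = '<article class="h-entry">
  <div class="e-content">
    <p>Having fun at border:none.</p>
  </div>
  <time class="dt-published" datetime="2014-10-18 08:42:37">8:42am</time>
</article>';

// The markup comes back as structured data, ready to reuse.
$parsed = Mf2\parse($html);
$entry = $parsed['items'][0];

echo $entry['properties']['published'][0]; // e.g. "2014-10-18 08:42:37"
echo $entry['properties']['content'][0]['value']; // the text of the post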

Aaron published a note on his website, inviting everyone to leave a comment. The trick, though, is that to leave a comment on Aaron’s site, you need to publish it on your own site.

Webmention

Here’s my response to Aaron’s post. As well as being published on my own site, it also shows up on Aaron’s. That’s because I sent a webmention to Aaron.

Webmention is basically a reimplementation of pingback, but without any of the XML silliness; it’s just a POST request with two values: the URL of the post being responded to (the “target”) and the URL of the response (the “source”).
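
To give you an idea of just how lightweight that is, here’s a sketch of sending one in PHP (the endpoint and URLs here are placeholders; in practice you’d discover the endpoint from the target page):

// One POST request with two form-encoded values. That's the whole protocol.
$ch = curl_init('https://example.com/webmention'); // placeholder endpoint
curl_setopt_array($ch, array(
  CURLOPT_POST => true,
  CURLOPT_RETURNTRANSFER => true,
  CURLOPT_POSTFIELDS => http_build_query(array(
    'source' => 'https://adactio.com/journal/my-response', // my reply
    'target' => 'https://example.com/notes/original-post'  // the post I'm replying to
  ))
));
$result = curl_exec($ch);
curl_close($ch);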

My site doesn’t automatically send webmentions to any links I reference in my posts—I should really fix that—but that’s okay; Aaron—like me—has a form under each of his posts where you can paste in the URL of your response.

This is where those h-entry classes come in. If your post is marked up with h-entry, then it can be parsed to figure out which bit of your post is the body, which bit is the author, and so on. If your response isn’t marked up as h-entry, Aaron just displays a link back to your post. But if it is marked up in h-entry, Aaron can show the whole post on his site.

Okay. By this point, we’ve already come really far, and all people had to do was edit their HTML to add some rel attributes and class values.

For true site-to-site communication, you’ll need to have a webmention endpoint. That’s a bit trickier to add to your own site; it requires some programming. Here’s my minimum viable webmention that I wrote in PHP. But there are plenty of existing implementations you can use, like this webmention plug-in for WordPress.
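
To give you a flavour of what’s involved, a bare-bones endpoint could look something like this (a simplified sketch of the general shape, not the actual code I linked to):

// Take the source and target, sanity-check both, confirm the source
// really does link to the target, then store the mention.
$source = filter_var(isset($_POST['source']) ? $_POST['source'] : '', FILTER_VALIDATE_URL);
$target = filter_var(isset($_POST['target']) ? $_POST['target'] : '', FILTER_VALIDATE_URL);

if (!$source || !$target) {
  http_response_code(400);
  exit('Webmentions need a valid source and a valid target URL.');
}

if (parse_url($target, PHP_URL_HOST) !== 'adactio.com') {
  http_response_code(400);
  exit('The target is not on this site.');
}

$html = @file_get_contents($source);
if ($html === false || strpos($html, $target) === false) {
  http_response_code(400);
  exit('The source does not appear to link to the target.');
}

// Store the mention for display under the target post
// (the storage details are left out of this sketch).
http_response_code(202);
echo 'Webmention accepted.';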

Or you could request an account on webmention.io, which is basically webmention-as-a-service. Handy!

Once you have a webmention endpoint, you can point to it from the head of your HTML using a link element:

<link rel="mention" href="https://adactio.com/webmention" />

Now you can receive responses to your posts.

Here’s the really cool bit: if you sign up for Bridgy, you can start receiving responses from third-party sites like Twitter, Facebook, etc. Bridgy just needs to know who you are on those networks, looks at your website, and figures everything out from there. And it automatically turns the responses from those networks into h-entry. It feels like magic!

Here are responses from Twitter to my posts, as captured by Bridgy.

POSSE

That was mostly what Aaron and I covered in our one-day introduction to the indie web. I think that’s pretty good going.

The next step would be implementing the idea of POSSE: Publish on your Own Site, Syndicate Elsewhere.

You could do this using something as simple as If This, Then That: every time something crops up in your RSS feed, post it to Twitter, or Facebook, or both. If you don’t have an RSS feed, don’t worry: because you’re already marking up your HTML with h-entry, it can easily be converted to RSS.

I’m doing my own POSSEing to Twitter, which I’ve written about already. Since then, I’ve also started publishing photos here, which I sometimes POSSE to Twitter, and always POSSE to Flickr. Here’s my code for posting to Flickr.

I’d really like to POSSE my photos to Instagram, but that’s impossible. Instagram is a data roach-motel. The API provides no method for posting photos. The only way to post a picture to Instagram is with the Instagram app.

My only option is to do the opposite of POSSEing, which is PESOS: Publish Elsewhere, and Syndicate to your Own Site. To do that, I need to have an endpoint on my own site that can receive posts.

Micropub

Working side by side with Aaron at border:none inspired me to finally implement one more indie web building block I needed: micropub.

Having a micropub endpoint here on my own site means that I can publish from third-party sites …or even from native apps. The reason why I didn’t have one already was that I thought it would be really complicated to implement. But it turns out that, once again, the trick is to let other services do all the hard work.

First of all, I need to have something to manage authentication. Well, I already have that with IndieAuth. I got that for free just by adding rel="me" to my links to other profiles. So now I can declare indieauth.com as my authorization endpoint in the head of my HTML:

<link rel="authorization_endpoint" href="https://indieauth.com/auth" />

Now I need some way of creating and issuing authentication tokens. See what I mean about it sounding like hard work? Creating a token endpoint seems complicated.

But once again, someone else has done the hard work so I don’t have to. Tokens-as-a-service:

<link rel="token_endpoint" href="https://tokens.indieauth.com/token" />

The last piece of the puzzle is to point to my own micropub endpoint:

<link rel="micropub" href="https://adactio.com/micropub" />

That URL is where I will receive posts from third-party sites and apps (sent through a POST request with an access token in the header). It’s up to me to verify that the post is authenticated properly with a valid access token. Here’s the PHP code I’m using.

It wasn’t nearly as complicated as I thought it would be. By the time a post and a token hit the micropub endpoint, most of the hard work has already been done (authenticating, issuing a token, and so on). But there are still a few steps that I have to do, sketched in code after this list:

  1. Make a GET request (I’m using cURL) back to the token endpoint I specified, sending along the access token I’ve been given in a header, to verify the token.
  2. Check that the “me” value I get back corresponds to my identity, which is https://adactio.com.
  3. Take the h-entry values that have been sent as POST variables and create a new post on my site.
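
Put together, those three steps could look something like this (a sketch of the general shape, not my exact code; the final Location URL is a placeholder):

// 1. Verify the access token with the token endpoint.
$ch = curl_init('https://tokens.indieauth.com/token');
curl_setopt_array($ch, array(
  CURLOPT_RETURNTRANSFER => true,
  CURLOPT_HTTPHEADER => array(
    'Authorization: ' . (isset($_SERVER['HTTP_AUTHORIZATION']) ? $_SERVER['HTTP_AUTHORIZATION'] : '')
  )
));
parse_str(curl_exec($ch), $token); // form-encoded: me, client_id, scope
curl_close($ch);

// 2. Check that the token was issued for my identity.
if (rtrim(isset($token['me']) ? $token['me'] : '', '/') !== 'https://adactio.com') {
  http_response_code(403);
  exit('That token is not valid for this site.');
}

// 3. Create a new post from the h-entry values sent as POST variables.
$content = isset($_POST['content']) ? $_POST['content'] : '';
// ...save $content as a new post, then point to it:
http_response_code(201);
header('Location: https://adactio.com/new-post'); // placeholder URL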

I tested my micropub endpoint using Quill, a nice little posting interface that Aaron built. It comes with great documentation, including a guide to creating a micropub endpoint.

It worked.

Here’s another example: Ben Roberts has a posting interface that publishes to micropub, which means I can authenticate myself and post to my site from his interface.

Finally, there’s OwnYourGram, a service that monitors your Instagram account and posts to your micropub endpoint whenever there’s a new photo.

That worked too. And I can also hook up Bridgy to my Instagram account so that any activity on my Instagram photos also gets sent to my webmention endpoint.

Indie Web Camp

Each one of these building blocks you implement unlocks more and more powerful tools.

But it’s worth remembering that these are just implementation details. What really matters is that you’re publishing your stuff on your website. If you want to use different formats and protocols to do that, that’s absolutely fine. The whole point is that this is the independent web—you can do whatever you please on your own website.

Still, if you decide to start using these tools and technologies, you’ll get the benefit of all the other people who are working on this stuff. If you have the chance to attend an Indie Web Camp, you should definitely take it: I’m always amazed by how much is accomplished in one weekend.

Some people have started referring to the indie web movement. I understand where they’re coming from; it certainly looks like a “movement” from the outside, and if you attend an Indie Web Camp, there’s a great spirit of sharing. But my underlying motivations are entirely selfish. In the same way that I don’t really care about particular formats or protocols, I don’t really care about being part of any kind of “movement.” I care about my website.

As it happens, my selfish motivations align perfectly with the principles of an indie web.

Indie Web Camp UK 2014

Indie Web Camp UK took place here in Brighton right after this year’s dConstruct. I was organising dConstruct. I was also organising Indie Web Camp. This was a problem.

It was a problem because I’m no good at multi-tasking, and I focused all my energy on dConstruct (it more or less dominated my time for the past few months). That meant that something had to give and that something was the organising of Indie Web Camp.

The event itself went perfectly smoothly. All the basics were there: a great venue, a solid internet connection, and a plan of action. But because I was so focused on dConstruct, I didn’t put any time into trying to get the word out about Indie Web Camp. Worse, I didn’t put any time into making sure that a diverse range of people knew about the event.

So in the end, Indie Web Camp UK 2014 was quite a homogeneous gathering. That’s a real shame, and it’s my fault. My excuse is that I was busy with all things dConstruct, but that’s just that: an excuse. On the plus side, the effort I put into making dConstruct a diverse event paid off, but I’ll know better in future than to try to organise two back-to-back events. I need to learn to delegate and ask for help.

But I don’t want to cast Indie Web Camp in a totally negative light (I just want to acknowledge how it could have been better). It was actually pretty great. As with previous events, it was remarkably productive. The format of one day of talks followed by one day of hacking is spot on.

Indie Web Camp UK attendees

I hadn’t planned to originally, but I spent the second day getting adactio.com switched over to https. Just a couple of weeks ago I wrote:

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

Well, I’m afraid that potential pain level has not dropped. In fact, I can confirm that getting TLS working is a massive pain in the behind. But on the first day of Indie Web Camp, Tim Retout led a session on security and offered up his expertise for day two. I took full advantage of his generous offer.

With Tim’s help, I was able to get adactio.com all set. If I hadn’t had his help, it probably would’ve taken me days …or I simply would’ve given up. I took plenty of notes so I could document the process. I’ll write it up soon, but alas, it will only be useful to people with the same kind of hosting setup as I have.

By the end of Indie Web Camp, thanks to Tim’s patient assistance, quite a few people had switched on TLS for their sites. The https page on the Indie Web Camp wiki is turning into quite a handy resource.

There was lots of progress in other areas too, particularly with webactions. Some of that progress relates to what I’ve been saying about Web Components. More on that later…

Throw in some Transmat action, location-based hacks, and communication tools; all in all, a very productive weekend.

This week in Brighton

This is my favourite week of the year. It’s the week when Brighton bursts into life as its month-long Digital Festival kicks off.

Already this week, we’ve had the Dots conference and three days of Reasons To Be Creative, where designers and makers show their work. And this afternoon Lighthouse are running their annual Improving Reality event.

But the best is yet to come. Tomorrow’s the big day: dConstruct 2014. I’ve been preparing for this day for so long now, it’s going to be very weird when it’s over. I must remember to sit back, relax and enjoy the day. I remember how fast the day whizzed by last year. I suspect that tomorrow’s proceedings might display equal levels of time dilation—I’m excited to see every single talk.

Even when dConstruct is done, the Brighton festivities will continue. I’ll be at Indie Web Camp here at 68 Middle Street on Saturday and Sunday. Also on Saturday, there’s the brilliant Maker Faire, and when the sun goes down, Brighton will be treated to Seb’s latest project, which features frickin’ lasers!

This is my favourite week of the year.

Indie Web Camp Brighton

If you’re coming to this year’s dConstruct here in Brighton on September 5th—and you really, really should—then consider sticking around for the weekend.

Not only will there be the fantastic annual Maker Faire on Saturday, September 6th, but there’s also an Indie Web Camp happening at 68 Middle Street on the Saturday and Sunday.

We had an Indie Web Camp right after last year’s dConstruct and it was really good fun …and very productive to boot. The format works really well: one day of discussions and brainstorming, and one day of hacking, designing, and building.

So if you find yourself agreeing with the design principles of the Indie Web, be sure to come along. Add yourself to the list of attendees.

If you’re coming from outside Brighton for the dConstruct/Indie Web weekend, take a look at the dConstruct page on AirBnB for some accommodation ideas at very reasonable rates.

Speaking of reasonable rates… just between you and me, I’ve created a discount code for any Indie Web Campers who are coming to dConstruct. Use the discount code “indieweb” to shave £25 off the ticket price (bringing it down to £125 + VAT). And trust me, you do not want to miss this year’s dConstruct.

It’s just a little over six weeks until the best weekend in Brighton. I hope I’ll see you here then.