Tuesday, November 18th, 2014

Webiness

John Gruber quite rightly skewers a paywall-protected “sky is falling” piece in the Wall Street Journal called The Web Is Dying; Apps Are Killing It, writing that native apps are part of the web:

They’re just superior clients to open Internet services.

This is something I wrote about earlier this year:

There’s a whole category of native apps that could just as easily be described as “artisanal web browsers” (and if someone wants to write a browser extension that replaces every mention of “native app” with “artisanal web browser” that would be just peachy).

Instagram’s native app is a web browser.

Facebook’s native app is a web browser.

Twitter’s native app is a web browser.

In that same piece, I try to define exactly what the web is:

Well, the unsexy definition I’ve used in the past is that the web consists of files (e.g. HTML, CSS, JavaScript), accessible at URLs, delivered over HTTP.

John also gives a definition of what the web is:

There are two big four-letter “H” acronyms that powered the web from the beginning: HTML (client), and HTTP (networking protocol). Native apps are just an alternative to HTML running in a web browser (and many native apps still use HTML web views embedded within the apps themselves to render parts of their interface). Almost all native apps use HTTP/S for networking, though.

Notice the difference? Whereas John talks about two things that define the web (HTTP/S and HTML), I talk about three: HTTP(S), HTML, and URLs:

But to be honest, I don’t think that the Hypertext Transfer Protocol is the important part of the web; it’s the URLs that really matter. It’s the addressability of the files that’s the killer app of the web in my opinion.

URLs are what give the web its reach, and that’s what’s still missing from native apps.

But John’s fundamental point, that native apps and the web are not opposed? I completely agree with that. They are complementary. Irakli Nadareishvili wrote about this false dichotomy recently in a post called Responsive Web Design or Native Mobile Apps?:

Native mobile applications are not going anywhere and the future of all websites is to be responsive. These two assertions are not mutually exclusive, they are complementary – don’t create apps when what you actually need is a website; but also don’t pretend webapps can completely replace native applications, because they can’t.

It’s also worth remembering that even if you’re using a native app—like, say, Facebook or Twitter—you’re still going to spend a lot of time following links and reading stuff that’s rendered in the app, but that lives out on the world wide web. And the reason why those apps can access those resources is because those resources have URLs.

URLs are not an implementation detail. The URI is the thing.

Sunday, November 16th, 2014

Home

There’s nothing quite so tedious as blogging about blogging, but I came across a few heart-warming thoughts recently that it would be remiss of me to let go unremarked, so please indulge me for a moment as I wallow in some meta-blogging.

Marco Arment talks about the trend that many others have noticed, of personal publishing dying out in favour of tweeting:

Too much of my writing in the last few years has gone exclusively into Twitter. I need to find a better balance.

As he rightly points out:

Twitter is a complementary medium to blogging, but it’s not a replacement.

Andy noticed a similar trend in his own writing:

Twitter and Waxy Links cannibalized all the smaller posts, and as my reach grew, I started reserving blogging for more “serious” stuff — mostly longer-form research and investigative writing.

Well, fuck that.

Amber Hewitt also talks about reviving the personal blog:

Someone made an analogy that describes social networks very well. Facebook is your neighborhood, Twitter is your local bar, and your blog is your home. (I guess Instagram is the cafe? “Look what I’m eating!”)

This made me realize I’m neglecting my home. My posts and photos are spread out on different networks and there is no centralized hub.

That reminds me of what Frank said about his site:

In light of the noisy, fragmented internet, I want a unified place for myself—the internet version of a quiet, cluttered cottage in the country.

The wonderful Gina Trapani—who has been publishing on her own site for years now—follows Andy’s lead with some guidelines for short-form blogging:

  • If it’s a paragraph, it’s a post.
  • Negotiate a comfort zone.
  • Traffic is irrelevant.
  • Simplify, simplify.
  • Ask for trusted collaborator feedback.
  • Have fun.

Good advice.

Monday, November 3rd, 2014

Just what is it that you want to do?

The supersmart Scott Jenson just gave a talk at The Web Is in Cardiff, which was, by all accounts, excellent. I wish I could have seen it, but I’m currently chilling out in Florida and I haven’t mastered the art of bilocation.

Last week, Scott wrote a blog post called My Issue with Progressive Enhancement (he wrote it on Google+, which is why you might not have seen it).

In it, he takes to task the idea that—through progressive enhancement—you should be able to offer all functionality to all browsers, thereby forgoing the use of newer technologies that aren’t universally supported.

If that were what progressive enhancement meant, I’d be with him all the way. But progressive enhancement is not about offering all functionality; progressive enhancement is about making sure that your core functionality is available to everyone. Everything after that is, well, an enhancement (the clue is in the name).

The trick to doing this well is figuring out what is core functionality, and what is an enhancement. There are no hard and fast rules.

Sometimes it’s really obvious. Web fonts? They’re an enhancement. Rounded corners? An enhancement. Gradients? An enhancement. Actually, come to think of it, all of your CSS is an enhancement. Your content, on the other hand, is not. That should be available to everyone. And in the case of task-based web thangs, that means the fundamental tasks should be available to everyone …but you can still layer more tasks on top.

If you’re building an e-commerce site, then being able to add items to a shopping cart and being able to check out are your core tasks. Once you’ve got that working with good ol’ HTML form elements, then you can go crazy with your enhancements: animating, transitioning, swiping, dragging, dropping …the sky’s the limit.
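
For instance, the core add-to-cart task can be nothing more than a plain form (the names and URL here are made up for illustration):

<!-- Core add-to-cart: plain HTML, no JavaScript required -->
<form action="/cart" method="post">
  <input type="hidden" name="product" value="12345">
  <label for="quantity">Quantity</label>
  <input type="number" id="quantity" name="quantity" value="1" min="1">
  <button type="submit">Add to cart</button>
</form>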

This is exactly what Orde Saunders describes:

I’m not suggesting that you try and replicate all your JavaScript functionality when it’s disabled, above all that’s just not practical. What you should be aiming for is being able to complete the basics - for example adding a product to a shopping cart and then checking out. This is necessarily going to be clunky as judged by current standards and I suggest you don’t spend much time on optimising this process.

Scott asked about building a camera app with progressive enhancement:

Here again, the real question to ask is “what is the core functionality?” Building a camera app is a means to an end, not the end itself. You need to ask what the end goal is. Perhaps it’s “enable people to share photos with their friends.” Going back to good ol’ HTML, you can accomplish that task with:

<input type="file" accept="image/*">

Now that you’ve got that out of the way, you can spend the majority of your time making the best damn camera app you can, using all the latest browser technologies. (Perhaps WebRTC? Maybe use a canvas element to display the captured image data and apply CSS filters on top?)
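
To make that layering concrete, here’s a rough sketch of what I mean (the element IDs are invented, and the checks are deliberately conservative):

<!-- Core: the file input works everywhere; everything below is an enhancement -->
<input type="file" accept="image/*" id="photo">
<canvas id="preview" hidden></canvas>
<script>
// Enhancement: preview the chosen photo on a canvas and apply a CSS filter
var input = document.getElementById('photo');
var canvas = document.getElementById('preview');
if (input && canvas && canvas.getContext && window.FileReader) {
  input.addEventListener('change', function () {
    var file = input.files[0];
    if (!file) { return; }
    var reader = new FileReader();
    reader.onload = function (event) {
      var img = new Image();
      img.onload = function () {
        canvas.width = img.width;
        canvas.height = img.height;
        canvas.getContext('2d').drawImage(img, 0, 0);
        canvas.hidden = false;
        canvas.style.filter = 'sepia(0.5)'; // a CSS filter on top (prefixes may be needed)
      };
      img.src = event.target.result;
    };
    reader.readAsDataURL(file);
  });
}
</script>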

Scott says:

My point is that not everything devolves to content. Sometimes the functionality is the point.

I agree wholeheartedly. In fact, I would say that even in the case of “content” sites, functionality is still the point—the functionality would be reading/hearing/accessing content. But I think that Scott is misunderstanding progressive enhancement if he thinks it means providing all the functionality that one can possibly provide.

Mat recently pointed out that there are plenty of enhancements on the Boston Globe site that require JavaScript, but the core functionality is available to everyone.

Scott again:

What I’m chaffing at is the belief that when a page is offering specific functionality, let’s say a camera app or a chat app, what does it mean to progressively enhance it?

Again, a realtime chat app is a means to an end. What is it enabling? The ability for people to talk to each other over the web? Okay, we can do that using good ol’ HTML—text and form elements—with full page refreshes. That won’t be realtime. That’s okay. The realtime part is an enhancement. Use Web Sockets and WebRTC (in the browsers that support them) to provide the realtime experience. But everyone gets the core functionality.
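
A quick sketch of that split (the URLs and names are placeholders, not a real chat service):

<!-- Core: send a message with a plain form and a full page refresh -->
<form action="/messages" method="post">
  <label for="message">Message</label>
  <input type="text" id="message" name="message">
  <button type="submit">Send</button>
</form>
<script>
// Enhancement: realtime delivery in browsers that support Web Sockets
if ('WebSocket' in window) {
  var socket = new WebSocket('wss://example.com/chat');
  socket.onmessage = function (event) {
    // append event.data to the conversation
  };
  document.querySelector('form').addEventListener('submit', function (event) {
    event.preventDefault(); // skip the full page refresh
    socket.send(document.getElementById('message').value);
  });
}
</script>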

Like I said, the trick is figuring out what’s core functionality and what’s an enhancement.

Ethan provides another example. Let’s say you’re building a browser-based rich text editor that uses JavaScript to do all sorts of formatting on the fly. The core functionality is not the formatting on the fly; the core functionality is being able to edit text.
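
A sketch of what that might look like, with everything beyond the humble textarea treated as an enhancement (the details are invented for illustration):

<!-- Core: everyone can edit text in a plain textarea -->
<form action="/save" method="post">
  <textarea id="content" name="content"></textarea>
  <button type="submit">Save</button>
</form>
<script>
// Enhancement: swap in a rich editing surface where contenteditable is supported
var textarea = document.getElementById('content');
if (textarea && textarea.form && 'contentEditable' in document.body) {
  var editor = document.createElement('div');
  editor.contentEditable = true;
  editor.textContent = textarea.value;
  textarea.hidden = true;
  textarea.parentNode.insertBefore(editor, textarea);
  // keep the form submission working by copying the edited content back
  textarea.form.addEventListener('submit', function () {
    textarea.value = editor.innerHTML;
  });
}
</script>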

If progressive enhancement truly meant making all functionality available to everyone, then it would be unworkable. I think that’s a common misconception around progressive enhancement; there’s this idea that using progressive enhancement means that you’re going to spend all your time making stuff work in older browsers. In fact, it’s the exact opposite. As long as you spend a little bit of time at the start making sure that the core functionality works with good ol’ fashioned HTML, then you can spend most of your time trying out the latest and greatest browser technologies.

As Orde put it:

What you are going to be spending the majority of your time and effort on is the enhanced JavaScript version as that is how the majority of your customers will be experiencing your site.

The other Scott—Scott Jehl—wrote a while back:

For us, building with Progressive Enhancement moves almost all of our development time and costs to newer browsers, not older ones.

Progressive Enhancement frees us to focus on the costs of building features for modern browsers, without worrying much about leaving anyone out. With a strongly qualified codebase, older browser support comes nearly for free.

Approaching browser support this way requires a different way of thinking. For everything you’re building, you need to ask “is this core functionality, or is it an enhancement?” and build accordingly. It takes a bit of getting used to, but it gets easier the more you do it (until, after a while, it becomes second nature).

But if you’re thinking about progressive enhancement as “devolving” down—as Scott Jenson describes in his post—then I think you’re on the wrong track. Instead it’s about taking care of the core functionality quickly and then spending your time “enhancing” up.

Scott asks:

Shouldn’t we be allowed to experiment? Isn’t it reasonable to build things that push the envelope?

Absolutely! And the best and safest way to do that is to make sure that you’re providing your core functionality for everyone. Once you do that, you can go nuts with the latest and greatest experimental envelope-pushing technologies, secure in the knowledge that you don’t even need to worry about the fact that they don’t work in older browsers. Geolocation! Offline storage! Device APIs! Anything you can think of, you can use as a powerful enhancement on top of your core tasks.

Once you realise this, it’s immensely liberating to use progressive enhancement. You can have the best of both worlds: universal access to core functionality, combined with all the latest cutting-edge technology too.

Thursday, October 23rd, 2014

Be progressive

Aaron wrote a great post a little while back called A Fundamental Disconnect. In it, he points to a worldview amongst many modern web developers, who see JavaScript as a universally-available technology in web browsers. They are, in effect, viewing a browser’s JavaScript engine as a runtime environment, and treating web development no differently from any other kind of software development.

The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.

Treating JavaScript support in “the browser” as a known quantity is as much of a consensual hallucination as deciding that all viewports are 960 pixels wide. Even that phrasing—“the browser”—shows a framing that’s at odds with the reality of developing for the web; we don’t have to think about “the browser”, we have to think about browsers:

Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.

While we might like to think that browsers have all reached a certain level of equilibrium, as Aaron puts it “the Web is messy”:

And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.

Please don’t think that either Aaron or I are saying that you shouldn’t use JavaScript. Far from it! It’s simply a matter of how you wield the power of JavaScript. If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold. But if you start by building on a classic server/client model, and then enhance with JavaScript, you can have your cake and eat it too. Modern browsers get a smooth, rich experience. Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.

Aaron makes the case that, while we cannot control which browsers people will use, we can control the server environment.

Stuart takes issue with that assertion in a post called Fundamentally Disconnected. In it, he points out that the server isn’t quite the controlled environment that Aaron claims:

Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue.

It’s true enough that the server isn’t some rock-solid never-changing environment. Anyone who’s ever had to install patches or update programming languages knows this. But at least it’s one single environment …whereas the web has an overwhelming multitude of environments; one for every browser/OS/device combination.

Stuart finishes on a stirring note:

The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed.

However he wraps up by saying that…

…the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

In a post called Missed Connections, Aaron pushes back against that last point:

The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.

While JavaScript may technically be available and consistently-implemented across most devices used to access our sites nowadays, we do not control how, when, or even if that JavaScript is ultimately executed.

Stuart responds in a post called Reconnecting (and, by the way, how great is it to see this kind of thoughtful blog-to-blog discussion going on?).

I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work.

But here’s the problem with progressively enhancing from server functionality to a rich client:

A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment.

Good point.

Now, at this juncture, I could point out that—by using progressive enhancement—you can still have the best of both worlds. Stuart has anticipated that:

It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

Ah, there’s the rub!

When I’ve extolled the virtues of progressive enhancement in the past, the pushback I most often receive is on this point. Surely it’s wasteful to build something that works on the server and then reimplement much of it on the client?

Personally, I try not to completely reinvent all the business logic that I’ve already figured out on the server, and then rewrite it all in JavaScript. I prefer to use JavaScript—and specifically Ajax—as a dumb waiter, shuffling data back and forth between the client and server, where the real complexity lies.
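
In practice, the client-side half of that dumb waiter rarely needs to be more than something like this (the selector and the #results element are placeholders):

// The "dumb waiter": the server keeps the business logic, JavaScript just
// ferries the form data up and the rendered result back down
var form = document.querySelector('form');
if (form && window.FormData) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    var request = new XMLHttpRequest();
    request.open('POST', form.action);
    request.onload = function () {
      // the server did the thinking; slot its response into the page
      document.querySelector('#results').innerHTML = request.responseText;
    };
    request.send(new FormData(form));
  });
}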

I also think that building in this way will take longer …at first. But then on the next project, it takes less time. And on the project after that, it takes less time again. From that perspective, it’s similar to switching from tables for layout to using CSS, or switching from building fixed-width sites to responsive design: the initial learning curve is steep, but then it gets easier over time, until it simply becomes normal.

But fundamentally, Stuart is right. Developers don’t like to violate the DRY principle: Don’t Repeat Yourself. Writing code for the server environment, and then writing very similar code for the browser—I mean browsers—is a bad code smell.

Here’s the harsh truth: building websites with progressive enhancement is not convenient.

Building a client-side web thang that requires JavaScript to work is convenient, especially if you’re using a framework like Angular or Ember. In fact, that’s the main selling point of those frameworks: developer convenience.

The trade-off is that to get that level of developer convenience, you have to sacrifice the universal reach that the web provides, and limit your audience to the browsers that can run a pre-determined level of JavaScript. Many developers are quite willing to make that trade-off.

Developer convenience is a very powerful and important force. I wish that progressive enhancement could provide the same level of developer convenience offered by Angular and Ember, but right now, it doesn’t. Instead, its benefits are focused on the end user, often at the expense of the developer.

Personally, I’m willing to take that hit. I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time. But I absolutely understand the mindset of developers who choose otherwise.

But perhaps there’s a way to cut this Gordian knot. What if you didn’t need to write your code twice? What if you could write code for the server and then run the very same code on the client?

This is the promise of isomorphic JavaScript. It’s a terrible name for a great idea.

For me, this is the most exciting aspect of Node.js:

With Node.js, a fast, stable server-side JavaScript runtime, we can now make this dream a reality. By creating the appropriate abstractions, we can write our application logic such that it runs on both the server and the client — the definition of isomorphic JavaScript.

Some big players are looking into this idea. It’s the thinking behind AirBnB’s Rendr.

Interestingly, the reason why many large sites are investigating this approach isn’t about universal access—quite often they have separate siloed sites for different device classes. Instead it’s about performance. The problem with having all of your functionality wrapped up in JavaScript on the client is that, until all of that JavaScript has loaded, the user gets absolutely nothing. Compare that to rendering an HTML document sent from the server, and the perceived performance difference is very noticeable.

Here’s the ideal situation:

  1. A browser requests a URL.
  2. The server sends HTML, which renders quickly, along with some mustard-cutting JavaScript.
  3. If the browser doesn’t cut the mustard, or JavaScript fails, fall back to full page refreshes.
  4. If the browser does cut the mustard, keep all the interaction in the client, just like a single page app.

With Node.js on the server, and JavaScript in the client, steps 3 and 4 could theoretically use the same code.
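
The mustard-cutting in step two could be as simple as this (the exact test and the script URL are placeholders; test for whichever features your enhancements actually rely on):

// If the browser cuts the mustard, load the client-side app and take over;
// if not, do nothing and let links and forms carry on with full page refreshes
if ('querySelector' in document && 'addEventListener' in window && 'pushState' in window.history) {
  var script = document.createElement('script');
  script.src = '/js/app.js'; // placeholder bundle
  document.head.appendChild(script);
}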

So why aren’t we seeing more of these holy-grail apps that achieve progressive enhancement without code duplication?

Well, partly it’s back to that question of controlling the server environment.

This is something that Nicholas Zakas tackled a year ago when he wrote about Node.js and the new web front-end. He proposes a third layer that sits between the business logic and the rendered output. By applying the idea of isomorphic JavaScript, this interface layer could be run on the server (as Node.js) or on the client (as JavaScript), while still allowing you to have the rest of your server environment running whatever programming language works for you.
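
The general idea (my own illustration here, not Zakas’s actual architecture) is a rendering module that doesn’t care where it runs:

// render.js: one template function, usable from Node.js on the server
// and from plain JavaScript in browsers
function renderComment(comment) {
  return '<p class="comment">' + comment.text + '</p>';
}
if (typeof module !== 'undefined' && module.exports) {
  module.exports = renderComment; // the server-side half
} else {
  window.renderComment = renderComment; // the client-side half
}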

It’s still early days for this kind of thinking, and there are lots of stumbling blocks—trying to write JavaScript that can be executed on both the server and the client isn’t so easy. But I’m pretty excited about where this could lead. I love the idea of building in a way that provides the performance and universal access of progressive enhancement, while also providing the developer convenience of JavaScript frameworks.

In the meantime, building with progressive enhancement may have to involve a certain level of inconvenience and duplication of effort. It’s a price I’m willing to pay, but I wish I didn’t have to. And I totally understand that others aren’t willing to pay that price.

But while the mood might currently seem to be in favour of using monolithic JavaScript frameworks to build client-side apps that rely on JavaScript in browsers, I think that the tide might change if we started to see poster children for progressive enhancement.

Three years ago, when I was trying to convince clients and fellow developers that responsive design was the way to go, it was a hard sell. It reminded me of trying to sell the benefits of using web standards instead of using tables for layout. Then, just as Doug’s redesign of Wired and Mike’s redesign of ESPN helped sell the idea of CSS for layout, the Filament Group’s work on the Boston Globe made it a lot easier to sell the idea of responsive design. Then Paravel designed a responsive Microsoft homepage and the floodgates opened.

Now …who wants to do the same thing for progressive enhancement?

Wednesday, October 22nd, 2014

A question of markup

Hi,

I’m really sorry it’s taken me so long to write back to you (over a month!)—I’m really crap at email.

I’m writing to you hoping you can help me make my colleagues take html5 “seriously”. They have read your book, they know it’s the “right” thing to do, but still they write !doctype HTML and then div, div, div, div, div…

Now, if you could provide me with some answers to their “why bother?- questions” would be really appreciated.

I have to be honest, I don’t think it’s worth spending lots of time agonising over what’s the right element to use for marking up a particular piece of content.

That said, I also think it’s lazy to just use divs and spans for everything, if a more appropriate element is available.

Paragraphs, lists, figures …these are all pretty straightforward and require almost no thought.

Deciding whether something is a section or an article, though …that’s another story. It’s not so clear. And I’m not sure it’s worth the effort. Frankly, a div might be just fine in most cases.

For example, can one assume that in the future we will be pulling content directly from websites and therefore it would be smart to tell this technology which content is the article, what are the navigation and so on?

There are some third-party tools (like Readability) that pay attention to the semantics of the elements you use, but the most important use-case is assistive technology. For tools such as screen readers, there’s a massive benefit to marking up headings, lists, and other straightforward elements, as well as some of the newer additions like nav and main.

But for many situations, a div is just fine. If you’re just grouping some stuff together that doesn’t have a thematic relation (for instance, you might be grouping them together to apply a particular style), then div works perfectly well. And if you’re marking up a piece of inline text and you’re not emphasising it, or otherwise differentiating it semantically, then a span is the right element to use.

So for most situations, I don’t think it’s worth overthinking the choice of HTML elements. A moment or two should be enough to decide which element is right. Any longer than that, and you might as well just use a div or span, and move on to other decisions.

But there’s one area where I think it’s worth spending a bit longer to decide on the right element, and that’s with forms.

When you’re marking up forms, it’s really worth making sure that you’re using the right element. Never use a span or a div if you’re just going to add style and behaviour to make it look and act like a button: use an actual button instead (not only is it the correct element to use, it’s going to save you a lot of work).

Likewise, if a piece of text is labelling a form control, don’t just use a span; use the label element. Again, this is not only the most meaningful element, but it will provide plenty of practical benefit, not only to screen readers, but to all browsers.
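
To put that in markup terms (the field names are invented):

<!-- Don't: a div dressed up as a button needs scripting, focus handling,
     and keyboard support bolted back on -->
<div class="button" role="button" tabindex="0">Subscribe</div>

<!-- Do: a label and a button give you all of that for free, in every browser -->
<label for="email">Email address</label>
<input type="email" id="email" name="email">
<button type="submit">Subscribe</button>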

So when it comes to forms, it’s worth sweating the details of the markup. I think it’s also worth making sure that the major chunks of your pages are correctly marked up: navigation, headings. But beyond that, don’t spend too much brain energy deciding questions like “Is this a definition list? Or a regular list?” or perhaps “Is this an aside? Or is it a footer?” Choose something that works well enough (even if that’s a div) and move on.

But if your entire document is nothing but divs and spans then you’re probably going to end up making more work for yourself when it comes to the CSS and JavaScript that you apply.

There’s a bit of a contradiction to what I’m saying here.

On the one hand, I’m saying you should usually choose the most appropriate element available because it will save you work. In other words, it’s the lazy option. Be lazy!

On the other hand, I’m saying that it’s worth taking a little time to choose the most appropriate element instead of always using a div or a span. Don’t be lazy!

I guess what I’m saying is: find a good balance. Don’t agonise over choosing appropriate HTML elements, but don’t just use divs and spans either.

Hope that helps.

Hmmm… you know, I think I might publish this response on my blog.

Cheers,

Jeremy

Tuesday, October 21st, 2014

Indie web building blocks

I was back in Nürnberg last week for the second border:none. Joschi tried an interesting format for this year’s event. The first day was a small conference-like gathering with an interesting mix of speakers, but the second day was much more collaborative, with people working together in “creator units”—part workshop, part round-table discussion.

I teamed up with Aaron to lead the session on all things indie web. It turned out to be a lot of fun. Throughout the day, we introduced the little building blocks, one by one. By the end of the day, it was amazing to see how much progress people made by taking this layered approach of small pieces, loosely stacked.

rel="me"

The first step is: do you have a domain name?

Okay, next step: are you linking from that domain to other profiles of you on the web? Twitter, Instagram, Github, Dribbble, whatever. If so, here’s the first bit of hands-on work: add rel="me" to those links.

<a rel="me" href="https://twitter.com/adactio">Twitter</a>
<a rel="me" href="https://github.com/adactio">Github</a>
<a rel="me" href="https://www.flickr.com/people/adactio">Flickr</a>

If you don’t have any profiles on other sites, you can still mark up your telephone number or email address with rel="me". You might want to do this in a link element in the head of your HTML.

<link rel="me" href="mailto:jeremy@adactio.com" />
<link rel="me" href="sms:+447792069292" />

IndieAuth

As soon as you’ve done that, you can make use of IndieAuth. This is a technique that demonstrates a recurring theme in indie web building blocks: take advantage of the strengths of existing third-party sites. In this case, IndieAuth piggybacks on top of the fact that many third-party sites have some kind of authentication mechanism, usually through OAuth. The fact that you’re “claiming” a profile on a third-party site using rel="me"—and the third-party profile in turn links back to your site—means that we can use all the smart work that went into their authentication flow.

You can see IndieAuth in action by logging into the Indie Web Camp wiki. It’s pretty nifty.

If you’ve used rel="me" to link to a profile on something like Twitter, Github, or Flickr, you can authenticate with their OAuth flow. If you’ve used rel="me" for your email address or phone number, you can authenticate by email or SMS.

h-entry

Next question: are you publishing stuff on your site? If so, mark it up using h-entry. This involves adding a few classes to your existing markup.

<article class="h-entry">
  <div class="e-content">
    <p>Having fun with @aaronpk, helping @border_none attendees mark up their sites with rel="me" links, h-entry classes, and webmention endpoints.</p>
  </div>
  <time class="dt-published" datetime="2014-10-18 08:42:37">8:42am</time>
</article>

Now, the reason for doing this isn’t for some theoretical benefit from search engines, or browsers, but simply to make the content you’re publishing machine-parsable (which will come in handy in the next steps).

Aaron published a note on his website, inviting everyone to leave a comment. The trick, though, is that to leave a comment on Aaron’s site, you need to publish it on your own site.

Webmention

Here’s my response to Aaron’s post. As well as being published on my own site, it also shows up on Aaron’s. That’s because I sent a webmention to Aaron.

Webmention is basically a reimplementation of pingback, but without any of the XML silliness; it’s just a POST request with two values—the URL of the origin post, and the URL of the response.
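
In other words, sending one by hand amounts to this (both URLs are placeholders; source is the URL of your response, target is the URL of the post you’re responding to):

// Sending a webmention: one POST request, two form-encoded values
var request = new XMLHttpRequest();
request.open('POST', 'https://example.com/webmention-endpoint');
request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
request.send(
  'source=' + encodeURIComponent('https://yoursite.example/your-response') +
  '&target=' + encodeURIComponent('https://example.com/the-original-post')
);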

My site doesn’t automatically send webmentions to any links I reference in my posts—I should really fix that—but that’s okay; Aaron—like me—has a form under each of his posts where you can paste in the URL of your response.

This is where those h-entry classes come in. If your post is marked up with h-entry, then it can be parsed to figure out which bit of your post is the body, which bit is the author, and so on. If your response isn’t marked up as h-entry, Aaron just displays a link back to your post. But if it is marked up in h-entry, Aaron can show the whole post on his site.

Okay. By this point, we’ve already come really far, and all people had to do was edit their HTML to add some rel attributes and class values.

For true site-to-site communication, you’ll need to have a webmention endpoint. That’s a bit trickier to add to your own site; it requires some programming. Here’s my minimum viable webmention that I wrote in PHP. But there are plenty of existing implementations you can use, like this webmention plug-in for WordPress.

Or you could request an account on webmention.io, which is basically webmention-as-a-service. Handy!

Once you have a webmention endpoint, you can point to it from the head of your HTML using a link element:

<link rel="mention" href="https://adactio.com/webmention" />

Now you can receive responses to your posts.

Here’s the really cool bit: if you sign up for Bridgy, you can start receiving responses from third-party sites like Twitter, Facebook, etc. Bridgy just needs to know who you are on those networks, looks at your website, and figures everything out from there. And it automatically turns the responses from those networks into h-entry. It feels like magic!

Here are responses from Twitter to my posts, as captured by Bridgy.

POSSE

That was mostly what Aaron and I covered in our one-day introduction to the indie web. I think that’s pretty good going.

The next step would be implementing the idea of POSSE: Publish on your Own Site, Syndicate Elsewhere.

You could do this using something as simple as If This, Then That, e.g. every time something crops up in your RSS feed, post it to Twitter, or Facebook, or both. If you don’t have an RSS feed, don’t worry: because you’re already marking your HTML up in h-entry, it can be converted to RSS easily.

I’m doing my own POSSEing to Twitter, which I’ve written about already. Since then, I’ve also started publishing photos here, which I sometimes POSSE to Twitter, and always POSSE to Flickr. Here’s my code for posting to Flickr.

I’d really like to POSSE my photos to Instagram, but that’s impossible. Instagram is a data roach-motel. The API provides no method for posting photos. The only way to post a picture to Instagram is with the Instagram app.

My only option is to do the opposite of POSSEing, which is PESOS: Publish Elsewhere, and Syndicate to your Own Site. To do that, I need to have an endpoint on my own site that can receive posts.

Micropub

Working side by side with Aaron at border:none inspired me to finally implement one more indie web building block I needed: micropub.

Having a micropub endpoint here on my own site means that I can publish from third-party sites …or even from native apps. The reason why I didn’t have one already was that I thought it would be really complicated to implement. But it turns out that, once again, the trick is to let other services do all the hard work.

First of all, I need to have something to manage authentication. Well, I already have that with IndieAuth. I got that for free just by adding rel="me" to my links to other profiles. So now I can declare indieauth.com as my authorization endpoint in the head of my HTML:

<link rel="authorization_endpoint" href="https://indieauth.com/auth" />

Now I need some way of creating and issuing authentication tokens. See what I mean about it sounding like hard work? Creating a token endpoint seems complicated.

But once again, someone else has done the hard work so I don’t have to. Tokens-as-a-service:

<link rel="token_endpoint" href="https://tokens.indieauth.com/token" />

The last piece of the puzzle is to point to my own micropub endpoint:

<link rel="micropub" href="https://adactio.com/micropub" />

That URL is where I will receive posts from third-party sites and apps (sent through a POST request with an access token in the header). It’s up to me to verify that the post is authenticated properly with a valid access token. Here’s the PHP code I’m using.

It wasn’t nearly as complicated as I thought it would be. By the time a post and a token hits the micropub endpoint, most of the hard work has already been done (authenticating, issuing a token, etc.). But there are still a few steps that I have to do:

  1. Make a GET request (I’m using cURL) back to the token endpoint I specified—sending the access token I’ve been sent in a header—verifying the token.
  2. Check that the “me” value that I get back corresponds to my identity, which is https://adactio.com.
  3. Take the h-entry values that have been sent as POST variables and create a new post on my site.
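
Here’s a rough sketch of those three steps (in Node.js rather than the PHP I actually use, with error handling left out; the token endpoint is the one declared earlier):

var http = require('http');
var https = require('https');
var querystring = require('querystring');

http.createServer(function (request, response) {
  var body = '';
  request.on('data', function (chunk) { body += chunk; });
  request.on('end', function () {
    // 1. send the access token back to the token endpoint for verification
    https.get({
      host: 'tokens.indieauth.com',
      path: '/token',
      headers: { 'Authorization': request.headers.authorization }
    }, function (tokenResponse) {
      var data = '';
      tokenResponse.on('data', function (chunk) { data += chunk; });
      tokenResponse.on('end', function () {
        var token = querystring.parse(data);
        // 2. check that the "me" value corresponds to my identity
        if (token.me && token.me.indexOf('https://adactio.com') === 0) {
          // 3. turn the posted h-entry values into a new post
          var entry = querystring.parse(body);
          // …save entry.content, entry.category, and friends somewhere…
          response.writeHead(201);
        } else {
          response.writeHead(403);
        }
        response.end();
      });
    });
  });
}).listen(8080);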

I tested my micropub endpoint using Quill, a nice little posting interface that Aaron built. It comes with great documentation, including a guide to creating a micropub endpoint.

It worked.

Here’s another example: Ben Roberts has a posting interface that publishes to micropub, which means I can authenticate myself and post to my site from his interface.

Finally, there’s OwnYourGram, a service that monitors your Instagram account and posts to your micropub endpoint whenever there’s a new photo.

That worked too. And I can also hook up Bridgy to my Instagram account so that any activity on my Instagram photos also gets sent to my webmention endpoint.

Indie Web Camp

Each one of these building blocks you implement unlocks greater and greater power.

But it’s worth remembering that these are just implementation details. What really matters is that you’re publishing your stuff on your website. If you want to use different formats and protocols to do that, that’s absolutely fine. The whole point is that this is the independent web—you can do whatever you please on your own website.

Still, if you decide to start using these tools and technologies, you’ll get the benefit of all the other people who are working on this stuff. If you have the chance to attend an Indie Web Camp, you should definitely take it: I’m always amazed by how much is accomplished in one weekend.

Some people have started referring to the indie web movement. I understand where they’re coming from; it certainly looks like a “movement” from the outside, and if you attend an Indie Web Camp, there’s a great spirit of sharing. But my underlying motivations are entirely selfish. In the same way that I don’t really care about particular formats or protocols, I don’t really care about being part of any kind of “movement.” I care about my website.

As it happens, my selfish motivations align perfectly with the principles of an indie web.

Wednesday, October 15th, 2014

Stupid brain

I went to the States to speak at the Artifact conference in Providence (which was great). I extended the trip so that I could make it to Science Hack Day in San Francisco (which was also great). Then I made my way back with a stopover in New York for the fifth and final Brooklyn Beta (which was, you guessed it, great).

The last day of Brooklyn Beta was a big friendly affair with close to a thousand people descending on a hangar-like building in Brooklyn’s naval yard. But it was the preceding two days in the much cosier environs of The Invisible Dog that really captured the spirit of the event.

The talks were great—John Maeda! David Lowery!—but the real reason for going all the way to Brooklyn for this event was to hang out with the people. Old friends, new friends; just nice people all ‘round.

But it felt strange this year, and not just because it was the last time.

At the end of the second day, people were encouraged to spontaneously get up on stage, introduce themselves, and then introduce someone that they think is a great person, working on something interesting (that twist was Sam’s idea).

I didn’t get up on stage. The person I would’ve introduced wasn’t there. I wish she had been. Mind you, she would’ve absolutely hated being called out like that.

Chloe wasn’t there. Of course she wasn’t there. How could she be there?

But there was this stupid, stupid part of my brain that kept expecting to see her walk into the room. That stupid, stupid part of my brain that still expected that I’d spend Brooklyn Beta sitting next to Chloe because, after all, we always ended up sitting together.

(I think it must be the same stupid part of my brain that still expects to see her name pop up in my chat client every morning.)

By the time the third day rolled around in the bigger venue, I thought it wouldn’t be so bad, what with it not being in the same location. But that stupid, stupid part of my brain just wouldn’t give up. Every time I looked around the room and caught a glimpse of someone in the distance who had the same length hair as Chloe, or dressed like her, or just had a bag slung over her hip just so …that stupid, stupid part of my brain would trigger a jolt of recognition, and then I’d have that horrible sinking feeling (literally, like something inside of me was sinking down) when the rational part of my brain corrected the stupid, stupid part.

I think that deep down, there’s a part of me—a stupid, stupid part of me—that still doesn’t quite believe that she’s gone.

Tuesday, October 14th, 2014

Celebrating CSS

Cascading Style Sheets turned 20 years old this week. Happy birthtime, CeeSusS!

Bruce interviewed Håkon about the creation of CSS, and it makes for fascinating reading. If you want to dig even deeper, here’s Håkon’s 1994 thesis comparing competing approaches to style sheets.

CSS gets a tough rap. I remember talking to Douglas Crockford about CSS. I’ll paraphrase his stance as “Kill it with fire!” To be fair, he was mostly talking about the lack of a decent layout system in CSS—something that’s only really getting remedied now.

Most of the flak directed at CSS comes from smart programmers, decrying its lack of power. As a declarative language, it lacks even the most basic features of the simplest procedural language. How are serious programmers supposed to write their serious programmes with such a primitive feature set?

But I think this mindset misses out a crucial facet of understanding CSS: it’s not about us. By us, I mean professional web developers. And when I say it’s not about us, I mean it’s not only about us.

The web is for everyone. That doesn’t just mean that it’s for everyone to use—the web is for everyone to create. That means that the core building blocks of the web need to be learnable by everyone, not just programmers.

I get nervous when I see web browsers gaining powerful features that can only be accessed via a JavaScript API. Geolocation is one example: it doesn’t have any declarative equivalent to its JavaScript implementation. Counter-examples would be video and audio: you can use the JavaScript API to get exactly the behaviour you want, if you’ve got that level of knowledge …or you can use the video and audio elements if you’re okay with letting web browsers handle the complexity of display and playback.

I think that CSS hits a nice sweet spot, balancing learnability and power. I love the fact that every bit of CSS ever written comes down to the same basic pattern:

selector {
    property: value;
}

That’s it!

How amazing is it that one simple pattern can scale to encompass a whole wide world of visual design variety?

Think about the revolution that CSS has gone through in recent years: OOCSS, SMACSS, BEM …these are fundamentally new ways of approaching front-end development, and yet none of these approaches required any changes to be made to the CSS specification. The power and flexibility was already available within its simple selector-property-value pattern.
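
A BEM-style rule, for example, is still nothing more than that same pattern (the class name here is made up):

/* all the "methodology" lives in the naming convention, not in new syntax */
.menu__item--active {
    color: #c00;
}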

Mind you, that modularity was compromised when we got things like named animations; a pattern that breaks out of the encapsulation model of CSS. Variables in CSS also break out of the modularity pattern.

Personally, I don’t think there’s any reason to have variables in the CSS language; it’s enough to have them in pre-processing tools. Variables add enormous value for developers, and no value at all for end users. As long as developers can use variables—and they can, with Sass and LESS—I don’t think we need to further complicate CSS.

Bert Bos wrote an exhaustive list of design principles for web standards. There’s some crossover with Tim Berners-Lee’s principles of design, with ideas such as modularity and robustness. Personally, I think that Bert and Håkon did a pretty damn good job of balancing principles like learnability, extensibility, longevity, interoperability and a host of other factors while still producing something powerful enough to scale for the whole web.

There’s one important phrase I want to highlight in the abstract of the 20 year old CSS proposal:

The proposed scheme provides a simple mapping between HTML elements and presentation hints.

Hints.

Every line of CSS you write is a suggestion. You are not dictating how the HTML should be rendered; you are suggesting how the HTML should be rendered. I find that to be a very liberating and empowering idea.

My only regret is that—twenty years on from the birth of CSS—web browsers are killing the very idea of user stylesheets. Along with “view source”, this feature really drove home the idea that professional web developers are not the only ones who have a say in what gets rendered in web browsers …and that the web truly is for everyone.

Sunday, October 12th, 2014

Habitasteroids

Science Hack Day San Francisco was held in the Github offices last weekend. It was brilliant!

This was the fifth Science Hack Day in San Francisco and the 40th worldwide. That’s truly incredible. I mean, I literally can’t believe it. When I organised the very first Science Hack Day back in 2010, I had no idea how far it would go. But Ariel has been indefatigable in making it a truly global event. She is amazing. And at this year’s San Francisco event, she outdid herself in putting together a fantastic cross-section of scientists, designers, and developers: paleontology, marine biology, geology, astronomy, particle physics, and many, many more disciplines were represented in the truly diverse attendees.

After an inspiring set of lightning talks on the first day, ideas started getting bounced around and the hacking began to take shape. I had a vague idea for—yet another—space-related hack. What clinched it was picking the brains of NASA’s Keri Bean. She’d help me get hold of the dataset I needed for my silly little hack.

So here’s the background…

There are many possibilities for human habitats in space: Stanford tori, O’Neill cylinders, Bernal spheres. Another idea, explored in science fiction, is hollowing out asteroids (Larry Niven’s bubbleworlds). Kim Stanley Robinson explores this idea in depth in his book 2312, where he describes the process of building an asteroid terrarium. The website of the book has a delightful walkthrough of the engineering processes involved. It’s not entirely implausible.

I wanted to make that idea approachable, so I thought about the kinds of people we might want to have living with us on the interior shell of a rotating hollowed-out asteroid. How about the people you follow on Twitter?

The only question that remains then is: which asteroid is the right one for you and your Twitter friends? Keri tracked down the motherlode of asteroid data and I started hacking the simplest of mashups—Twitter meets space rocks.

Here’s the result…

Habitasteroids!

Give it your Twitter username and it will tell you exactly which one of the asteroids in the main belt is right for you (I considered adding an enterprise option that would tell you where you could store your social network in the cloud …the Oort cloud, that is).

By default, your asteroid will have the population density of Earth, which is quite generous. But if you want a more sparsely-populated habitat—say, the population density of Australia—or a more densely-populated world—with something like the population density of Japan—then you will be assigned a larger or smaller asteroid accordingly.

You’ll also be told by how much you should increase or decrease the rotation of the asteroid to get one gee of centrifugal force on the interior. Figuring out the equations for calculating centrifugal force almost broke me, but luckily I had help from a rocket scientist and a particle physicist …I’m not even kidding. And I should point out that the calculations take some liberties—I’m assuming a spherical body, which is quite a stretch, given the lumpy nature of most asteroids.
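
For the record, the core relation is mercifully short once you make that spherical assumption: the centrifugal acceleration at the inner surface is ω²r, so spinning for one gee means ω = √(g/r). A little sketch (not the exact code in the hack):

// Rotation period needed for 1g at the inner surface of a spherical
// shell of radius r metres: a = ω²r, so ω = √(g/r) and T = 2π/ω
function spinForOneGee(radiusInMetres) {
  var g = 9.81; // metres per second squared
  var omega = Math.sqrt(g / radiusInMetres); // radians per second
  return (2 * Math.PI) / omega; // seconds per rotation
}
spinForOneGee(500); // a 500 metre asteroid needs to spin roughly once every 45 seconds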

At 13:37 on the second day, the demos began. Keri and I were first up.

Give Habitasteroids a whirl for yourself. It’s a silly little thing, but I quite like how it turned out.

Speaking of silly things …at some point in the proceedings, Keri put the call out for asteroid data to her fellow space enthusiasts on Twitter. They responded with asteroid-related puns.

They have nice asteroids though: @brianwolven, @lukedones, @paix120, @LGalache, @motorbikematt, @brx0.

Oh, and while Habitasteroids might be a silly little hack, WRANGLER just might work.

WRANGLER: Capture and De-Spin of Asteroids and Space Debris

Saturday, October 11th, 2014

When Jeremy met Jason

I first met Jason at South By Southwest 2011, having admired his work from afar. Here’s a picture of me and Jason.

When I was giving a talk on digital preservation at Beyond Tellerrand in 2013 referencing Jason’s work, I used that picture.

Then Jason gave a talk at a conference where he referenced me referencing him. Here’s a picture of Jason presenting a picture of me presenting a picture of me and Jason.

At this year’s Decentralize Camp, I spoke about digital preservation again. Here’s a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.

Then Jason gave another talk. Here’s a picture of Jason presenting a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.

I referenced that when I spoke in Riga a few months back. Here’s a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.

Finally, Jason gave a barnstorming talk at the final Brooklyn Beta yesterday. As we were both in the same room at the same time, Jason took the opportunity to draw a line under this back-and-forth with his final volley. Here’s a picture of me and Jason in front of a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.

Presentation Inception.