Thursday, October 23rd, 2014

Be progressive

Aaron wrote a great post a little while back called A Fundamental Disconnect. In it, he points to a worldview amongst many modern web developers, who see JavaScript as a universally-available technology in web browsers. They are, in effect, viewing a browser’s JavaScript engine as a runtime environment, and treating web development no differently from any other kind of software development.

The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.

Treating JavaScript support in “the browser” as a known quantity is as much of a consensual hallucination as deciding that all viewports are 960 pixels wide. Even that phrasing—“the browser”—shows a framing that’s at odds with the reality of developing for the web; we don’t have to think about “the browser”, we have to think about browsers:

Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.

While we might like to think that browsers have all reached a certain level of equilibrium, as Aaron puts it “the Web is messy”:

And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.

Please don’t think that either Aaron or I are saying that you shouldn’t use JavaScript. Far from it! It’s simply a matter of how you wield the power of JavaScript. If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold. But if you start by building on a classic server/client model, and then enhance with JavaScript, you can have your cake and eat it too. Modern browsers get a smooth, rich experience. Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.

Aaron makes the case that, while we cannot control which browsers people will use, we can control the server environment.

Stuart takes issue with that assertion in a post called Fundamentally Disconnected. In it, he points out that the server isn’t quite the controlled environment that Aaron claims:

Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue.

It’s true enough that the server isn’t some rock-solid never-changing environment. Anyone who’s ever had to install patches or update programming languages knows this. But at least it’s one single environment …whereas the web has an overwhelming multitude of environments; one for every browser/OS/device combination.

Stuart finishes on a stirring note:

The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed.

However, he wraps up by saying that…

…the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

In a post called Missed Connections, Aaron pushes back against that last point:

The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.

While JavaScript may technically be available and consistently-implemented across most devices used to access our sites nowadays, we do not control how, when, or even if that JavaScript is ultimately executed.

Stuart responds in a post called Reconnecting (and, by the way, how great is it to see this kind of thoughtful blog-to-blog discussion going on?).

I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work.

But here’s the problem with progressively enhancing from server functionality to a rich client:

A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment.

Good point.

Now, at this juncture, I could point out that—by using progressive enhancement—you can still have the best of both worlds. Stuart has anticipated that:

It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

Ah, there’s the rub!

When I’ve extolled the virtues of progressive enhancement in the past, the pushback I most often receive is on this point. Surely it’s wasteful to build something that works on the server and then reimplement much of it on the client?

Personally, I try not to completely reinvent all the business logic that I’ve already figured out on the server, and then rewrite it all in JavaScript. I prefer to use JavaScript—and specifically Ajax—as a dumb waiter, shuffling data back and forth between the client and server, where the real complexity lies.
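As a rough sketch of that “dumb waiter” approach (the selectors, URL, and helper function here are placeholders, not my actual code), it can be as simple as intercepting a form submission and posting the same data, to the same URL, with Ajax:

var form = document.querySelector('form.comment-form'); // a hypothetical form that already works without JavaScript

if (form && window.XMLHttpRequest) {
    form.addEventListener('submit', function (event) {
        event.preventDefault();
        var request = new XMLHttpRequest();
        // Send the form data to the same URL the form would post to anyway;
        // the server keeps doing the real work and just returns the result.
        request.open('POST', form.action);
        request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        request.onload = function () {
            if (request.status >= 200 && request.status < 400) {
                document.querySelector('.comments').innerHTML += request.responseText; // hypothetical container
            } else {
                form.submit(); // if anything goes wrong, fall back to the full page refresh
            }
        };
        request.send(serialize(form));
    });
}

// Naive form serialisation, just for the sake of the example.
function serialize(form) {
    var pairs = [];
    for (var i = 0; i < form.elements.length; i++) {
        var field = form.elements[i];
        if (field.name) {
            pairs.push(encodeURIComponent(field.name) + '=' + encodeURIComponent(field.value));
        }
    }
    return pairs.join('&');
}

The form still works with a full page refresh if none of that script ever runs; the Ajax layer is just a smoother way of ferrying the same data to the same server-side logic.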

I also think that building in this way will take longer …at first. But then on the next project, it takes less time. And on the project after that, it takes less time again. From that perspective, it’s similar to switching from tables for layout to using CSS, or switching from building fixed-width sites to responsive design: the initial learning curve is steep, but then it gets easier over time, until it simply becomes normal.

But fundamentally, Stuart is right. Developers don’t like to violate the DRY principle: Don’t Repeat Yourself. Writing code for the server environment, and then writing very similar code for the browser—I mean browsers—is a bad code smell.

Here’s the harsh truth: building websites with progressive enhancement is not convenient.

Building a client-side web thang that requires JavaScript to work is convenient, especially if you’re using a framework like Angular or Ember. In fact, that’s the main selling point of those frameworks: developer convenience.

The trade-off is that to get that level of developer convenience, you have to sacrifice the universal reach that the web provides, and limit your audience to the browsers that can run a pre-determined level of JavaScript. Many developers are quite willing to make that trade-off.

Developer convenience is a very powerful and important force. I wish that progressive enhancement could provide the same level of developer convenience offered by Angular and Ember, but right now, it doesn’t. Instead, its benefits are focused on the end user, often at the expense of the developer.

Personally, I’m willing to take that hit. I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time. But I absolutely understand the mindset of developers who choose otherwise.

But perhaps there’s a way to cut this Gordian knot. What if you didn’t need to write your code twice? What if you could write code for the server and then run the very same code on the client?

This is the promise of isomorphic JavaScript. It’s a terrible name for a great idea.

For me, this is the most exciting aspect of Node.js:

With Node.js, a fast, stable server-side JavaScript runtime, we can now make this dream a reality. By creating the appropriate abstractions, we can write our application logic such that it runs on both the server and the client — the definition of isomorphic JavaScript.

Some big players are looking into this idea. It’s the thinking behind AirBnB’s Rendr.
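The underlying trick is to write modules that don’t care where they run. Here’s a minimal sketch (the module name and the validation rule are purely illustrative) of a piece of logic that could be required on the server in Node.js and also shipped to browsers:

// validate-comment.js: a hypothetical module shared by server and client
(function (exports) {
    // The same rule is exposed as module.exports in Node.js and as window.validateComment in browsers.
    exports.validateComment = function (text) {
        return typeof text === 'string' && text.trim().length > 0 && text.length <= 500;
    };
}(typeof module !== 'undefined' && module.exports ? module.exports : window));

On the server, that function guards the canonical validation before anything gets saved; in the browser, the very same function gives instant feedback. One piece of logic, written once.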

Interestingly, the reason why many large sites are investigating this approach isn’t about universal access—quite often they have separate siloed sites for different device classes. Instead it’s about performance. The problem with having all of your functionality wrapped up in JavaScript on the client is that, until all of that JavaScript has loaded, the user gets absolutely nothing. Compare that to rendering an HTML document sent from the server, and the perceived performance difference is very noticeable.

Here’s the ideal situation:

  1. A browser requests a URL.
  2. The server sends HTML, which renders quickly, along with some mustard-cutting JavaScript (sketched below).
  3. If the browser doesn’t cut the mustard, or JavaScript fails, fall back to full page refreshes.
  4. If the browser does cut the mustard, keep all the interaction in the client, just like a single page app.
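
Step 2 is the BBC-style “cutting the mustard” test; a minimal sketch (the exact feature tests and the script URL are placeholders, not a recommendation) looks something like this:

// Only load the client-side app in browsers that cut the mustard.
if ('querySelector' in document && 'addEventListener' in window && 'localStorage' in window) {
    var script = document.createElement('script');
    script.src = '/js/enhanced.js'; // hypothetical bundle containing the single-page-app code
    script.async = true;
    document.getElementsByTagName('head')[0].appendChild(script);
}
// Every other browser simply carries on using the server-rendered pages and full page refreshes.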

With Node.js on the server, and JavaScript in the client, steps 3 and 4 could theoretically use the same code.

So why aren’t we seeing more of these holy-grail apps that achieve progressive enhancement without code duplication?

Well, partly it’s back to that question of controlling the server environment.

This is something that Nicholas Zakas tackled a year ago when he wrote about Node.js and the new web front-end. He proposes a third layer that sits between the business logic and the rendered output. By applying the idea of isomorphic JavaScript, this interface layer could be run on the server (as Node.js) or on the client (as JavaScript), while still allowing you to have the rest of your server environment running whatever programming language works for you.

It’s still early days for this kind of thinking, and there are lots of stumbling blocks—trying to write JavaScript that can be executed on both the server and the client isn’t so easy. But I’m pretty excited about where this could lead. I love the idea of building in a way that provides the performance and universal access of progressive enhancement, while also providing the developer convenience of JavaScript frameworks.

In the meantime, building with progressive enhancement may have to involve a certain level of inconvenience and duplication of effort. It’s a price I’m willing to pay, but I wish I didn’t have to. And I totally understand that others aren’t willing to pay that price.

But while the mood might currently seem to be in favour of using monolithic JavaScript frameworks to build client-side apps that rely on JavaScript in browsers, I think that the tide might change if we started to see poster children for progressive enhancement.

Three years ago, when I was trying to convince clients and fellow developers that responsive design was the way to go, it was a hard sell. It reminded me of trying to sell the benefits of using web standards instead of using tables for layout. Then, just as Doug’s redesign of Wired and Mike’s redesign of ESPN helped sell the idea of CSS for layout, the Filament Group’s work on the Boston Globe made it a lot easier to sell the idea of responsive design. Then Paravel designed a responsive Microsoft homepage and the floodgates opened.

Now …who wants to do the same thing for progressive enhancement?

Wednesday, October 22nd, 2014

A question of markup

Hi,

I’m really sorry it’s taken me so long to write back to you (over a month!)—I’m really crap at email.

I’m writing to you hoping you can help me make my colleagues take html5 “seriously”. They have read your book, they know it’s the “right” thing to do, but still they write !doctype HTML and then div, div, div, div, div…

Now, if you could provide me with some answers to their “why bother?”-questions would be really appreciated.

I have to be honest, I don’t think it’s worth spending lots of time agonising over what’s the right element to use for marking up a particular piece of content.

That said, I also think it’s lazy to just use divs and spans for everything, if a more appropriate element is available.

Paragraphs, lists, figures …these are all pretty straightforward and require almost no thought.

Deciding whether something is a section or an article, though …that’s another story. It’s not so clear. And I’m not sure it’s worth the effort. Frankly, a div might be just fine in most cases.

For example, can one assume that in the future we will be pulling content directly from websites and therefore it would be smart to tell this technology which content is the article, what are the navigation and so on?

There are some third-party tools (like Readability) that pay attention to the semantics of the elements you use, but the most important use-case is assistive technology. For tools such as screen readers, there’s a massive benefit to marking up headings, lists, and other straightforward elements, as well as some of the newer additions like nav and main.

But for many situations, a div is just fine. If you’re just grouping some stuff together that doesn’t have a thematic relation (for instance, you might be grouping them together to apply a particular style), then div works perfectly well. And if you’re marking up a piece of inline text and you’re not emphasising it, or otherwise differentiating it semantically, then a span is the right element to use.

So for most situations, I don’t think it’s worth overthinking the choice of HTML elements. A moment or two should be enough to decide which element is right. Any longer than that, and you might as well just use a div or span, and move on to other decisions.

But there’s one area where I think it’s worth spending a bit longer to decide on the right element, and that’s with forms.

When you’re marking up forms, it’s really worth making sure that you’re using the right element. Never use a span or a div if you’re just going to add style and behaviour to make it look and act like a button: use an actual button instead (not only is it the correct element to use, it’s going to save you a lot of work).

Likewise, if a piece of text is labelling a form control, don’t just use a span; use the label element. Again, this is not only the most meaningful element, but it will provide plenty of practical benefit, not only to screen readers, but to all browsers.

So when it comes to forms, it’s worth sweating the details of the markup. I think it’s also worth making sure that the major chunks of your pages are correctly marked up: navigation, headings. But beyond that, don’t spend too much brain energy deciding questions like “Is this a definition list? Or a regular list?” or perhaps “Is this an aside? Or is it a footer?” Choose something that works well enough (even if that’s a div) and move on.

But if your entire document is nothing but divs and spans then you’re probably going to end up making more work for yourself when it comes to the CSS and JavaScript that you apply.

There’s a bit of a contradiction to what I’m saying here.

On the one hand, I’m saying you should usually choose the most appropriate element available because it will save you work. In other words, it’s the lazy option. Be lazy!

On the other hand, I’m saying that it’s worth taking a little time to choose the most appropriate element instead of always using a div or a span. Don’t be lazy!

I guess what I’m saying is: find a good balance. Don’t agonise over choosing appropriate HTML elements, but don’t just use divs and spans either.

Hope that helps.

Hmmm… you know, I think I might publish this response on my blog.

Cheers,

Jeremy

Tuesday, October 21st, 2014

Indie web building blocks

I was back in Nürnberg last week for the second border:none. Joschi tried an interesting format for this year’s event. The first day was a small conference-like gathering with an interesting mix of speakers, but the second day was much more collaborative, with people working together in “creator units”—part workshop, part round-table discussion.

I teamed up with Aaron to lead the session on all things indie web. It turned out to be a lot of fun. Throughout the day, we introduced the little building blocks, one by one. By the end of the day, it was amazing to see how much progress people made by taking this layered approach of small pieces, loosely stacked.

relme

The first step is: do you have a domain name?

Okay, next step: are you linking from that domain to other profiles of you on the web? Twitter, Instagram, Github, Dribbble, whatever. If so, here’s the first bit of hands-on work: add rel="me" to those links.

<a rel="me" href="https://twitter.com/adactio">Twitter</a>
<a rel="me" href="https://github.com/adactio">Github</a>
<a rel="me" href="https://www.flickr.com/people/adactio">Flickr</a>

If you don’t have any profiles on other sites, you can still mark up your telephone number or email address with rel="me". You might want to do this in a link element in the head of your HTML.

<link rel="me" href="mailto:jeremy@adactio.com" />
<link rel="me" href="sms:+447792069292" />

IndieAuth

As soon as you’ve done that, you can make use of IndieAuth. This is a technique that demonstrates a recurring theme in indie web building blocks: take advantage of the strengths of existing third-party sites. In this case, IndieAuth piggybacks on top of the fact that many third-party sites have some kind of authentication mechanism, usually through OAuth. The fact that you’re “claiming” a profile on a third-party site using rel="me"—and the third-party profile in turn links back to your site—means that we can use all the smart work that went into their authentication flow.

You can see IndieAuth in action by logging into the Indie Web Camp wiki. It’s pretty nifty.

If you’ve used rel="me" to link to a profile on something like Twitter, Github, or Flickr, you can authenticate with their OAuth flow. If you’ve used rel="me" for your email address or phone number, you can authenticate by email or SMS.

h-entry

Next question: are you publishing stuff on your site? If so, mark it up using h-entry. This involves adding a few classes to your existing markup.

<article class="h-entry">
  <div class="e-content">
    <p>Having fun with @aaronpk, helping @border_none attendees mark up their sites with rel="me" links, h-entry classes, and webmention endpoints.</p>
  </div>
  <time class="dt-published" datetime="2014-10-18 08:42:37">8:42am</time>
</article>

Now, the reason for doing this isn’t for some theoretical benefit from search engines, or browsers, but simply to make the content you’re publishing machine-parsable (which will come in handy in the next steps).

Aaron published a note on his website, inviting everyone to leave a comment. The trick, though, is that to leave a comment on Aaron’s site, you need to publish it on your own site.

Webmention

Here’s my response to Aaron’s post. As well as being published on my own site, it also shows up on Aaron’s. That’s because I sent a webmention to Aaron.

Webmention is basically a reimplementation of pingback, but without any of the XML silliness; it’s just a POST request with two values—the URL of the origin post, and the URL of the response.
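
That really is all there is to it. Here’s a minimal sketch of sending one from a Node.js script (the URLs are placeholders, and a real implementation would first discover the endpoint from the target page):

var https = require('https');
var querystring = require('querystring');
var url = require('url');

// The two values that make up a webmention: my response, and the post it's responding to.
var body = querystring.stringify({
    source: 'https://example.com/my-reply',
    target: 'https://example.com/original-post'
});

// POST them, form-encoded, to the target site's webmention endpoint.
var endpoint = url.parse('https://example.com/webmention');
var request = https.request({
    hostname: endpoint.hostname,
    path: endpoint.path,
    method: 'POST',
    headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': Buffer.byteLength(body)
    }
}, function (response) {
    console.log('Webmention endpoint answered with status ' + response.statusCode);
});

request.write(body);
request.end();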

My site doesn’t automatically send webmentions to any links I reference in my posts—I should really fix that—but that’s okay; Aaron—like me—has a form under each of his posts where you can paste in the URL of your response.

This is where those h-entry classes come in. If your post is marked up with h-entry, then it can be parsed to figure out which bit of your post is the body, which bit is the author, and so on. If your response isn’t marked up as h-entry, Aaron just displays a link back to your post. But if it is marked up in h-entry, Aaron can show the whole post on his site.

Okay. By this point, we’ve already come really far, and all people had to do was edit their HTML to add some rel attributes and class values.

For true site-to-site communication, you’ll need to have a webmention endpoint. That’s a bit trickier to add to your own site; it requires some programming. Here’s my minimum viable webmention that I wrote in PHP. But there are plenty of existing implementations you can use, like this webmention plug-in for WordPress.
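
If you want a feel for what an endpoint actually has to do, here’s a stripped-down sketch in Node.js (mine is in PHP, and a real endpoint should also fetch the source URL and verify that it really does link to the target):

var http = require('http');
var querystring = require('querystring');

http.createServer(function (request, response) {
    if (request.method !== 'POST') {
        response.writeHead(405);
        return response.end();
    }
    var body = '';
    request.on('data', function (chunk) { body += chunk; });
    request.on('end', function () {
        var mention = querystring.parse(body);
        if (!mention.source || !mention.target) {
            response.writeHead(400);
            return response.end('Missing source or target');
        }
        // A real endpoint would now fetch mention.source, check that it links to
        // mention.target, parse its h-entry, and store the response for display.
        console.log('Webmention received from ' + mention.source + ' for ' + mention.target);
        response.writeHead(202);
        response.end('Accepted');
    });
}).listen(8080);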

Or you could request an account on webmention.io, which is basically webmention-as-a-service. Handy!

Once you have a webmention endpoint, you can point to it from the head of your HTML using a link element:

<link rel="mention" href="https://adactio.com/webmention" />

Now you can receive responses to your posts.

Here’s the really cool bit: if you sign up for Bridgy, you can start receiving responses from third-party sites like Twitter, Facebook, etc. Bridgy just needs to know who you are on those networks, looks at your website, and figures everything out from there. And it automatically turns the responses from those networks into h-entry. It feels like magic!

Here are responses from Twitter to my posts, as captured by Bridgy.

POSSE

That was mostly what Aaron and I covered in our one-day introduction to the indie web. I think that’s pretty good going.

The next step would be implementing the idea of POSSE: Publish on your Own Site, Syndicate Elsewhere.

You could do this using something as simple as If This, Then That: every time something crops up in your RSS feed, post it to Twitter, or Facebook, or both. If you don’t have an RSS feed, don’t worry: because you’re already marking your HTML up in h-entry, it can be converted to RSS easily.

I’m doing my own POSSEing to Twitter, which I’ve written about already. Since then, I’ve also started publishing photos here, which I sometimes POSSE to Twitter, and always POSSE to Flickr. Here’s my code for posting to Flickr.

I’d really like to POSSE my photos to Instagram, but that’s impossible. Instagram is a data roach-motel. The API provides no method for posting photos. The only way to post a picture to Instagram is with the Instagram app.

My only option is to do the opposite of POSSEing, which is PESOS: Publish Elsewhere, and Syndicate to your Own Site. To do that, I need to have an endpoint on my own site that can receive posts.

Micropub

Working side by side with Aaron at border:none inspired me to finally implement one more indie web building block I needed: micropub.

Having a micropub endpoint here on my own site means that I can publish from third-party sites …or even from native apps. The reason why I didn’t have one already was that I thought it would be really complicated to implement. But it turns out that, once again, the trick is to let other services do all the hard work.

First of all, I need to have something to manage authentication. Well, I already have that with IndieAuth. I got that for free just by adding rel="me" to my links to other profiles. So now I can declare indieauth.com as my authorization endpoint in the head of my HTML:

<link rel="authorization_endpoint" href="https://indieauth.com/auth" />

Now I need some way of creating and issuing authentication tokens. See what I mean about it sounding like hard work? Creating a token endpoint seems complicated.

But once again, someone else has done the hard work so I don’t have to. Tokens-as-a-service:

<link rel="token_endpoint" href="https://tokens.indieauth.com/token" />

The last piece of the puzzle is to point to my own micropub endpoint:

<link rel="micropub" href="https://adactio.com/micropub" />

That URL is where I will receive posts from third-party sites and apps (sent through a POST request with an access token in the header). It’s up to me to verify that the post is authenticated properly with a valid access token. Here’s the PHP code I’m using.

It wasn’t nearly as complicated as I thought it would be. By the time a post and a token hits the micropub endpoint, most of the hard work has already been done (authenticating, issuing a token, etc.). But there are still a few steps that I have to do:

  1. Make a GET request (I’m using cURL) back to the token endpoint I specified—sending the access token I’ve been sent in a header—verifying the token.
  2. Check that the “me” value that I get back corresponds to my identity, which is https://adactio.com
  3. Take the h-entry values that have been sent as POST variables and create a new post on my site.
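
Here’s roughly what those steps look like, sketched in Node.js rather than the PHP I’m actually using (the token endpoint check, the field names, and the storage step are all simplified):

var http = require('http');
var https = require('https');
var querystring = require('querystring');

http.createServer(function (request, response) {
    var token = (request.headers.authorization || '').replace('Bearer ', '');
    var body = '';
    request.on('data', function (chunk) { body += chunk; });
    request.on('end', function () {
        // Step 1: verify the access token with the token endpoint declared in the head of my HTML.
        https.get({
            hostname: 'tokens.indieauth.com',
            path: '/token',
            headers: { 'Authorization': 'Bearer ' + token }
        }, function (tokenResponse) {
            var tokenBody = '';
            tokenResponse.on('data', function (chunk) { tokenBody += chunk; });
            tokenResponse.on('end', function () {
                var details = querystring.parse(tokenBody);
                // Step 2: check that the "me" value corresponds to my identity (simplified comparison).
                if ((details.me || '').indexOf('https://adactio.com') !== 0) {
                    response.writeHead(403);
                    return response.end('Not authorised');
                }
                // Step 3: take the h-entry values sent as POST variables and create a new post.
                var post = querystring.parse(body);
                console.log('New post content: ' + post.content);
                // A real endpoint would save the post and return its URL in a Location header.
                response.writeHead(201);
                response.end();
            });
        });
    });
}).listen(8080);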

I tested my micropub endpoint using Quill, a nice little posting interface that Aaron built. It comes with great documentation, including a guide to creating a micropub endpoint.

It worked.

Here’s another example: Ben Roberts has a posting interface that publishes to micropub, which means I can authenticate myself and post to my site from his interface.

Finally, there’s OwnYourGram, a service that monitors your Instagram account and posts to your micropub endpoint whenever there’s a new photo.

That worked too. And I can also hook up Bridgy to my Instagram account so that any activity on my Instagram photos also gets sent to my webmention endpoint.

Indie Web Camp

Each one of these building blocks you implement unlocks more and more powerful tools: rel="me" gives you IndieAuth; h-entry makes your posts machine-parsable; a webmention endpoint lets other sites respond to your posts (and Bridgy can feed in responses from Twitter and Facebook); a micropub endpoint lets you publish to your own site from third-party interfaces and apps.

But it’s worth remembering that these are just implementation details. What really matters is that you’re publishing your stuff on your website. If you want to use different formats and protocols to do that, that’s absolutely fine. The whole point is that this is the independent web—you can do whatever you please on your own website.

Still, if you decide to start using these tools and technologies, you’ll get the benefit of all the other people who are working on this stuff. If you have the chance to attend an Indie Web Camp, you should definitely take it: I’m always amazed by how much is accomplished in one weekend.

Some people have started referring to the indie web movement. I understand where they’re coming from; it certainly looks like a “movement” from the outside, and if you attend an Indie Web Camp, there’s a great spirit of sharing. But my underlying motivations are entirely selfish. In the same way that I don’t really care about particular formats or protocols, I don’t really care about being part of any kind of “movement.” I care about my website.

As it happens, my selfish motivations align perfectly with the principles of an indie web.

Wednesday, October 15th, 2014

Stupid brain

I went to the States to speak at the Artifact conference in Providence (which was great). I extended the trip so that I could make it to Science Hack Day in San Francisco (which was also great). Then I made my way back with a stopover in New York for the fifth and final Brooklyn Beta (which was, you guessed it, great).

The last day of Brooklyn Beta was a big friendly affair with close to a thousand people descending on a hangar-like building in Brooklyn’s naval yard. But it was the preceding two days in the much cosier environs of The Invisible Dog that really captured the spirit of the event.

The talks were great—John Maeda! David Lowery!—but the real reason for going all the way to Brooklyn for this event was to hang out with the people. Old friends, new friends; just nice people all ‘round.

But it felt strange this year, and not just because it was the last time.

At the end of the second day, people were encouraged to spontaneously get up on stage, introduce themselves, and then introduce someone that they think is a great person, working on something interesting (that twist was Sam’s idea).

I didn’t get up on stage. The person I would’ve introduced wasn’t there. I wish she had been. Mind you, she would’ve absolutely hated being called out like that.

Chloe wasn’t there. Of course she wasn’t there. How could she be there?

But there was this stupid, stupid part of my brain that kept expecting to see her walk into the room. That stupid, stupid part of my brain that still expected that I’d spend Brooklyn Beta sitting next to Chloe because, after all, we always ended up sitting together.

(I think it must be the same stupid part of my brain that still expects to see her name pop up in my chat client every morning.)

By the time the third day rolled around in the bigger venue, I thought it wouldn’t be so bad, what with it not being in the same location. But that stupid, stupid part of my brain just wouldn’t give up. Every time I looked around the room and caught a glimpse of someone in the distance who had the same length hair as Chloe, or dressed like her, or just had a bag slung over hip just so …that stupid, stupid part of my brain would trigger a jolt of recognition, and then I’d have that horrible sinking feeling (literally, like something inside of me was sinking down) when the rational part of my brain corrected the stupid, stupid part.

I think that deep down, there’s a part of me—a stupid, stupid part of me—that still doesn’t quite believe that she’s gone.

Tuesday, October 14th, 2014

Celebrating CSS

Cascading Style Sheets turned 20 years old this week. Happy birthtime, CeeSusS!

Bruce interviewed Håkon about the creation of CSS, and it makes for fascinating reading. If you want to dig even deeper, here’s Håkon’s 1994 thesis comparing competing approaches to style sheets.

CSS gets a tough rap. I remember talking to Douglas Crockford about CSS. I’ll paraphrase his stance as “Kill it with fire!” To be fair, he was mostly talking about the lack of a decent layout system in CSS—something that’s only really getting remedied now.

Most of the flak directed at CSS comes from smart programmers, decrying its lack of power. As a declarative language, it lacks the most basic features of even the simplest procedural language. How are serious programmers supposed to write their serious programmes with such a primitive feature set?

But I think this mindset misses out a crucial facet of understanding CSS: it’s not about us. By us, I mean professional web developers. And when I say it’s not about us, I mean it’s not only about us.

The web is for everyone. That doesn’t just mean that it’s for everyone to use—the web is for everyone to create. That means that the core building blocks of the web need to be learnable by everyone, not just programmers.

I get nervous when I see web browsers gaining powerful features that can only be accessed via a JavaScript API. Geolocation is one example: it doesn’t have any declarative equivalent to its JavaScript implementation. Counter-examples would be video and audio: you can use the JavaScript API to get exactly the behaviour you want, if you’ve got that level of knowledge …or you can use the video and audio elements if you’re okay with letting web browsers handle the complexity of display and playback.

I think that CSS hits a nice sweet spot, balancing learnability and power. I love the fact that every bit of CSS ever written comes down to the same basic pattern:

selector {
    property: value;
}

That’s it!

How amazing is it that one simple pattern can scale to encompass a whole wide world of visual design variety?

Think about the revolution that CSS has gone through in recent years: OOCSS, SMACSS, BEM …these are fundamentally new ways of approaching front-end development, and yet none of these approaches required any changes to be made to the CSS specification. The power and flexibility was already available within its simple selector-property-value pattern.

Mind you, that modularity was compromised when we got things like named animations; a pattern that breaks out of the encapsulation model of CSS. Variables in CSS also break out of the modularity pattern.

Personally, I don’t think there’s any reason to have variables in the CSS language; it’s enough to have them in pre-processing tools. Variables add enormous value for developers, and no value at all for end users. As long as developers can use variables—and they can, with Sass and LESS—I don’t think we need to further complicate CSS.

Bert Bos wrote an exhaustive list of design principles for web standards. There’s some crossover with Tim Berners-Lee’s principles of design, with ideas such as modularity and robustness. Personally, I think that Bert and Håkon did a pretty damn good job of balancing principles like learnability, extensibility, longevity, interoperability and a host of other factors while still producing something powerful enough to scale for the whole web.

There’s one important phrase I want to highlight in the abstract of the 20 year old CSS proposal:

The proposed scheme provides a simple mapping between HTML elements and presentation hints.

Hints.

Every line of CSS you write is a suggestion. You are not dictating how the HTML should be rendered; you are suggesting how the HTML should be rendered. I find that to be a very liberating and empowering idea.

My only regret is that—twenty years on from the birth of CSS—web browsers are killing the very idea of user stylesheets. Along with “view source”, this feature really drove home the idea that professional web developers are not the only ones who have a say in what gets rendered in web browsers …and that the web truly is for everyone.

Sunday, October 12th, 2014

Habitasteroids

Science Hack Day San Francisco was held in the Github offices last weekend. It was brilliant!

This was the fifth Science Hack Day in San Francisco and the 40th worldwide. That’s truly incredible. I mean, I literally can’t believe it. When I organised the very first Science Hack Day back in 2010, I had no idea how far it would go. But Ariel has been indefatigable in making it a truly global event. She is amazing. And at this year’s San Francisco event, she outdid herself in putting together a fantastic cross-section of scientists, designers, and developers: paleontology, marine biology, geology, astronomy, particle physics, and many, many more disciplines were represented in the truly diverse attendees.

After an inspiring set of lightning talks on the first day, ideas started getting bounced around and the hacking began to take shape. I had a vague idea for—yet another—space-related hack. What clinched it was picking the brains of NASA’s Keri Bean. She’d help me get hold of the dataset I needed for my silly little hack.

So here’s the background…

There are many possibilities for human habitats in space: Stanford tori, O’Neill cylinders, Bernal spheres. Another idea, explored in science fiction, is hollowing out asteroids (Larry Niven’s bubbleworlds). Kim Stanley Robinson explores this idea in depth in his book 2312, where he describes the process of building an asteroid terrarium. The website of the book has a delightful walkthrough of the engineering processes involved. It’s not entirely implausible.

I wanted to make that idea approachable, so I thought about the kinds of people we might want to have living with us on the interior shell of a rotating hollowed-out asteroid. How about the people you follow on Twitter?

The only question that remains then is: which asteroid is the right one for you and your Twitter friends? Keri tracked down the motherlode of asteroid data and I started hacking the simplest of mashups—Twitter meets space rocks.

Here’s the result…

Habitasteroids!

Give it your Twitter username and it will tell you exactly which one of the asteroids in the main belt is right for you (I considered adding an enterprise option that would tell you where you could store your social network in the cloud …the Oort cloud, that is).

By default, your asteroid will have the population density of Earth, which is quite generous. But if you want a more sparsely-populated habitat—say, the population density of Australia—or a more densely-populated world—with something like the population density of Japan—then you will be assigned a larger or smaller asteroid accordingly.

You’ll also be told by how much you should increase or decrease the rotation of the asteroid to get one gee of centrifugal force on the interior. Figuring out the equations for calculating centrifugal force almost broke me, but luckily I had help from a rocket scientist and a particle physicist …I’m not even kidding. And I should point out that the calculations take some liberties—I’m assuming a spherical body, which is quite a stretch, given the lumpy nature of most asteroids.
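
For what it’s worth, the spherical simplification boils down to very little code. Here’s a sketch (in JavaScript, with a made-up interior radius) of working out the spin needed for one gee:

// For a rotating habitat, centripetal acceleration a = ω²r.
// Setting a to one gee and solving for ω gives ω = √(g / r).
var g = 9.81;       // metres per second squared
var radius = 5000;  // metres: a made-up interior radius, just for the example

var omega = Math.sqrt(g / radius);   // angular velocity in radians per second
var period = (2 * Math.PI) / omega;  // seconds per rotation
var rpm = 60 / period;               // rotations per minute

console.log('Spin it once every ' + Math.round(period) + ' seconds (about ' + rpm.toFixed(2) + ' rpm) for one gee inside.');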

At 13:37 on the second day, the demos began. Keri and I were first up.

Give Habitasteroids a whirl for yourself. It’s a silly little thing, but I quite like how it turned out.

Speaking of silly things …at some point in the proceedings, Keri put the call out for asteroid data to her fellow space enthusiasts on Twitter. They responded with asteroid-related puns.

They have nice asteroids though: @brianwolven, @lukedones, @paix120, @LGalache, @motorbikematt, @brx0.

Oh, and while Habitasteroids might be a silly little hack, WRANGLER just might work.

WRANGLER: Capture and De-Spin of Asteroids and Space Debris

Saturday, October 11th, 2014

When Jeremy met Jason

I first met Jason at South By Southwest 2011, having admired his work from afar. Here’s a picture of me and Jason.

When I was giving a talk on digital preservation at Beyond Tellerrand in 2013 referencing Jason’s work, I used that picture.

Then Jason gave a talk at a conference where he referenced me referencing him. Here’s a picture of Jason presenting a picture of me presenting a picture of me and Jason.

At this year’s Decentralize Camp, I spoke about digital preservation again. Here’s a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.

Then Jason gave another talk. Here’s a picture of Jason presenting a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.

I referenced that when I spoke in Riga a few months back. Here’s a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.

Finally, Jason gave a barnstorming talk at the final Brooklyn Beta yesterday. As we were both in the same room at the same time, Jason took the opportunity to draw a line under this back-and-forth with his final volley. Here’s a picture of me and Jason in front of a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.

Presentation Inception.

Thursday, October 2nd, 2014

Polyfills and products

I was chatting about polyfills recently with Bruce and Remy—who coined the term:

A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape if you will.

I mentioned that I think that one of the earliest examples of what we would today call a polyfill was the IE7 script by Dean Edwards.

Dean wrote this (amazing) piece of JavaScript back when Internet Explorer 6 was king of the hill and Microsoft had stopped development of their browser entirely. It was a pretty shitty time in browserland back then. While other browsers were steaming ahead with their standards support, Dean’s script pulled IE6 up by its bootstraps and made it understand CSS2.1 features. Crucially, you didn’t have to write your CSS any differently for the IE7 script to work—the classic hallmark of a polyfill.

Scott has a great post over on the Filament Group blog asking To Picturefill, or not to Picturefill?. Therein, he raises the larger issue of when to use polyfills of any kind. After all, every polyfill you use is a little bit of a tax that the end user must pay with a download.

Polyfills typically come at a cost to users as well, since they require users to download and execute JavaScript in order to work. Sometimes, frequently even, that cost outweighs the benefits that the polyfill would bring. For that reason, the question of whether or not to use any polyfill should be taken seriously.

Scott takes a very thoughtful approach to using any polyfill, and I try to do the same. I feel that it’s important to have an exit strategy for every polyfill you decide to use. After all, the whole point of a polyfill is that it’s a stop-gap measure until a particular feature is more widely supported.
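
One small thing that helps with that exit strategy is only ever loading a polyfill behind a feature test, so that it simply stops being downloaded as native support arrives. A sketch (the feature test and file path are placeholders):

// Only fetch the polyfill in browsers that lack the native feature.
if (!('HTMLPictureElement' in window)) {
    var script = document.createElement('script');
    script.src = '/js/picturefill.js'; // hypothetical local copy of the polyfill
    script.async = true;
    document.getElementsByTagName('head')[0].appendChild(script);
}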

And that’s where I run into one of the issues of working at an agency. At Clearleft, our time working with a client usually lasts a few months. At the end of that time, we’ll have delivered whatever the client needs: sometimes that’s design work; sometimes it’s design and a front-end pattern library.

Every now and then we get to revisit a project—like with Code for America—but that’s the exception rather than the rule. We’ve had to get very, very good at handover precisely because we won’t be the ones maintaining the code that we deliver (though we always try to budget in time to revisit the developers who are working with the code to answer any questions they might have).

That makes it very tricky to include a polyfill in our deliverables. We’d need to figure out a way of also including a timeline for revisiting that polyfill and evaluating when it’s time to drop it. That’s not an impossible task, but it’s much, much easier if you’re a developer working on a product (as opposed to a developer working at an agency). If you’re going to be the same person working on the code in the future—as well as working on it right now—it gets a lot easier to plan for evaluating polyfill usage further down the line. Set a recurring item in your calendar and you should be all set.

It’s a similar situation with vendor prefixes. Vendor prefixes were never intended to be a long-lasting part of any style sheet. Like polyfills, they’re supposed to be used with an exit strategy in mind: when the time is right, remove the prefixed styles, leaving only the unprefixed standardised CSS. Again, that’s a lot easier to do if you’re working on a product and you know that you’ll be the one revisiting the CSS later on. That’s harder to do at an agency where you’re handing over CSS to someone else.

I’m quite reluctant to use any vendor prefixes at all—which is as it should be; vendor prefixes should not be used lightly. Sometimes they’re unavoidable, but that shouldn’t stop us thinking about how to remove them at a later date.

I’m mostly just thinking out loud here. I guess my point is that certain front-end development techniques and technologies feel like they’re better suited to product work rather than agency work. Although I’m sure there are plenty of counter-examples out there too of tools that really fit the agency model and are less useful for working on the same product over a long period.

But even though the agency world and the product world are very different in lots of ways, both of them require us to think about the future. How long will the code you’re writing today last? And do you have a plan for when it needs updating or replacing?

Thursday, September 25th, 2014

On tour

I’ve just returned from a little European tour of Germany, Italy, and Romania, together with Jessica.

More specifically, I was at Smashing Conference in Freiburg, From The Front in Bologna, and SmartWeb in Bucharest. They were all great events, and it was particularly nice to attend events that focussed on their local web community. Oh, and they were all single-track events, which I really appreciate.

Now my brain is full of all the varied things that all the excellent speakers covered. I’ll need some time to digest it all.

I wasn’t just at those events to soak up knowledge; I also gave a talk at From The Front and SmartWeb—banging on about progressive enhancement again. In both cases, I was able to do that first thing and then I could relax and enjoy the rest of the talks.

I didn’t speak at Smashing Conf. Well, I did speak, but I wasn’t speaking …I mean, I was speaking, but I wasn’t speaking …I didn’t give a talk, is what I’m trying to say here.

Instead, I was MCing (and I’ve just realised that “Master of Ceremonies” sounds like a badass job title, so excuse me for a moment while I go and update the Clearleft website again). It sounds like a cushy number but it was actually a fair bit of work.

I’ve never MC’d an event that wasn’t my own before. It wasn’t just a matter of introducing each speaker—there was also a little chat with each speaker after their talk, so I had to make sure I was paying close attention to each and every talk, thinking of potential questions and conversation points. After two days of that, I was a bit knackered. But it was good fun. And I had the pleasure of introducing Dave as the mystery speaker—and it really was a surprise for most people.

It’s always funny to return to Freiburg, the town that Jessica and I called home for about six years back in the nineties. The town where I first started dabbling in this whole “world wide web” thing.

It was also fitting that our Italian sojourn was to Bologna, the city that Jessica and I have visited on many occasions …well, we are both foodies, after all.

But neither of us had ever been to Bucharest, so it was an absolute pleasure to go somewhere new, meet new people, and of course, try new foods and wines.

I’m incredibly lucky that my job allows me to travel like this. I get to go to interesting locations and get paid to geek out about web stuff that I’d be spouting on about anyway. I hope I never come to take that for granted.

My next speaking gig is much closer to home; the Generate conference in London tomorrow. After that, it’s straight off to the States for Artifact in Providence.

I’m going to extend that trip so I can get to Science Hack Day in San Francisco before bouncing back to the east coast for the final Brooklyn Beta. I’m looking forward to all those events, but alas, Jessica won’t be coming with me on this trip, so my enjoyment will be bittersweet—I’ll be missing her the whole time.

Thank goodness for FaceTime.

Saturday, September 13th, 2014

Other days, other voices

I think that Mandy’s talk at this year’s dConstruct might be one of the best talks I’ve ever heard at any conference ever. If you haven’t listened to it yet, you really should.

There are no videos from this year’s dConstruct—you kind of had to be there—but Mandy’s talk works astoundingly well as a purely audio experience. In fact, it’s remarkable how powerful many of this year’s talks are as audio pieces. From Warren’s thoughtful opening words to Cory’s fiery closing salvo, these are talks packed so full of ideas that revisiting them really pays off.

That holds true for previous years as well—James Burke’s talk from two years ago really is a must-listen—but there’s something about this year’s presentations that really comes through in the audio recordings.

Then again, I’m something of a sucker for the spoken word. There’s something about having to use the input from one sensory channel—my ears—to create moving images in my mind that often results in a more powerful experience than audio and video together.

We often talk about the internet as a revolutionary new medium, and it is. But it is revolutionary in the way that it collapses geographic and temporal distance; we can have instant access to almost any information from almost anywhere in the world. That’s great, but it doesn’t introduce anything fundamentally new to our perception of the world. Instead, the internet accelerates what was already possible.

Even that acceleration is itself part of a longer technological evolution that began with the telegraph—something that Brian drove home in his talk when he referred to Tom Standage’s excellent book, The Victorian Internet. It’s probably true to say that the telegraph was a more revolutionary technology than the internet.

To find the last technology that may have fundamentally altered how we perceive the world and our place in it, I propose the humble gramophone.

On the face of it, the ability to play back recorded audio doesn’t sound like a particularly startling or world-changing shift in perspective. But as Sarah pointed out in her talk at last year’s dConstruct, the gramophone allowed people to hear, for the first time, the voices of people who aren’t here …including the voices of the dead.

Today we listen to the voices of the dead all the time. We listen to songs being sung by singers long gone. But can you imagine what it must have been like the first time that human beings heard the voices of people who were no longer alive?

There’s something about the power of the human voice—divorced from the moving image—that still gets to me. It’s like slow glass for the soul.

In the final year of her life, Chloe started publishing audio versions of some of her blog posts. I find myself returning to them again and again. I can look at pictures of Chloe, I can re-read her writing, I can even watch video …but there’s something so powerful about just hearing her voice.

I miss her so much.