Sunday, March 22nd, 2015

Codebar Brighton

There’s been a whole series of events going on in Brighton this month under the banner of Spring Forward:

Spring Forward is a month-long celebration of the role of women in digital culture and runs throughout March in parallel with Women’s History Month.

Luckily for me, a lot of the events have been happening at 68 Middle Street—home of Clearleft—so I’ve been taking full advantage of as many as I can (also, if I go to an event that means that Tessa doesn’t have to stick around every night of the week to lock up afterwards). Charlotte has been going to even more.

I managed to get to Tech In Ten—run by She Codes Brighton—which was great, but I missed out on Pixels and Prosecco by Press Fire To Win which sounded like it was a lot of fun. And there are more events still to come, like She Says and Ladies That UX.

What’s great about Spring Forward events like She Codes, 300 Seconds, She Says, and Ladies That UX is that they aren’t one-offs; they’re happening all year round, along with other great regular Brighton events like Async and UX Brighton.

And then there’s Codebar. I had heard about Codebar before, but Spring Forward was the first chance I had to get stuck in—it was being hosted at 68 Middle Street, so I said I’d stick around to lock up afterwards. I’m so glad I did. It was great!

In a nutshell, Codebar offers a chance for people who are under-represented in the world of programming and technology to get some free training by pairing them with tutors who volunteer their time. I offered to help out anyone who was learning HTML and CSS (after tamping down the inevitable inner voice of imposter syndrome that was asking “who are you to be teaching anyone anything?”).

I really, really enjoyed it. It was so nice to meet people from outside the world of web design and development. It was also a terrific reminder that the act of making websites is something that everybody should be able to participate in. This is for everyone.

Codebar Brighton takes place once a week, changing up the venue on rotation. As you can imagine, it takes a lot of work to maintain that momentum. It’s thanks to the tireless efforts of the seemingly indefatigable Ruby programmers Rosa and Dot that it’s such a great success. I am in their debt.

Sunday, March 15th, 2015

Star wheels

This list has been making the rounds lately. It’s the list of (probably apocryphal) rules underlying the world of Road Runner and Wile E. Coyote. Design principles if you will. Like “The Road Runner cannot harm the Coyote except by going ‘meep, meep’” and “All tools, weapons, or mechanical conveniences must be obtained from the Acme Corporation.”

These are patterns that we are all subconsciously aware of anyway, but there’s something about seeing them enumerated that makes us go “oh, yeah” in recognition.

This reminds me of a silly idea I had when I was younger. It’s about Star Wars (of course). Specifically it’s about a possible rule—or design principle—underlying the kitbashed used-universe design of that galaxy far, far away.

Now I know this is going to sound crazy, but hear me out…

What if the wheel has never been invented in the world of Star Wars?

It’s probably not a deliberate omission, but we never actually see a single wheel in the original trilogy (the prequels, as always, are another matter entirely). Sure, there are wheels implied under the imperial mouse droid or under R2-D2’s legs, but you never actually see them. Even the sandcrawler, which uses tracks, hides its internal workings.

Instead, this is a universe where everything travels via some kind of maglev antigravity even when it seems completely unnecessary—couldn’t you just slap a carbonite Han Solo on a gurney? Whenever a spaceship extends its landing gear we see …skids. Always skids. Never wheels. And what kind of mechanical engineer would actually design something like an AT-AT if it weren’t for a prohibition on wheels?

I know you’re probably thinking “this is so stupid”, but I bet you’re also trying to think of an explicit instance of a wheel in the original trilogy. You may also be feeling a growing urge to watch the films again. And whenever you do end up watching the trilogy again, and you find yourself looking at the undercarriage of every vehicle, you’ll realise that I’ve planted this idea Inception-like in your head.

Anyway, like I said, the prequels put paid to my little theory. I was genuinely disappointed when those droidekas rolled down that corridor. Remember that feeling of “oh, please!” when R2-D2 used his thrusters to fly in Attack Of The Clones? You felt cheated, right? The film was breaking the rules of its own universe. Well, a little part of me felt that way when my silly theory was squashed.

But just go with it here for a minute. Suppose the wheel had never been invented. Would it be possible for a space-faring civilisation to evolve? It’s generally assumed that you’d need to at least invent fire to achieve any kind of mechanical advances, but what about the wheel?

Imagine if George Lucas had actually been playing a design fiction long con. My younger self liked to imagine that lists of instructions were passed around ILM, along the same lines as those Road Runner rules. And one of those instructions would’ve been the cryptic injunction against showing wheels in any vehicle designs. Then imagine what it would have been like if, decades later, Lucas casually dropped the bombshell that the wheel was never invented in this galaxy far, far away. It would’ve blown. Our. Minds.

Ah, but it was just a dream. A crazy, apopheniac dream.

Tuesday, March 10th, 2015

Inlining critical CSS for first-time visits

After listening to Scott rave on about how much of a perceived-performance benefit he got from inlining critical CSS on first load, I thought I’d give it a shot over at The Session. On the chance that this might be useful for others, I figured I’d document what I did.

The idea here is that you can give a massive boost to the perceived performance of the first page load on a site by putting the most important CSS in the head of the page. Then you cache the full stylesheet. For subsequent visits you only ever use the external stylesheet. So if you’re squeamish at the thought of munging your CSS into your HTML (and that’s a perfectly reasonable reaction), don’t worry—this is a temporary workaround just for initial visits.

My particular technology stack here is using Grunt, Apache, and PHP with Twig templates. But I’m sure you can adapt this for other technology stacks: what’s important here isn’t the technology, it’s the thinking behind it. And anyway, the end user never sees any of those technologies: the end user gets HTML, CSS, and JavaScript. As long as that’s what you’re outputting, the specifics of the technology stack really don’t matter.

Generating the critical CSS

Okay. First question: how do you figure out which CSS is critical and which CSS can be deferred?

To help answer that, and automate the task of generating the critical CSS, Filament Group have made a Grunt task called grunt-criticalcss. I added that to my project and updated my Gruntfile accordingly:

grunt.initConfig({
    // All my existing Grunt configuration goes here.
    criticalcss: {
        dist: {
            options: {
                url: 'http://thesession.dev',
                width: 1024,
                height: 800,
                filename: '/path/to/main.css',
                outputfile: '/path/to/critical.css'
            }
        }
    }
});

I’m giving it the name of my locally-hosted version of the site and some parameters to judge which CSS to prioritise. Those parameters are viewport width and height. Now, that’s not a perfect way of judging which CSS matters most, but it’ll do.

Then I add it to the list of Grunt tasks:

// All my existing Grunt tasks go here.
grunt.loadNpmTasks('grunt-criticalcss');

grunt.registerTask('default', ['sass', /* …all my other tasks…, */ 'criticalcss']);

The end result is that I’ve got two CSS files: the full stylesheet (called something like main.css) and a stylesheet that only contains the critical styles (called critical.css).

Cache-busting CSS

Okay, this is a bit of a tangent but trust me, it’s going to be relevant…

Most of the time it’s a very good thing that browsers cache external CSS files. But if you’ve made a change to that CSS file, then that feature becomes a bug: you need some way of telling the browser that the CSS file has been updated. The simplest way to do this is to change the name of the file so that the browser sees it as a whole new asset to be cached.

You could use query strings to do this cache-busting but that has some issues. I use a little bit of Apache rewriting to get a similar effect. I point browsers to CSS files like this:

<link rel="stylesheet" href="/css/main.20150310.css">

Now, there isn’t actually a file named main.20150310.css, it’s just called main.css. To tell the server where the actual file is, I use this rewrite rule:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)\.(\d+)\.(js|css)$ $1.$3 [L]

That tells the server to ignore those numbers in JavaScript and CSS file names, but the browser will still interpret it as a new file whenever I update that number. You can do that in a .htaccess file or directly in the Apache configuration.

Right. With that little detour out of the way, let’s get back to the issue of inlining critical CSS.

Differentiating repeat visits

That number that I’m putting into the filenames of my CSS is something I update in my Twig template, like this (although this is really something that a Grunt task could do, I guess):

{% set cssupdate = '20150310' %}

Then I can use it like this:

<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

I can also use JavaScript to store that number in a cookie called csscached so I’ll know if the user has a cached version of this revision of the stylesheet:

<script>
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

The absence or presence of that cookie is going to be what determines whether the user gets inlined critical CSS (a first-time visitor, or a visitor with an out-of-date cached stylesheet) or whether the user gets a good ol’ fashioned external stylesheet (a repeat visitor with an up-to-date version of the stylesheet in their cache).

Here are the steps I’m going through:

First of all, set the Twig cssupdate variable to the last revision of the CSS:

{% set cssupdate = '20150310' %}

Next, check to see if there’s a cookie called csscached that matches the value of the latest revision. If there is, great! This is a repeat visitor with an up-to-date cache. Give ‘em the external stylesheet:

{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">

If not, then dump the critical CSS straight into the head of the document:

{% else %}
<style>
{% include '/css/critical.css' %}
</style>

Now I still want to load the full stylesheet but I don’t want it to be a blocking request. I can do this using JavaScript. Once again it’s Filament Group to the rescue with their loadCSS script:

 <script>
    // include loadCSS here...
    loadCSS('/css/main.{{ cssupdate }}.css');

While I’m at it, I store the value of cssupdate in the csscached cookie:

    document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>

Finally, consider the possibility that JavaScript isn’t available and link to the full CSS file inside a noscript element:

<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

And we’re done. Phew!

Here’s how it looks all together in my Twig template:

{% set cssupdate = '20150310' %}
{% if _cookie.csscached == cssupdate %}
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
{% else %}
<style>
{% include '/css/critical.css' %}
</style>
<script>
// include loadCSS here...
loadCSS('/css/main.{{ cssupdate }}.css');
document.cookie = 'csscached={{ cssupdate }};expires="Tue, 19 Jan 2038 03:14:07 GMT";path=/';
</script>
<noscript>
<link rel="stylesheet" href="/css/main.{{ cssupdate }}.css">
</noscript>
{% endif %}

You can see the production code from The Session in this gist. I’ve tweaked the loadCSS script slightly to match my preferred JavaScript style but otherwise, it’s doing exactly what I’ve outlined here.
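If you’re curious what loadCSS is actually doing under the hood, the core trick is surprisingly small. Here’s a minimal sketch of the technique (not Filament Group’s actual code): inject a link element with a media type that doesn’t match anything, so the request doesn’t block rendering, then switch it on once it’s in the document.

function loadCSS(href) {
    // Create a stylesheet link and append it to the head.
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = href;
    // A non-matching media type means the request won't
    // block rendering while the file downloads...
    link.media = 'only x';
    document.getElementsByTagName('head')[0].appendChild(link);
    // ...then flip it to 'all' so the styles actually apply.
    setTimeout(function () {
        link.media = 'all';
    });
    return link;
}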

The result

According to Google’s PageSpeed Insights, I done good.

Optimising https://thesession.org/

Monday, March 9th, 2015

Huffduffing video

You know what would be awesome? If you could huffduff the audio from videos on YouTube, Vimeo, and other video hosting sites.

To give you an example, A List Apart recently started running online events and once the events are over, they pop ‘em onto YouTube. Now, I’m not saying I don’t want to look at those lovely faces for an hour, but if truth be told, it’s the audio that I’m really interested in.

In the past, my only recourse would’ve been to pester the good people at A List Apart to export audio as well as video, in much the same way as I’ve pestered conference organisers in the past:

I wish conference organisers would export the audio of any talks that they’re publishing as video. Creating the sound file at that point is a simple one-click step. But once the videos are up online—be it on YouTube or Vimeo—it’s a lot, lot harder to get just the audio.

Not everyone wants to watch video. In fact, I bet there are plenty of people who listen to conference talks by opening the video in a separate tab so they can listen to it while they do something else.

Some people have come up with clever workarounds to get the audio track from videos into Huffduffer but they’re fairly convoluted.

Until now!

The brilliant Ryan Barrett has just launched huffduff-video:

He has created a bookmarklet you can use whenever you’re on a YouTube or Vimeo page that you want to huffduff. It works a treat—I’ve already used it to huffduff that A List Apart event and a talk by Matt Jones.

It takes a little while to do the initial conversion but you can just leave the pop-up window open while it works its magic. I’ve incorporated it into the Huffduffer bookmarklet now too. So if you’re on a YouTube or Vimeo page, you can hit the usual bookmarklet and it’ll route you through Ryan’s clever creation.
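If you’ve never peeked inside a bookmarklet, by the way, there’s no magic: it’s just a snippet of JavaScript stuffed into a link’s href. Something along these lines (a generic sketch with a made-up endpoint, not Ryan’s actual code):

javascript:(function () {
    // Hand the current page's URL to a conversion service
    // (hypothetical URL; not the real huffduff-video endpoint).
    window.open('https://example.com/huffduff?url=' +
        encodeURIComponent(location.href));
})();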

That makes two fantastic pieces of software from Ryan that have improved my online life immeasurably: first Bridgy and now huffduff-video. So nifty!

Monday, March 2nd, 2015

Responsive Day Out tickets tomorrow

Tickets for the third and final Responsive Day Out go on sale at 11am tomorrow, Tuesday, March 3rd. Here’s the direct link to the ticket page.

I recommend getting in there pretty sharpish. Tickets are less than a hundred quid, which is a steal considering the amazing line-up of speakers who will be bursting your brain with their knowledge of design, process, CSS, JavaScript, user experience, performance, accessibility, and everything else associated with responsive web design (which, let’s face it, is pretty much everything).

Oh, and that line-up just got even better. The one and only Jason Grigsby will be speaking! If you’ve seen Jason speak before, then you know how fantastic his talks are. If you haven’t seen Jason speak before, you’re in for a real treat. I’m guessing he’ll be dropping knowledge bombs on responsive images. He’s the Jedi master when it comes to that stuff. He’s got a real knack for taking a complex subject and making it understandable …something that could be said of all the other fantastic speakers too.

So set your calendar alarm now. Get your ticket tomorrow morning. And I’ll see you here in Brighton on Friday, June 19th for Responsive Day Out 3: The Final Breakpoint!

Tuesday, February 17th, 2015

Cerf rocks

After I wrote about digital preservation and the need to save everything, not just the so-called “important” stuff, Jason wrote a lovely piece with his own thoughts on the matter:

In order to write a history, you need evidence of what happened. When we talk about preserving the stuff we make on the web, it isn’t because we think a Facebook status update, or those GeoCities sites have such significance now. It’s because we can’t know.

In a timely coincidence, Vint Cerf also spoke about the importance of digital preservation:

When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history.

He warns of the dangers of rapidly-obsoleting file formats:

We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised.

It was a little weird that the Guardian headline refers to Vint Cerf as “Google boss”. On the BBC he’s labelled as “Google’s Vint Cerf”. Considering he’s one of the creators of the internet itself, it’s a bit like referring to Neil Armstrong as a NASA employee.

I have to say, I just love listening to him talk. He’s so smooth. I’m sure that the character of The Architect from The Matrix Reloaded is modelled on him.

Vint Cerf knows a thing or two about long-term thinking when it comes to data formats. He has written many RFCs for the IETF (my favourite being RFC 2468). Back in 1969, he wrote RFC 20, proposing the ASCII format for network interchange. If you’ve ever used the keypress event in JavaScript and wondered why, for example, the number 13 corresponds to a carriage return, this is where all those numbers come from.
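If you want to see those code points in action, here’s a quick illustration (nothing fancy, just the keyCode property that keypress events expose):

// Carriage return is character 13, exactly as laid out in RFC 20.
document.addEventListener('keypress', function (event) {
    if (event.keyCode === 13) {
        console.log('That was a carriage return: ASCII 13.');
    }
});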

Last month, over 45 years after the RFC’s original publication, it became an official standard.

So when Vint Cerf warns about the dangers of digitising into file formats that could become unreadable, I think we should pay attention to him.

Monday, February 16th, 2015

Tickets for the last Responsive Day Out

When he was writing up the Clearleft weeknotes for last week, Jon described my activity thusly:

Jeremy—besides working alongside myself and Charlotte this week—has been scheming on Responsive Day Out, and he seems quite pleased with himself. Pretty sure I heard a sinister ‘my plans are coming together almost too well’-type laugh today.

Well, my dastardly schemes are working out perfectly. I’m ridiculously pleased to announce that Rosie Campbell and Aaron Gustafson have been added to the line-up for Responsive Day Out 3: The Final Breakpoint.

That means that as well as Rosie and Aaron, you’ll also hear from Zoe, Jake, Alice, Peter, Rachel, Ruth, Heydon, and Alla …and that’s not even the final line-up! There are still more speaker announcements to come, and if my scheming pays off, they’re going to be quite special.

I hope that you’ve already added June 19th (the date of the conference) to your calendar, but I’ve got another date for your diary: March 3rd. That’s when tickets will go on sale.

As with last year’s event—Responsive Day Out 2: The Squishening—tickets will be a measly £80 plus VAT (a total of £96). All those fantastic talks for less than a hundred squid.

So make sure you’re at the ready at 11am on Tuesday, the 3rd of March.

And then I’ll see you for a packed day of knowledge bomb dropping on Friday, the 19th of June.

Wednesday, February 11th, 2015

Ordinary plenty

Aaron asked a while back “What do we own?”

I love the idea of owning your content and then syndicating it out to social networks, photo sites, and the like. It makes complete sense… Web-based services have a habit of disappearing, so we shouldn’t rely on them. The only Web that is permanent is the one we control.

But he quite rightly points out that we never truly own our own domains: we rent them. And when it comes to our servers, most of us are renting those too.

It looks like print is a safer bet for long-term storage. Although when someone pointed out that print isn’t any guarantee of perpetuity either, Aaron responded:

Sure, print pieces can be destroyed, but important works can be preserved in places like the Beinecke

Ah, but there’s the crux—that adjective, “important”. Print’s asset—the fact that it is made of atoms, not bits—is also its weak point: there are only so many atoms to go around. And so we pick and choose what we save. Inevitably, we choose to save the works that we deem to be important.

The problem is that we can’t know today what the future value of a work will be. A future president of the United States is probably updating their Facebook page right now. The first person to set foot on Mars might be posting a picture to her Instagram feed at this very moment.

One of the reasons that I love the Internet Archive is that they don’t try to prioritise what to save—they save it all. That’s in stark contrast to many national archival schemes that only attempt to save websites from their own specific country. And because the Internet Archive isn’t a profit-driven enterprise, it doesn’t face the business realities that caused Google to back-pedal from its original mission. Or, as Andy Baio put it, never trust a corporation to do a library’s job.

But even the Internet Archive, wonderful as it is, suffers from the same issue that Aaron brought up with the domain name system—it’s centralised. As long as there is just one Internet Archive organisation, all of our preservation eggs are in one magnificent basket:

Should we be concerned that the technical expertise and infrastructure for doing this work is becoming consolidated in a single organization?

Which brings us back to Aaron’s original question. Perhaps it’s less about “What do we own?” and more about “What are we responsible for?” If we each take responsibility for our own words, our own photos, our own hopes, our own dreams, we might not be able to guarantee that they’ll survive forever, but we can still try everything in our power to keep them online. Maybe by acknowledging that responsibility to preserve our own works, instead of looking for some third party to do it for us, we’re taking the most important first step.

My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved …along with all the millions of other words by millions of other historical nobodies like me out there on the web.

There was a beautiful moment in Cory Doctorow’s closing keynote at last year’s dConstruct. It was an aside to his main argument but it struck like a hammer. Listen in at the 20 minute mark:

They’re the raw stuff of communication. Same for tweets, and Facebook posts, and the whole bit. And this is where some cynic usually says, “Pah! This is about preserving all that rubbish on Facebook? All that garbage on Twitter? All those pictures of cats?” This is the emblem of people who want to dismiss all the stuff that happens on the internet.

And I’m supposed to turn around and say “No, no, there’s noble things on the internet too. There’s people talking about surviving abuse, and people reporting police violence, and so on.” And all that stuff is important but I’m going to speak for the banal and the trivial here for a moment.

Because when my wife comes down in the morning—and I get up first; I get up at 5am; I’m an early riser—when my wife comes down in the morning and I ask her how she slept, it’s not because I want to know how she slept. I sleep next to my wife. I know how my wife slept. The reason I ask how my wife slept is because it is a social signal that says:

I see you. I care about you. I love you. I’m here.

And when someone says something big and meaningful like “I’ve got cancer” or “I won” or “I lost my job”, the reason those momentous moments have meaning is because they’ve been built up out of this humus of a million seemingly-insignificant transactions. And if someone else’s insignificant transactions seem banal to you, it’s because you’re not the audience for that transaction.

The medieval scribes of Ireland, out on the furthermost edges of Europe, worked to preserve the “important” works. But occasionally they would also note down their own marginalia like:

Pleasant is the glint of the sun today upon these margins, because it flickers so.

Short observations of life in fewer than 140 characters. Like this lovely example written in ogham, a Morse-like system of encoding the western alphabet in lines and scratches. It reads simply “latheirt”, which translates to something along the lines of “massive hangover.”

I’m glad that those “unimportant” words have also been preserved.

Centuries later, the Irish poet Patrick Kavanagh would write about the desire to “wallow in the habitual, the banal”:

Wherever life pours ordinary plenty.

Isn’t that a beautiful description of the web?

Saturday, February 7th, 2015

Hackfarming Blood Buddies

Every year at Clearleft, there’s a week where we step away from client work, go off the grid, and disappear into the countryside to work on something fun. We call it Hack Farm.

Hack Farm usually takes place around November, but due to various complexities, Hack Farm 2014 wound up getting pushed back to the start of 2015. Last week we formed a convoy, stocked up on the bare essentials (food, post-it notes, and booze), and drove west for four hours until we were in Herefordshire at a place called The Colloquy—a return to the site of the first ever Hack Farm.

Arrival at The Colloquy.

I kept notes on each day.

Day Zero

We arrive in the late afternoon, settle into our respective rooms, and eat some wonderful home-cooked food. After dinner, even though everyone’s pretty knackered, we agree that it’s best to figure out what everyone will be working on for the next few days.

Everyone gets a chance to pitch their ideas, and then we all do some dot-voting to whittle down the options. In short order, we arrive at four different projects for four different teams.

One of my ideas is chosen. This is something I’ve been pitching every single year at Hack Farm, and every single year it ends up narrowly missing out. This year, it’s finally going to happen!

On my team I’ve got Rich, Batesy, Andy P, and Tessa.

Day One

We choose a room to use as our home base and begin.

We start by agreeing on a hypothesis—more of an assumption, really—that we’ll be basing everything upon:

People are more likely to give blood if they are not alone.

Hypothesis

We start writing down questions that people might ask related to giving blood. Some of these questions might well turn out to be out of scope for this project, or can already be better answered by an existing service like blood.co.uk, e.g.:

  • Can I give blood?
  • How often can I give blood?
  • Will it hurt?
  • How long will it take?

Other questions are potentially open to us providing answers:

  • Where can I give blood?
  • When can I give blood?
  • Who else is giving blood?

That last one is a question that doesn’t seem to be answered anywhere else.

We brain-dump potential data sources that could answer the “who”, “when”, and “where” questions. The data from blood.co.uk could potentially answer the “when” and “where” questions, e.g. when and where is the next donation? Data from Twitter, Facebook, or your address book could answer the “who” questions, e.g. who are you, and who are your friends?

We brainstorm potential outputs of the project. The obvious choices are a website or a native app, but there could also potentially be email, SMS, or even posters and postcards.

We think about potential incentives for the users of this service: peer pressure, gamification, bragging rights, reassurance, etc.

So there’s a lot of divergent thinking going on: at this stage, there are no bad ideas (no, really!).

We also establish the goals of the project—what we would like to see happen as a result of this service existing. The very minimum success criterion is:

Someone gives blood who hasn’t given blood before.

There’s a follow-on criterion for measuring longer-term success:

A group gives blood regularly.

We split into two groups to work on a propositional statement, then come together to merge what we came up with. Here it is:

For people who want to give blood, who need encouragement and motivation, Blood Buddies brings together people you know to make it a shared experience. That way, you’re more likely to give blood.

Unlike blood.co.uk, it frames giving blood as a shared, rewarding activity.

Proposition

James and Tessa

Blood Buddies is a codename for now. The final service might have a different name, like Bluddies maybe.

After lunch, we start to work on user stories and personas. After a while, we think we’ve got a pretty clear idea for the minimal viable user journey.

Now we take a little break and stretch our legs.

A stroll through the fields.

When we regroup, we start researching technical possibilities (like Twitter authentication, GMail address book, Facebook contacts, etc.), while also throwing ideas around to do with branding, tone of voice, etc. James Box comes in and helps us out with a handy branding exercise.

In an effort to name the thing, we create a page filled with relevant words that might be combined into a name. Eventually we reach the “just fucking end it!” moment. The service is called “Blood Buddies” after all. The tagline is …drumroll… “Get plastered together!”

Meanwhile, having investigated the technical possibilities, it looks like Twitter’s API will be the easiest (relatively) to start with.

Vocabulary

Kanban

We write out our epics and create a little kanban board. We have our tasks figured out:

  • implement sign-in with Twitter,
  • create a style guide,
  • mock up the homepage,
  • mock up a sign-up form,
  • and more.

Tomorrow everyone can assign a task to themselves and get cracking (some people have started already).

Day Two

After a late Superbowl night, we arise and begin tackling the day’s tasks.

I managed to get a very rudimentary Twitter sign-in working (eventually!) so now my task is to do something with the data that Twitter is returning …namely, storing it in a database. And because this relies on signing in with Twitter to get any results, this needs to get on to an actual web server as soon as possible.

Cue a day of wrangling with PHP, MySQL, OAuth, Git, Apache, SSH keys, and DNS settings …with an intermittent internet connection that drops out at the most inconvenient of times.

Andy is storyboarding the promo video that will help sell the story of Blood Buddies.

Storyboard

Meanwhile James and Tessa are hammering out a visual language for Blood Buddies. So the work is being approached from two different ends: the server side (how it works behind the scenes) and the interface (how it looks to the end user). In the middle is the user flow, and that’s what Richard is working on, also looking ahead past the minimal viable product to include features that can be added later.

By late afternoon the most basic server-side functionality is done, and the site is live at bloodbuddies.co.uk. Of course, there’s very, very, very little to see there, but at least our team can start adding themselves to the database.

So now the task is to join up the back-end functionality with the visual design and copy. As these strands come together, it feels like we’re getting back to a more collaborative phase: whereas yesterday involved lots of group activity, today was more splintered. But that’s going to change now that we’re going to join up the individual pieces into a unified interface.

Today felt quite productive considering that three out of the five people on our team are on cooking duty.

Spaghetti and meatballs

Dinnertime

Day Three

Today is a day of rest. It’s a beautiful day. We go for a drive through the countryside, pop into a pub for some grub, and go walking on the hills.

Walking west to Wales.

Day Four

We’re down to just three team members today. Tessa is working on a different project and Andy is spending the day sleeping, puking, and generally recovering from a heavy night. N00b.

We get cracking on with integrating the visual design with the back-end functionality. That means bashing out some CSS. After an hour or two, we’ve got something basic in place.

While James works on refining the visuals—including a kick-ass logo—Richard is writing lots and lots of copy, and figuring out user flows.

Meanwhile I’m trying to get server-side stuff in place, fiddling with DNS and email; not my favourite activity.

Once the DNS is pointed to the Digital Ocean server, and with the Twitter sign-in working okay, we realise that we’ve actually launched! Admittedly it’s very basic and it needs plenty of refinement, but it’s a start.

We head out for the evening meal together. Just one more day to go.

The Stagg Inn

Day Five

James starts the day by finishing up his kick-ass Blood Buddies logo.

Richard is writing and editing lots of witty copy.

Andy is storyboarding a promotional video.

Rich, me, James, and Andy

I’m trying to get emails working, so that when someone you know signs up to Blood Buddies, we can email you to let you know. By lunchtime, we’ve got it all working.

Lots of the details are in place now: the logo, web fonts, an error page, a favicon …it feels good to be iterating on a live site.

Kanban progress

Final day tasks

Device testing

After lunch, James, Richard, and I work on expanding out the home page. Once everything is in pretty good shape, we all come together (with Andy and Tessa) to talk about what the next steps could be after this minimum viable product.

There’s consensus that the most important step would be adding more ways of signing into the site, instead of just Twitter. Also, there’s a lot of functionality we could add if we can scrape the data from blood.co.uk.

But that’s for another day. Right now we’ve got a barebones site, but it’s working.

We shipped.

Friday, January 30th, 2015

Extensibility

I’ve said it before, but I’m going to reiterate my conflicted feelings about Web Components:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

There are broadly two ways that they could potentially be used:

  1. Web Components are used by developers to incrementally add more powerful elements to their websites. This evolutionary approach feels very much in line with the thinking behind the extensible web manifesto. Or:
  2. Web Components are used by developers as a monolithic platform, much like Angular or Ember is used today. The end user either gets everything or they get nothing.

The second scenario is a much more revolutionary approach—sweep aside the web that has come before, and usher in a new golden age of Web Components. Personally, I’m not comfortable with that kind of year-zero thinking. I prefer evolution over revolution:

Revolutions sometimes change the world to the better. Most often, however, it is better to evolve an existing design rather than throwing it away. This way, authors don’t have to learn new models and content will live longer. Specifically, this means that one should prefer to design features so that old content can take advantage of new features without having to make unrelated changes. And implementations should be able to add new features to existing code, rather than having to develop whole separate modes.

The evolutionary model is exemplified by the design of HTML 5.

The revolutionary model is exemplified by the design of XHTML 2.

I really hope that the Web Components model goes down the first route.

Up until recently, my inner Web Components pendulum was swinging towards the hopeful end of my spectrum of anticipation. That was mainly driven by the ability of custom elements to extend existing HTML elements.

So, for example, instead of creating a new element like this:

<taco-button>...</taco-button>

…you can piggyback off the existing semantics of the button element like this:

<button is="taco-button">...</button>

For a real-world example, see GitHub’s use of <time is="time-ago">.
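Under the hood, that declarative is attribute pairs up with a registration step in JavaScript. Here’s a rough sketch using the document.registerElement API as it stood at the time (with the same made-up taco-button example):

// Register a custom element that extends the built-in button.
var TacoButton = document.registerElement('taco-button', {
    prototype: Object.create(HTMLButtonElement.prototype),
    extends: 'button'
});

Browsers that understand custom elements upgrade the button; everyone else still gets a perfectly usable button element. That’s the progressive enhancement appeal.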

I wrote about creating responsible Web Components:

That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.

I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”

Peter Gasston also has a great post on best practice for creating custom elements:

It’s my opinion that, for as long as there is a dependence on JS for custom elements, we should extend existing elements when writing custom elements. It makes sense for developers, because new elements have access to properties and methods that have been defined and tested for many years; and it makes sense for users, as they have fallback in case of JS failure, and baked-in accessibility fundamentals.

But now it looks like this superpower of custom elements is being nipped in the bud:

It also does not address subclassing normal elements. Again, while that seems desirable the current ideas are not attractive long term solutions. Punting on it in order to ship a v1 available everywhere seems preferable.

Now, I’m not particularly wedded to the syntax of using the is="" attribute to extend existing elements …but I do think that the ability to extend existing elements declaratively is vital. I’m not alone, although I may very well be in the minority.

Bruce has outlined some use cases and Steve Faulkner has enumerated the benefits of declarative extensibility:

I think being able to extend existing elements has potential value to developers far beyond accessibility (it just so happens that accessibility is helped a lot by re-use of existing HTML features.)

Bruce concurs:

Like Steve, I’ve no particular affection (or enmity) towards the <input type="radio" is="luscious-radio"> syntax. But I’d like to know, if it’s dropped, how progressive enhancement can be achieved so we don’t lock out users of browsers that don’t have web components capabilities, JavaScript disabled or proxy browsers. If there is a concrete plan, please point me to it. If there isn’t, it’s irresponsible to drop a method that we can see working in the example above with nothing else to replace it.

He adds:

I also have a niggling worry that this may affect the uptake of web components.

I think he’s absolutely right. I think there are many developers out there in a similar position to me, uncertain exactly what to make of this new technology. I was looking forward to getting really stuck into Web Components and figuring out ways of creating powerful little extensions that I could start using now. But if Web Components turn out to be an all-or-nothing technology—a “platform”, if you will—then I will not only not be using them, I’ll be actively arguing against their use.

I really hope that doesn’t happen, but I must admit I’m not hopeful—my inner pendulum has swung firmly back towards the nervous end of my anticipation spectrum. That’s because I’m getting the distinct impression that the priorities being considered for Web Components are those of JavaScript framework creators, rather than web developers looking to add incremental improvements while maintaining backward compatibility.

If that’s the case, then Web Components will be made in the image of existing monolithic MVC frameworks that require JavaScript to do anything, even rendering content. To me, that’s a dystopian vision, one I can’t get behind.