Tuesday, December 16th, 2014

The Session trad tune machine

Most pundits call it “the Internet of Things” but there’s another phrase from Andy Huntington that I first heard from Russell Davies: “the Geocities of Things.” I like that.

I’ve never had much exposure to this world of hacking electronics. I remember getting excited about the possibilities at a Brighton BarCamp back in 2008:

I now have my own little arduino kit, a bread board and a lucky bag of LEDs. Alas, I know next to nothing about basic electronics so I’m really going to have to brush up on this stuff.

I never did do any brushing up. But that all changed last week.

Seb is doing a new two-day workshop. He doesn’t call it Internet Of Things. He doesn’t call it Geocities Of Things. He calls it Stuff That Talks To The Interwebs, or STTTTI, or ST4I. He needed some guinea pigs to test his workshop material on, so Clearleft volunteered as tribute.

In short, it was great! And this time, I didn’t stop hacking when I got home.

First off, every workshop attendee gets a hand-picked box of goodies to play with and keep: an arduino mega, a wifi shield, sensors, screens, motors, lights, you name it. That’s the hardware side of things. There are also code samples and libraries that Seb has prepared in advance.

Getting ready to workshop with @Seb_ly. Unwrapping some Christmas goodies from Santa @Seb_ly.

Now, remember, I lack even the most basic knowledge of electronics, but after two days of fiddling with this stuff, it started to click.

Blinkenlights. Hello, little fella.

On the first workshop day, we all did the same exercises: connecting things up, getting them to talk to the internet, that kind of thing. For the second workshop day, Seb encouraged us to think about what we might each like to build.

I was quite taken with the ability of the piezo buzzer to play rudimentary music. I started to wonder if there was a way to hook it up to The Session and have it play the latest jigs, reels, and hornpipes that have been submitted to the site in ABC notation. A little bit of googling revealed that someone had already taken a stab at writing an ABC parser for arduino. I didn’t end up using that code, but it convinced me that what I was trying to do wasn’t crazy.

So I built a machine that plays Irish traditional music from the internet.

Playing with hardware and software, making things that go beep in the night.

The hardware has a piezo buzzer, an “on” button, an “off” button, a knob for controlling the speed of the tune, and an obligatory LED.

The software has a countdown timer that polls a URL every minute or so. The URL is http://tune.adactio.com/. That in turn uses The Session’s read-only API to grab the latest tune activity and then get the ABC notation for whichever tune is at the top of that list. Then it does some cleaning up—removing some of the more advanced ABC stuff—and outputs a single line of notes to be played. I’m fudging things a bit: the device has the range of a tin whistle, and expects tunes to be in the key of D or G, but seeing as that’s at least 90% of Irish traditional music, it’s good enough.
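For the curious, the cleaning-up step mostly amounts to throwing away the ABC header fields and anything fancy in the tune body. This isn’t the actual code running at tune.adactio.com—just a rough JavaScript sketch of the kind of thing it does:

function simplifyAbc(abc) {
    return abc
        .split('\n')
        // Keep only the tune body: drop header fields like X:, T:, M:, and K:
        .filter(function (line) {
            return line.trim() && !/^[A-Za-z]:/.test(line);
        })
        .join(' ')
        .replace(/"[^"]*"/g, '')   // chord symbols
        .replace(/\{[^}]*\}/g, '') // grace notes
        .replace(/![^!]*!/g, '')   // ornaments and decorations
        .replace(/\s+/g, ' ')
        .trim();
}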

Whenever there’s a new tune, it plays it. Or you can hit the satisfying “on” button to manually play back the latest tune (and yes, you can hit the equally satisfying “off” button to stop it). Being able to adjust the playback speed with a twiddly knob turns out to be particularly handy if you decide to learn the tune.

I added one more lo-fi modification. I rolled up a piece of paper and placed it over the piezo buzzer to amplify the sound. It works surprisingly well. It’s loud!

Rolling my own speaker cone, quite literally.

I’ll keep tinkering with it. It’s fun. I realise I’m coming to this whole hardware-hacking thing very late, but I get it now: it really does feel similar to that feeling you would get when you first figured out how to make a web page back in the days of Geocities. I’ve built something that’s completely pointless for most people, but has special meaning for me. It’s ugly, and it’s inefficient, but it works. And that’s a great feeling.

(P.S. Seb will be running his workshop again on the 3rd and 4th of February, and there will be a limited number of early-bird tickets available for one hour, between 11am and midday this Thursday. I highly recommend you grab one.)

Monday, December 8th, 2014

Responsible Web Components

Bruce has written a great article called On the accessibility of web components. Again. In it, he takes issue with the tone of a recent presentation on web components, wherein Dimitri Glazkov declares:

Custom elements is really neat. It basically says, “HTML it’s been a pleasure”.

Bruce paraphrases this as:

Bye-bye HTML; you weren’t useful enough. Hello, brave new world of custom elements.

Like Bruce, I’m worried about this year-zero thinking. First of all, I think it’s self-defeating. In my experience, the web technologies that succeed are the ones that build upon what already exists, rather than sweeping everything aside. Evolution, not revolution.

Secondly, web components—or more specifically, custom elements—already allow us to extend existing HTML elements. That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.

But, as Bruce asks:

Snarking aside, why do so few people talk about extending existing HTML elements with web components? Why’s all the talk about brand new custom elements? I don’t know.

Patrick leaves a comment with his answer:

The issue of not extending existing HTML elements is exactly the same that we’ve seen all this time, even before web components: developers who are tip-top JavaScripters, who already plan on doing all the visual feedback/interactions (for mouse users like themselves) in script anyway themselves, so they just opt for the most neutral starting point…a div or a span. Web components now simply gives the option of then sweeping all that non-semantic junk under a nice, self-contained rug.

That’s a depressing thought. But it might very well be true.

Stuart also comments:

Why aren’t web components required to be created with is="some-component" on an existing HTML element? This seems like an obvious approach; sure, someone who wants to make something meaningless will just do <div is=my-thing> or (worse) <body is=my-thing> but it would provide a pretty heavy hint that you’re supposed to be doing things The Right Way, and you’d get basic accessibility stuff thrown in for free.

That’s a good question. After all, writing <new-shiny></new-shiny> is basically the same as <span is="new-shiny"></span>. It might not make much of a difference in the case of a span or div, but it could make an enormous difference in the case of, say, form elements.

Take a look at IBM’s library of web components. They’re well-written and they look good, but time and time again, they create new custom elements instead of extending existing HTML.

Although, as Bruce points out:

Of course, not every new element you’ll want to make can extend an existing HTML element.

But I still think that the majority of web components could, and should, extend existing elements. Addy Osmani has put together some design principles for web components and Steve Faulkner has created a handy punch-list for web components, but I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”

Rather than just complain about this kind of thing, I figured I’d try my hand at putting it into practice…

Dave recently made a really nice web component for playing back podcast audio files. I could imagine using something like this on Huffduffer. It’s called podcast-player and you use it thusly:

<podcast-player src="my.mp3"></podcast-player>

One option for providing fallback content would be to include it within the custom element tags:

<podcast-player src="my.mp3">
    <a href="my.mp3">Listen</a>
</podcast-player>

That would require minimal changes to Dave’s code. I’d just need to make sure that the fallback content within podcast-player elements is removed in supporting browsers.

I forked Dave’s code to try out another idea. I figured that if the starting point was a regular link to the audio file, that would also be a way of providing fallback for browsers that don’t cut the web component mustard:

<a href="my.mp3" is="podcast-player">Listen</a>

It required surprisingly few changes to the code. I needed to remove the fallback content (that “Listen” text), and I needed to prevent the default behaviour (following the href), but it was fairly straightforward.
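For illustration, registering a type extension with the custom elements API of the day (document.registerElement, in the browsers that supported it) looked something like this—a simplified sketch rather than Dave’s actual code, which does considerably more:

// A simplified sketch, not the real podcast-player implementation.
var proto = Object.create(HTMLAnchorElement.prototype);

proto.createdCallback = function () {
    var audioSrc = this.getAttribute('href');
    // Prevent the default behaviour: this link no longer navigates.
    this.addEventListener('click', function (event) {
        event.preventDefault();
    });
    // Remove the fallback content ("Listen") and build the player
    // interface in its place, using audioSrc for the audio element.
    this.textContent = '';
};

document.registerElement('podcast-player', {
    prototype: proto,
    extends: 'a'
});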

However, I’m sure it could be improved in one of two ways:

  1. I should probably supply an ARIA role to the extended link. I’m not sure what would be the right one, though …menu or menubar perhaps?
  2. Perhaps a link isn’t the right element to extend. Really I should be extending an audio element (which itself allows for fallback content). When I tried that, I found it too hard to overcome the default browser rules for hiding anything between the opening and closing tags. But you’re smarter than me, so I bet you could create <audio is="podcast-player">.

Fork the code and have at it.

Thursday, December 4th, 2014

Mindcraft

As something of a science geek, I’m a big fan of the work of the Wellcome Trust:

We support the brightest minds in biomedical research and the medical humanities. Our breadth of support includes public engagement, education and the application of research to improve health.

I was very excited when Clearleft had the opportunity to work with them—we redesigned the Wellcome Library a while back. That was a fun responsive project, and an early use of a pattern portfolio as the deliverable.

We’ve been working with them on some other projects since then. We helped out with Mosaic, their terrific magazine site. I really enjoyed popping in to their fantastic building to chat with their talented designers.

The most recent Clearleft/Wellcome collaboration is something called Mindcraft. This started as a completely open-ended project—no one was quite sure what form the finished result would take. Over time it developed into a narrative-based series of historical events brought to life with browser technologies.

I didn’t work on this project but I loved watching it come together. The source material made for an interesting work environment.

Crazy wall. Maps and legends.

Graham and Danielle did the front-end development, bringing Mikey’s designs to life, once Rich and Ben figured out the flow (all overseen by Jess).

The press release for Mindcraft describes it as “immersive”, which immediately sets alarm bells ringing in expectation of big, scrolljacking pages …and to be honest, Mindcraft does have elements of that. It’s primarily intended to be visited on a large screen with a fast connection (although it’ll work on any size of screen). But I think it manages to strike a pretty healthy balance of performance and “richness.” It certainly doesn’t feel gratuitous. The use of sound, imagery, and interaction is all in service to the story.

And boy, what a story!

Mindcraft explores a century of madness, murder and mental healing, from the arrival in Paris of Franz Anton Mesmer with his theories of ‘animal magnetism’ to the therapeutic power of hypnotism used by Freud.

I suggest you put on some headphones, make your browser window fullscreen, and start your journey.

It’s creepy, atmospheric, entertaining, and educational, all at the same time. I really like it. And I’m not just saying that because of Clearleft’s involvement. Like I said, I’m a science geek.

Wednesday, December 3rd, 2014

Commons People

Creative Commons licences have a variety of attributes that can be combined:

  • No-derivatives: the work can be reused, but not altered.
  • Attribution: the work must be credited.
  • Share-alike: any derivatives must share the same licence.
  • Non-commercial: the work can be used, but not for commercial purposes.

That last one is important. If you don’t attach a non-commercial licence to your work, then your work can be resold for profit (it might be remixed first, or it might have to include your name—that all depends on what other attributes you’ve included in the licence).

If you’re not comfortable with anyone reselling your work, you should definitely choose a non-commercial licence.

Flickr is planning to sell canvas prints of photos that have been licensed under Creative Commons licences that don’t include the non-commercial clause. They are perfectly within their rights to do this—this is exactly what the licence allows—but some people are very upset about it.

Jeffrey says it’s short-sighted and sucky because it violates the spirit in which the photos were originally licensed. I understand that feeling, but that’s simply not the way that the licences work. If you want to be able to say “It’s okay for some people to use my work for profit, but it’s not okay for others”, then you need to apply a more restrictive licence (like copyright, or Creative Commons Non-commercial) and then negotiate on a case-by-case basis for each usage.

But if you apply a licence that allows commercial usage, you must accept that there will be commercial usages that you aren’t comfortable with. Frankly, Flickr selling canvas prints of your photos is far from a worst-case scenario.

I license my photos under a Creative Commons Attribution licence. That means they can be used anywhere—including being resold for profit—as long as I’m credited as the photographer. Because of that, my photos have shown up in all sorts of great places: food blogs, Wikipedia, travel guides, newspapers. But they’ve also shown up in some awful places, like Techcrunch. I might not like that, but it’s no good me complaining that an organisation (even one whose values I disagree with) is using my work exactly as the licence permits.

Before allowing commercial use of your creative works, you should ask “What’s the worst that could happen?” The worst that could happen includes scenarios like white supremacists, misogynists, or whacko conspiracy theorists using your work on their websites, newsletters, and billboards (with your name included if you’ve used an attribution licence). If you aren’t willing to live with that, do not allow commercial use of your work.

When I chose to apply a Creative Commons Attribution licence to my photographs, it was because I decided I could live with those worst-case scenarios. I decided that the potential positives outweighed the potential negatives. I stand by that decision. My photos might appear on a mudsucking site like Techcrunch, or get sold as canvas prints to make money for Flickr, but I’m willing to accept those usages in order to allow others to freely use my photos.

Some people have remarked that this move by Flickr to sell photos for profit will make people think twice about allowing commercial use of their work. To that I say …good! It has become clear that some people haven’t put enough thought into their licensing choices—they never asked “What’s the worst that could happen?”

And let’s be clear here: this isn’t some kind of bait’n’switch by Flickr. It’s not like liberal Creative Commons licensing is the default setting for photos hosted on that site. The default setting is copyright, all rights reserved. You have to actively choose a more liberal licence.

So I’m trying to figure out how it ended up that people chose the wrong licence for their photos. Because I want this to be perfectly clear: if you chose a licence that allows for commercial usage of your photos, but you’re now upset that a company is making commercial usage of your photos, you chose the wrong licence.

Perhaps the licence-choosing interface could have been clearer. Instead of simply saying “here’s what attribution means” or “here’s what non-commercial means”, perhaps it should also include lists of pros and cons: “here’s some of the uses you’ll be enabling”, but also “here’s the worst that could happen.”

Jen suggests a new Creative Commons licence that essentially inverts the current no-derivatives licence; this would be a “derivative works only” licence. But unfortunately it sounds a bit too much like a read-my-mind licence:

What if I want to allow someone to use a photo in a conference slide deck, even if they are paid to present, but I don’t want to allow a company that sells stock photos to snatch up my photo and resell it?

Jen’s post is entitled I Don’t Want “Creative Commons By” To Mean You Can Rip Me Off …but that’s exactly what a Creative Commons licence without a non-commercial clause can mean. Of course, it’s not the only usage that such a licence allows (it allows many, many positive scenarios), but it’s no good pretending otherwise. If you’re not comfortable with that use-case, don’t enable it. Personally, I’m okay with that use-case because I believe it is offset by the more positive usages.

And that’s an important point: this is a personal decision, and not one to be taken lightly. Personally, I’m not a professional or even amateur photographer, so commercial uses of my photos are fine with me. Most professional photographers wouldn’t dream of allowing commercial use of their photos without payment, and rightly so. But even for non-professionals like myself, there are implications to allowing commercial use (one of those implications being that there will be usages you won’t necessarily be happy about).

So, going back to my earlier question, does the licence-choosing interface on Flickr make the implications of your choice clear?

Here’s the page for applying licences. You get to it by going to “Settings”, then “Privacy and Permissions,” then under “Defaults for new uploads,” the setting “What license will your content have.”

On that page, there’s a heading “Which license is right for you?” That has three hyperlinks:

  1. A page on Creative Commons about the licences,
  2. Frequently Asked Questions,
  3. A page of issues specifically related to images.

In that list of Frequently Asked Questions, there’s What things should I think about before I apply a Creative Commons license? and How should I decide which license to choose? There’s some good advice in there (like when in doubt, talk to a lawyer), but at no point does it suggest that you should ask yourself “What’s the worst that could happen?”

So it certainly seems that Flickr could be doing a better job of making the consequences of your licensing choice clearer. That might have the effect of making it a scarier choice, and it might put some people off using Creative Commons licences. But I don’t think that’s a bad thing. I would much rather that people made an informed decision.

When I chose to apply a Creative Commons Attribution licence to my photos, I did not make the decision lightly. I assumed that others who made the same choice also understood the consequences of that decision. Now I’m not so sure. Now I think that some people made uninformed licensing decisions in the past, which explains why they’re upset now (and I’m not blaming them for making the wrong decision—Flickr, and even Creative Commons, could have done a better job of providing relevant, easily understandable information).

But this is one Internet Outrage train that I won’t be climbing aboard. Alas, that means I must now be considered a corporate shill who’s sold out to The Man.

Pointing out that a particular Creative Commons licence allows the Ku Klux Klan to use your work isn’t the same as defending the Ku Klux Klan.

Pointing out that a particular Creative Commons licence allows a hardcore porn film to use your music isn’t the same as defending hardcore porn.

Pointing out that a particular Creative Commons licence allows Yahoo to flog canvas prints of your photos isn’t the same as defending Yahoo.

Tuesday, November 25th, 2014

Interstelling

Jessica and I entered the basement of The Dukes at Komedia last weekend to listen to Sarah and her band Spacedog provide live musical accompaniment to short sci-fi films from the end of the nineteenth and start of the twentieth centuries.

It was part of the Cine City festival, which is still going on here in Brighton—Spacedog will also be accompanying a performance of John Wyndham’s The Midwich Cuckoos, and there’s going to be a screening of François Truffaut’s brilliant film version of Ray Bradbury’s Fahrenheit 451 in the atmospheric surroundings of Brighton’s former reference library. I might try to get along to that, although there’s a good chance that I might cry at my favourite scene. Gets me every time.

Those 100-year old sci-fi shorts featured familiar themes—time travel, monsters, expeditions to space. I was reminded of a recent gathering in San Francisco with some of my nerdiest of nerdy friends, where we discussed which decade might qualify as the golden age of science fiction cinema. The 1980s certainly punched above their weight—1982 and 1985 were particularly good years—but I also said that I think we’re having a bit of a sci-fi cinematic golden age right now. This year alone we’ve had Edge Of Tomorrow, Guardians Of The Galaxy, and Interstellar.

Ah, Interstellar!

If you haven’t seen it yet, now would be a good time to stop reading. Imagine that I’ve written the word “spoilers” in all-caps, followed by many many line breaks before continuing.

Ten days before we watched Spacedog accompanying silent black and white movies in a tiny basement theatre, Jessica and I watched Interstellar on the largest screen we could get to. We were in Seattle, which meant we had the pleasure of experiencing the film projected in 70mm IMAX at the Pacific Science Center, right by the Space Needle.

I really, really liked it. Or, at least, I’ve now decided that I really, really liked it. I wasn’t sure when I first left the cinema. There were many things that bothered me, and those things battled against the many, many things that I really enjoyed. But having thought about it more—and, boy, does this film encourage thought and discussion—I’ve been able to resolve quite a few of the issues I was having with the film.

I hate to admit that most of my initial questions were on the science side of things. I wish I could’ve switched off that part of my brain.

There’s an apocryphal story about an actor asking “Where’s the light coming from?”, and being told “Same place as the music.” I distinctly remember thinking that very same question during Interstellar. The first planetfall of the film lands the actors and the audience on a world in orbit around a black hole. So where’s the light coming from?

The answer turns out to be that the light is coming from the accretion disk of that black hole.

But wouldn’t the radiation from the black hole instantly fry any puny humans that approach it? Wouldn’t the planet be ripped apart by the gravitational tides?

Not if it’s a rapidly-spinning supermassive black hole with a “gentle” singularity.

These are nit-picky questions that I wish I wasn’t thinking of. But I like the fact that there are answers to those questions. It’s just that I need to seek out those answers outside the context of the movie—I should probably read Kip Thorne’s book. The movie gives hints at resolving those questions—there’s just one mention of the gentle singularity—but it’s got other priorities: narrative, plot, emotion.

Still, I wish that Interstellar had managed to answer my questions while the film was still happening. This is something that Inception managed brilliantly: for all its twistiness, you always know exactly what’s going on, which is no mean feat. I’m hoping and expecting that Interstellar will reward repeated viewings. I’m certainly really looking forward to seeing it again.

In the meantime, I’ll content myself with re-watching Inception, which makes a fascinating companion piece to Interstellar. Both films deal with time and gravity as malleable, almost malevolent forces. But whereas Cobb travels as far inward as it is possible for a human to go, Coop travels as far outward as it is possible for our species to go.

Interstellar is kind of a mess. There’s plenty of sub-par dialogue and strange narrative choices. But I can readily forgive all that because of the sheer ambition and imagination on display. I’m not just talking about the imagination and ambition of the film-makers—I’m talking about the ambition and imagination of the human race.

That’s at the heart of the film, and it’s a message I can readily get behind.

Before we even get into space, we’re shown a future that, by any reasonable definition, would be considered a dystopia. The human race has been reduced to a small fraction of its former population, technological knowledge has been lost, and the planet is dying. And yet, where this would normally be the perfect storm required to show roving bands of road warriors pillaging their way across the dusty landscape, here we get an agrarian society with no hint of violence. The nightmare scenario is not that the human race is wiped out through savagery, but that the human race dies out through a lack of ambition and imagination.

Religion isn’t mentioned once in this future, but Interstellar does feature a deus ex machina in the shape of a wormhole that saves the day for the human race. I really like the fact that this deus ex machina isn’t something that’s revealed at the end of the movie—it’s revealed very early on. The whole plot turns out to be a glorious mash-up of two paradoxes: the bootstrap paradox and the twin paradox.

The end result feels like a mixture of two different works by Arthur C. Clarke: The Songs Of Distant Earth and 2001: A Space Odyssey.

2001 is the more obvious work to compare it to, and the film readily invites that comparison. Many reviewers have been quick to point out that Interstellar doesn’t reach the same heights as Kubrick’s 2001. That’s a fair point. But then again, I’m not sure that any film can ever reach the bar set by 2001. I honestly think it’s as close to perfect as any film has ever come.

But I think it’s worth pointing out that when 2001 was released, it was not greeted with universal critical acclaim. Quite the opposite. Many reviewers found it tedious, cold, and baffling. It divided opinion greatly …much like Interstellar is doing now.

In some ways, Interstellar offers a direct challenge to 2001—what if mankind’s uplifting is not caused by benevolent alien beings, but by the distant descendants of the human race?

This is revealed as a plot twist, but it was pretty clearly signposted from early in the film. So, not much of a plot twist then, right?

Well, maybe not. What if Coop’s hypothesis—that the wormhole is the creation of future humans—isn’t entirely correct? He isn’t the only one who crosses the event horizon. He is accompanied by the robot TARS. In the end, the human race is saved by the combination of Coop the human’s connection to his daughter, and the analysis carried out by TARS. Perhaps what we’re witnessing there is a glimpse of the true future for our species: human-machine collaboration. After all, if humanity is going to transcend into a fifth-dimensional species at some future point, it’s unlikely to happen through biology alone. But if you combine the best of the biological—a parent’s love for their child—with the best of technology, then perhaps our post-human future becomes not only plausible, but inevitable.

Deus ex machina.

Thinking about the future of the species in this co-operative way helps alleviate the uncomfortable feeling I had that Interstellar was promoting a kind of Manifest Destiny for the human race …although I’m not sure that I’m any more comfortable with that being replaced by a benevolent technological determinism.

Tuesday, November 18th, 2014

Webiness

John Gruber quite rightly skewers a paywall-protected “sky is falling” piece in the Wall Street Journal called The Web Is Dying; Apps Are Killing It, writing that native apps are part of the web:

They’re just superior clients to open Internet services.

This is something I wrote about earlier this year:

There’s a whole category of native apps that could just as easily be described as “artisanal web browsers” (and if someone wants to write a browser extension that replaces every mention of “native app” with “artisanal web browser” that would be just peachy).

Instagram’s native app is a web browser.

Facebook’s native app is a web browser.

Twitter’s native app is a web browser.

In that same piece, I try to define exactly what the web is:

Well, the unsexy definition I’ve used in the past is that the web consists of files (e.g. HTML, CSS, JavaScript), accessible at URLs, delivered over HTTP.

John also gives a definition of what the web is:

There are two big four-letter “H” acronyms that powered the web from the beginning: HTML (client), and HTTP (networking protocol). Native apps are just an alternative to HTML running in a web browser (and many native apps still use HTML web views embedded within the apps themselves to render parts of their interface). Almost all native apps use HTTP/S for networking, though.

Notice the difference? Whereas John talks about two things that define the web (HTTP/S and HTML), I talk about three: HTTP(S), HTML, and URLs:

But to be honest, I don’t think that the Hypertext Transfer Protocol is the important part of the web; it’s the URLs that really matter. It’s the addressability of the files that’s the killer app of the web in my opinion.

URLs are what give the web its reach, and that’s what’s still missing from native apps.

But John’s fundamental point that native apps and the web are not fundamentally opposed? I completely agree with that. They are complementary. Irakli Nadareishvili wrote about this false dichotomy recently in a post called Responsive Web Design or Native Mobile Apps?:

Native mobile applications are not going anywhere and the future of all websites is to be responsive. These two assertions are not mutually exclusive, they are complementary – don’t create apps when what you actually need is a website; but also don’t pretend webapps can completely replace native applications, because they can’t.

It’s also worth remembering that even if you’re using a native app—like, say, Facebook or Twitter—you’re still going to spend a lot of time following links and reading stuff that’s rendered in the app, but that lives out on the world wide web. And the reason why those apps can access those resources is because those resources have URLs.

URLs are not an implementation detail. The URI is the thing.

Sunday, November 16th, 2014

Home

There’s nothing quite so tedious as blogging about blogging, but I came across a few heart-warming thoughts recently that it would be remiss of me to let go unremarked, so please indulge me for a moment as I wallow in some meta-blogging.

Marco Arment talks about the trend that many others have noticed, of personal publishing dying out in favour of tweeting:

Too much of my writing in the last few years has gone exclusively into Twitter. I need to find a better balance.

As he rightly points out:

Twitter is a complementary medium to blogging, but it’s not a replacement.

Andy noticed a similar trend in his own writing:

Twitter and Waxy Links cannibalized all the smaller posts, and as my reach grew, I started reserving blogging for more “serious” stuff — mostly longer-form research and investigative writing.

Well, fuck that.

Amber Hewitt also talks about reviving the personal blog:

Someone made an analogy that describes social networks very well. Facebook is your neighborhood, Twitter is your local bar, and your blog is your home. (I guess Instagram is the cafe? “Look what I’m eating!”)

This made me realize I’m neglecting my home. My posts and photos are spread out on different networks and there is no centralized hub.

That reminds me of what Frank said about his site:

In light of the noisy, fragmented internet, I want a unified place for myself—the internet version of a quiet, cluttered cottage in the country.

The wonderful Gina Trapani—who has been publishing on her own site for years now—follows Andy’s lead with some guidelines for short-form blogging:

  • If it’s a paragraph, it’s a post.
  • Negotiate a comfort zone.
  • Traffic is irrelevant.
  • Simplify, simplify.
  • Ask for trusted collaborator feedback.
  • Have fun.

Good advice.

Monday, November 3rd, 2014

Just what is it that you want to do?

The supersmart Scott Jenson just gave a talk at The Web Is in Cardiff, which was, by all accounts, excellent. I wish I could have seen it, but I’m currently chilling out in Florida and I haven’t mastered the art of bilocation.

Last week, Scott wrote a blog post called My Issue with Progressive Enhancement (he wrote it on Google+, which is why you might not have seen it).

In it, he takes to task the idea that—through progressive enhancement—you should be able to offer all functionality to all browsers, thereby foregoing the use of newer technologies that aren’t universally supported.

If that were what progressive enhancement meant, I’d be with him all the way. But progressive enhancement is not about offering all functionality; progressive enhancement is about making sure that your core functionality is available to everyone. Everything after that is, well, an enhancement (the clue is in the name).

The trick to doing this well is figuring out what is core functionality, and what is an enhancement. There are no hard and fast rules.

Sometimes it’s really obvious. Web fonts? They’re an enhancement. Rounded corners? An enhancement. Gradients? An enhancement. Actually, come to think of it, all of your CSS is an enhancement. Your content, on the other hand, is not. That should be available to everyone. And in the case of task-based web thangs, that means the fundamental tasks should be available to everyone …but you can still layer more tasks on top.

If you’re building an e-commerce site, then being able to add items to a shopping cart and being able to check out are your core tasks. Once you’ve got that working with good ol’ HTML form elements, then you can go crazy with your enhancements: animating, transitioning, swiping, dragging, dropping …the sky’s the limit.
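To make that concrete, the enhancement layer might be nothing more than intercepting the form submission when the browser is capable. Here’s a sketch—the class names and the JSON response are made up for illustration:

// Assumes an add-to-cart form that already works with a normal POST.
var form = document.querySelector('form.add-to-cart');
if (form && window.FormData) {
    form.addEventListener('submit', function (event) {
        event.preventDefault();
        var xhr = new XMLHttpRequest();
        xhr.open('POST', form.action);
        xhr.onload = function () {
            // Update the cart indicator in place instead of reloading.
            var cart = JSON.parse(xhr.responseText);
            document.querySelector('.cart-count').textContent = cart.items;
        };
        xhr.send(new FormData(form));
    });
}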

This is exactly what Orde Saunders describes:

I’m not suggesting that you try and replicate all your JavaScript functionality when it’s disabled, above all that’s just not practical. What you should be aiming for is being able to complete the basics - for example adding a product to a shopping cart and then checking out. This is necessarily going to be clunky as judged by current standards and I suggest you don’t spend much time on optimising this process.

Scott asked about building a camera app with progressive enhancement.

Here again, the real question to ask is “what is the core functionality?” Building a camera app is a means to an end, not the end itself. You need to ask what the end goal is. Perhaps it’s “enable people to share photos with their friends.” Going back to good ol’ HTML, you can accomplish that task with:

<input type="file" accept="image/*">

Now that you’ve got that out of the way, you can spend the majority of your time making the best damn camera app you can, using all the latest browser technologies. (Perhaps WebRTC? Maybe use a canvas element to display the captured image data and apply CSS filters on top?)
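The enhanced version can then be gated behind feature detection. A sketch—getUserMedia was still vendor-prefixed at the time, and buildCameraApp is a placeholder for whatever fancy interface you create:

var getUserMedia = navigator.getUserMedia ||
                   navigator.webkitGetUserMedia ||
                   navigator.mozGetUserMedia;

if (getUserMedia && window.CanvasRenderingContext2D) {
    getUserMedia.call(navigator, { video: true }, function (stream) {
        // The browser can capture video: build the camera interface.
        buildCameraApp(stream);
    }, function () {
        // Permission denied or no camera: the file input still works.
    });
} else {
    // No camera API: the file input still works.
}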

Scott says:

My point is that not everything devolves to content. Sometimes the functionality is the point.

I agree wholeheartedly. In fact, I would say that even in the case of “content” sites, functionality is still the point—the functionality would be reading/hearing/accessing content. But I think that Scott is misunderstanding progressive enhancement if he thinks it means providing all the functionality that one can possibly provide.

Mat recently pointed out that there are plenty of enhancements on the Boston Globe site that require JavaScript, but the core functionality is available to everyone.

Scott again:

What I’m chaffing at is the belief that when a page is offering specific functionality, Let’s say a camera app or a chat app, what does it mean to progressively enhance it?

Again, a realtime chat app is a means to an end. What is it enabling? The ability for people to talk to each other over the web? Okay, we can do that using good ol’ HTML—text and form elements—with full page refreshes. That won’t be realtime. That’s okay. The realtime part is an enhancement. Use Web Sockets and WebRTC (in the browsers that support them) to provide the realtime experience. But everyone gets the core functionality.
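As a sketch (with a placeholder endpoint and rendering function), the realtime layer only kicks in when the browser supports it:

// The baseline: a form that POSTs a message and a page that lists messages.
if ('WebSocket' in window) {
    var socket = new WebSocket('wss://example.com/chat');
    socket.onmessage = function (event) {
        appendMessage(JSON.parse(event.data));
    };
    var form = document.querySelector('form.chat');
    form.addEventListener('submit', function (event) {
        event.preventDefault();
        socket.send(form.elements.message.value);
        form.elements.message.value = '';
    });
}
// No WebSocket support? The form still POSTs and the page still refreshes.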

Like I said, the trick is figuring out what’s core functionality and what’s an enhancement.

Ethan provides another example. Let’s say you’re building a browser-based rich text editor that uses JavaScript to do all sorts of formatting on the fly. The core functionality is not the formatting on the fly; the core functionality is being able to edit text.

If progressive enhancement truly meant making all functionality available to everyone, then it would be unworkable. I think that’s a common misconception around progressive enhancement; there’s this idea that using progressive enhancement means that you’re going to spend all your time making stuff work in older browsers. In fact, it’s the exact opposite. As long as you spend a little bit of time at the start making sure that the core functionality works with good ol’ fashioned HTML, then you can spend most of your time trying out the latest and greatest browser technologies.

As Orde put it:

What you are going to be spending the majority of your time and effort on is the enhanced JavaScript version as that is how the majority of your customers will be experiencing your site.

The other Scott—Scott Jehl—wrote a while back:

For us, building with Progressive Enhancement moves almost all of our development time and costs to newer browsers, not older ones.

Progressive Enhancement frees us to focus on the costs of building features for modern browsers, without worrying much about leaving anyone out. With a strongly qualified codebase, older browser support comes nearly for free.

Approaching browser support this way requires a different way of thinking. For everything you’re building, you need to ask “is this core functionality, or is it an enhancement?” and build accordingly. It takes a bit of getting used to, but it gets easier the more you do it (until, after a while, it becomes second nature).

But if you’re thinking about progressive enhancement as “devolving” down—as Scott Jenson describes in his post—then I think you’re on the wrong track. Instead it’s about taking care of the core functionality quickly and then spending your time “enhancing” up.

Scott asks:

Shouldn’t we be allowed to experiment? Isn’t it reasonable to build things that push the envelope?

Absolutely! And the best and safest way to do that is to make sure that you’re providing your core functionality for everyone. Once you do that, you can go nuts with the latest and greatest experimental envelope-pushing technologies, secure in the knowledge that you don’t even need to worry about the fact that they don’t work in older browsers. Geolocation! Offline storage! Device APIs! Anything you can think of, you can use as a powerful enhancement on top of your core tasks.

Once you realise this, it’s immensely liberating to use progressive enhancement. You can have the best of both worlds: universal access to core functionality, combined with all the latest cutting-edge technology too.

Thursday, October 23rd, 2014

Be progressive

Aaron wrote a great post a little while back called A Fundamental Disconnect. In it, he points to a worldview amongst many modern web developers, who see JavaScript as a universally-available technology in web browsers. They are, in effect, viewing a browser’s JavaScript engine as a runtime environment, and treating web development no differently from any other kind of software development.

The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.

Treating JavaScript support in “the browser” as a known quantity is as much of a consensual hallucination as deciding that all viewports are 960 pixels wide. Even that phrasing—“the browser”—shows a framing that’s at odds with the reality of developing for the web; we don’t have to think about “the browser”, we have to think about browsers:

Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.

While we might like to think that browsers have all reached a certain level of equilibrium, as Aaron puts it “the Web is messy”:

And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.

Please don’t think that either Aaron or I are saying that you shouldn’t use JavaScript. Far from it! It’s simply a matter of how you wield the power of JavaScript. If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold. But if you start by building on a classic server/client model, and then enhance with JavaScript, you can have your cake and eat it too. Modern browsers get a smooth, rich experience. Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.

Aaron makes the case that, while we cannot control which browsers people will use, we can control the server environment.

Stuart takes issue with that assertion in a post called Fundamentally Disconnected. In it, he points out that the server isn’t quite the controlled environment that Aaron claims:

Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue.

It’s true enough that the server isn’t some rock-solid never-changing environment. Anyone who’s ever had to install patches or update programming languages knows this. But at least it’s one single environment …whereas the web has an overwhelming multitude of environments: one for every browser/OS/device combination.

Stuart finishes on a stirring note:

The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed.

However, he wraps up by saying that…

…the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

In a post called Missed Connections, Aaron pushes back against that last point:

The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.

While JavaScript may technically be available and consistently-implemented across most devices used to access our sites nowadays, we do not control how, when, or even if that JavaScript is ultimately executed.

Stuart responds in a post called Reconnecting (and, by the way, how great is it to see this kind of thoughtful blog-to-blog discussion going on?).

I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work.

But here’s the problem with progressively enhancing from server functionality to a rich client:

A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment.

Good point.

Now, at this juncture, I could point out that—by using progressive enhancement—you can still have the best of both worlds. Stuart has anticipated that:

It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

Ah, there’s the rub!

When I’ve extolled the virtues of progressive enhancement in the past, the pushback I most often receive is on this point. Surely it’s wasteful to build something that works on the server and then reimplement much of it on the client?

Personally, I try not to completely reinvent all the business logic that I’ve already figured out on the server, and then rewrite it all in JavaScript. I prefer to use JavaScript—and specifically Ajax—as a dumb waiter, shuffling data back and forth between the client and server, where the real complexity lies.

I also think that building in this way will take longer …at first. But then on the next project, it takes less time. And on the project after that, it takes less time again. From that perspective, it’s similar to switching from tables for layout to using CSS, or switching from building fixed-width sites to responsive design: the initial learning curve is steep, but then it gets easier over time, until it simply becomes normal.

But fundamentally, Stuart is right. Developers don’t like to violate the DRY principle: Don’t Repeat Yourself. Writing code for the server environment, and then writing very similar code for the browser—I mean browsers—is a bad code smell.

Here’s the harsh truth: building websites with progressive enhancement is not convenient.

Building a client-side web thang that requires JavaScript to work is convenient, especially if you’re using a framework like Angular or Ember. In fact, that’s the main selling point of those frameworks: developer convenience.

The trade-off is that to get that level of developer convenience, you have to sacrifice the universal reach that the web provides, and limit your audience to the browsers that can run a pre-determined level of JavaScript. Many developers are quite willing to make that trade-off.

Developer convenience is a very powerful and important force. I wish that progressive enhancement could provide the same level of developer convenience offered by Angular and Ember, but right now, it doesn’t. Instead, its benefits are focused on the end user, often at the expense of the developer.

Personally, I’m willing to take that hit. I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time. But I absolutely understand the mindset of developers who choose otherwise.

But perhaps there’s a way to cut this Gordian knot. What if you didn’t need to write your code twice? What if you could write code for the server and then run the very same code on the client?

This is the promise of isomorphic JavaScript. It’s a terrible name for a great idea.

For me, this is the most exciting aspect of Node.js:

With Node.js, a fast, stable server-side JavaScript runtime, we can now make this dream a reality. By creating the appropriate abstractions, we can write our application logic such that it runs on both the server and the client — the definition of isomorphic JavaScript.

Some big players are looking into this idea. It’s the thinking behind AirBnB’s Rendr.

Interestingly, the reason why many large sites are investigating this approach isn’t about universal access—quite often they have separate siloed sites for different device classes. Instead it’s about performance. The problem with having all of your functionality wrapped up in JavaScript on the client is that, until all of that JavaScript has loaded, the user gets absolutely nothing. Compare that to rendering an HTML document sent from the server, and the perceived performance difference is very noticeable.

Here’s the ideal situation:

  1. A browser requests a URL.
  2. The server sends HTML, which renders quickly, along with some mustard-cutting JavaScript.
  3. If the browser doesn’t cut the mustard, or JavaScript fails, fall back to full page refreshes.
  4. If the browser does cut the mustard, keep all the interaction in the client, just like a single page app.

With Node.js on the server, and JavaScript in the client, steps 3 and 4 could theoretically use the same code.
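The mustard-cutting test in step two can be tiny—something along the lines of the test popularised by the BBC News team (the script path here is a placeholder):

if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
    // The browser cuts the mustard: load the client-side app (step 4).
    var script = document.createElement('script');
    script.src = '/js/app.js';
    document.body.appendChild(script);
}
// Otherwise do nothing: links and forms keep working with full page
// refreshes (step 3).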

So why aren’t we seeing more of these holy-grail apps that achieve progressive enhancement without code duplication?

Well, partly it’s back to that question of controlling the server environment.

This is something that Nicholas Zakas tackled a year ago when he wrote about Node.js and the new web front-end. He proposes a third layer that sits between the business logic and the rendered output. By applying the idea of isomorphic JavaScript, this interface layer could be run on the server (as Node.js) or on the client (as JavaScript), while still allowing you to have the rest of your server environment running whatever programming language works for you.

It’s still early days for this kind of thinking, and there are lots of stumbling blocks—trying to write JavaScript that can be executed on both the server and the client isn’t so easy. But I’m pretty excited about where this could lead. I love the idea of building in a way that provides the performance and universal access of progressive enhancement, while also providing the developer convenience of JavaScript frameworks.
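Even a trivial sketch shows the shape of the idea—one module that doesn’t care whether it’s being loaded by Node.js or by a browser:

// A sketch: shared validation logic for server and client.
(function (exports) {
    exports.isValidUsername = function (name) {
        return /^[a-z0-9_-]{3,20}$/i.test(name);
    };
})(typeof module !== 'undefined' && module.exports
    ? module.exports           // Node.js: require('./validate')
    : (window.validate = {})); // browsers: window.validate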

In the meantime, building with progressive enhancement may have to involve a certain level of inconvenience and duplication of effort. It’s a price I’m willing to pay, but I wish I didn’t have to. And I totally understand that others aren’t willing to pay that price.

But while the mood might currently seem to be in favour of using monolithic JavaScript frameworks to build client-side apps that rely on JavaScript in browsers, I think that the tide might change if we started to see poster children for progressive enhancement.

Three years ago, when I was trying to convince clients and fellow developers that responsive design was the way to go, it was a hard sell. It reminded me of trying to sell the benefits of using web standards instead of using tables for layout. Then, just as Doug’s redesign of Wired and Mike’s redesign of ESPN helped sell the idea of CSS for layout, the Filament Group’s work on the Boston Globe made it a lot easier to sell the idea of responsive design. Then Paravel designed a responsive Microsoft homepage and the floodgates opened.

Now …who wants to do the same thing for progressive enhancement?

Wednesday, October 22nd, 2014

A question of markup

Hi,

I’m really sorry it’s taken me so long to write back to you (over a month!)—I’m really crap at email.

I’m writing to you hoping you can help me make my colleagues take html5 “seriously”. They have read your book, they know it’s the “right” thing to do, but still they write !doctype HTML and then div, div, div, div, div…

Now, if you could provide me with some answers to their “why bother?” questions, it would be really appreciated.

I have to be honest, I don’t think it’s worth spending lots of time agonising over what’s the right element to use for marking up a particular piece of content.

That said, I also think it’s lazy to just use divs and spans for everything, if a more appropriate element is available.

Paragraphs, lists, figures …these are all pretty straightforward and require almost no thought.

Deciding whether something is a section or an article, though …that’s another story. It’s not so clear. And I’m not sure it’s worth the effort. Frankly, a div might be just fine in most cases.

For example, can one assume that in the future we will be pulling content directly from websites and therefore it would be smart to tell this technology which content is the article, what are the navigation and so on?

There are some third-party tools (like Readability) that pay attention to the semantics of the elements you use, but the most important use-case is assistive technology. For tools such as screen readers, there’s a massive benefit to marking up headings, lists, and other straightforward elements, as well as some of the newer additions like nav and main.

But for many situations, a div is just fine. If you’re just grouping some stuff together that doesn’t have a thematic relation (for instance, you might be grouping them together to apply a particular style), then div works perfectly well. And if you’re marking up a piece of inline text and you’re not emphasising it, or otherwise differentiating it semantically, then a span is the right element to use.

So for most situations, I don’t think it’s worth overthinking the choice of HTML elements. A moment or two should be enough to decide which element is right. Any longer than that, and you might as well just use a div or span, and move on to other decisions.

But there’s one area where I think it’s worth spending a bit longer to decide on the right element, and that’s with forms.

When you’re marking up forms, it’s really worth making sure that you’re using the right element. Never use a span or a div if you’re just going to add style and behaviour to make it look and act like a button: use an actual button instead (not only is it the correct element to use, it’s going to save you a lot of work).
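To give an idea of how much work you’re saving yourself, here’s a rough sketch of what faking a button with a div costs you in JavaScript (submitForm here is a placeholder for whatever the button is supposed to do)—and it still won’t behave quite as well as the real thing:

var fakeButton = document.querySelector('div.button');
fakeButton.setAttribute('role', 'button');
fakeButton.setAttribute('tabindex', '0'); // make it focusable
fakeButton.addEventListener('click', submitForm);
fakeButton.addEventListener('keydown', function (event) {
    // Enter and space should activate it, just like a real button.
    if (event.keyCode === 13 || event.keyCode === 32) {
        event.preventDefault();
        submitForm();
    }
});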

Likewise, if a piece of text is labelling a form control, don’t just use a span; use the label element. Again, this is not only the most meaningful element, but it will provide plenty of practical benefit, not only to screen readers, but to all browsers.

So when it comes to forms, it’s worth sweating the details of the markup. I think it’s also worth making sure that the major chunks of your pages are correctly marked up: navigation, headings. But beyond that, don’t spend too much brain energy deciding questions like “Is this a definition list? Or a regular list?” or perhaps “Is this an aside? Or is it a footer?” Choose something that works well enough (even if that’s a div) and move on.

But if your entire document is nothing but divs and spans then you’re probably going to end up making more work for yourself when it comes to the CSS and JavaScript that you apply.

There’s a bit of a contradiction to what I’m saying here.

On the one hand, I’m saying you should usually choose the most appropriate element available because it will save you work. In other words, it’s the lazy option. Be lazy!

On the other hand, I’m saying that it’s worth taking a little time to choose the most appropriate element instead of always using a div or a span. Don’t be lazy!

I guess what I’m saying is: find a good balance. Don’t agonise over choosing appropriate HTML elements, but don’t just use divs and spans either.

Hope that helps.

Hmmm… you know, I think I might publish this response on my blog.

Cheers,

Jeremy