Wednesday, January 7th, 2015

Events in 2015

Quite a significant chunk of my time last year was spent organising dConstruct 2014. The final result was worth it, but it really took it out of me. It got kind of stressful there for a while: ticket sales weren’t going as well as previous years, so I had to dip my toes into the world of… (shudder) marketing.

That was my third year organising dConstruct, and I’m immensely proud of all three events. dConstruct 2012—also known as “the one with James Burke”—remains a highlight of my life. But—especially after the particularly draining 2014 event—I’m going to pass on organising it this year.

To be honest, I think that dConstruct 2014, the tenth one, could stand as a perfectly fine final event. It’s not like it needs to run forever, right?

Andy has been pondering this very question, but he’s up for giving dConstruct at least one more go in 2015:

As we prepare for our tenth anniversary, we’ve also been asking whether it should be our last—at least for a while. The jury is still out, and we probably won’t make any decisions till after the event.

Y’know, it could turn out that dConstruct in 2015 might reinvigorate my energy, but for now, I’m just too burned out to contemplate taking it on myself. Anyway, I know that the other Clearlefties are more than capable of putting together a fantastic event.

But dConstruct wasn’t the only event I organised last year. 2014’s Responsive Day Out was a wonderful event, and much less stressful to organise. That’s mostly because it’s a very different beast to dConstruct; much looser, smaller, and easy-going, with fewer expectations. That makes for a fun day out all ‘round.

I wasn’t even sure if there was going to be a second Responsive Day Out, but I’m really glad we did it. In fact, I think there’s room for one last go.

I’ve already started putting a line-up together (and I’m squeeing with excitement about it already!), and this will definitely be the last Responsive Day Out, but keep your calendar clear on Friday, June 19th for…

Responsive Day Out 3: The Final Breakpoint.

Tuesday, January 6th, 2015

HTTPS

Tim Berners-Lee is quite rightly worried about linkrot:

The disappearance of web material and the rotting of links is itself a major problem.

He brings up an interesting point that I hadn’t fully considered: as more and more sites migrate from HTTP to HTTPS (A Good Thing), and the W3C encourages this move, isn’t there a danger of creating even more linkrot?

…perhaps doing more damage to the web than any other change in its history.

I think that may be a bit overstated. As many others point out, almost all sites making the switch are conscientious about maintaining redirects with a 301 status code.

(There’s also a similar 308 status code that I hadn’t come across, but after a bit of investigating, that looks to be a bit of a mess.)
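
To be concrete about what that involves: a site that has moved to HTTPS keeps responding at the old HTTP addresses, but answers each request by pointing at the new one. A minimal sketch of the exchange, using example.com as a stand-in, looks like this:

GET /some/page HTTP/1.1
Host: example.com

HTTP/1.1 301 Moved Permanently
Location: https://example.com/some/page

Browsers, feed readers, and search engine crawlers all follow that Location header, so existing links keep on working.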

Anyway, the discussion does bring up some interesting points. Transport Layer Security is something that’s handled between the browser and the server—does it really need to be visible in the protocol portion of the URL? Or is that visibility a positive attribute that makes it clear that the URL is “good”?

And as more sites move to HTTPS, should browsers change their default behaviour? Right now, typing “example.com” into a browser’s address bar will cause it to automatically expand to http://example.com …shouldn’t browsers look for https://example.com first?

All good food for thought.

There’s a Google Doc out there with some advice for migrating to HTTPS. Unfortunately, the trickiest part—getting and installing certificates—is currently an owl-drawing tutorial, but hopefully it will get expanded.

If you’re looking for even more reasons why enabling TLS for your site is a good idea, look no further than the latest shenanigans from ISPs in the UK (we lost the battle for net neutrality in this country some time ago).

They can’t do that to pages served over HTTPS.

Sunday, January 4th, 2015

Soundcloudbusting

Matt wrote a great article called Ten Years of Podcasting: Fighting Human Nature (although I’m not entirely sure why he put it on Ev’s site instead of—or in addition to—his own). It’s a look back at the history of podcasting, and how it has grown out of its nerdy origins to become more of a mainstream activity. In it, he kindly gives a shout-out to Huffduffer:

…a way to make piecemeal meta-podcasts on the fly built up from random shows (here’s my feed).

Matt has written about how he uses Huffduffer before: a quick introduction to adding your Huffduffer feed to Instacast. It’s equally straightforward with Overcast, and most other iOS podcast apps.

If you use the iOS app Workflow, there’s a nifty tutorial for extracting the audio from YouTube videos, posting the audio to Dropbox, and subscribing in Huffduffer. I’m letting the side down somewhat though: Huffduffer’s API is currently read-only, but it would be so much more powerful if you could post from other apps. I need to wrap my head around OAuth to do this. I was hoping to do an OwnYourGram-style API with IndieAuth and micropub (once your Huffduffer profile has your website URL, and that URL has rel="me" links to OAuth providers like Twitter, Flickr, or Github, all the pieces should be in place), but alas IndieAuth only works on a domain or subdomain basis, so /username URLs are out.
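
If you haven’t come across rel="me" before, it’s simply a matter of adding an attribute to the links on your own site that point at your other profiles. Something along these lines (the username is a placeholder, obviously):

<a rel="me" href="https://twitter.com/username">Twitter</a>
<a rel="me" href="https://www.flickr.com/people/username">Flickr</a>
<a rel="me" href="https://github.com/username">Github</a>

IndieAuth can then use those links (together with the matching links back from those services) to verify that you really do control the site.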

Anyway, back to Matt’s article about podcasting. He writes:

Personally, I like it when new podcasts use Soundcloud for their hosting, because on a desktop computer it means I can easily dip into their archives and play random episodes, scrub to certain segments and get a feel for the show before I subscribe.

It’s true that if you’re sitting in front of a desktop computer, Soundcloud is a great way to listen to an audio file there and then. But it’s a lousy way to host a podcast.

The whole point of podcasting is that it’s time-shifted. You get to listen to the audio you want, when you want. The whole point of Soundcloud is that you listen to audio then and there. That’s great if you’re a musician, looking to make sure that people can’t make copies of your music, but it’s terrible if you’re a podcaster.

To be fair, Soundcloud’s primary audience is still musicians, rather than podcasters, so it makes sense for them to prioritise that use-case. But still, they really go out of their way to obfuscate the actual audio file. Even if the publisher has checked the right box to allow users to download the audio file, the result is a very literal interpretation of that: you can download the file, but you can’t copy the URL and paste it into, say, an app for listening later (and you certainly can’t huffduff it).

Case in point: Matt finishes his article with:

If you don’t have time to read the above, it’s available as a 14min audio file…

That audio file is hosted on Soundcloud. You can listen to it there, or you can listen to it through the embedded player on the article itself. But that’s it. You can’t take it with you. You can’t listen to it later. You can’t, for example, listen to it in your car, even though as Matt says:

…for most Americans, killing time listening to podcasts in a car is a great place.

If you can figure out a way to get at Matt’s audio file (and maybe even huffduff it), I’d be much obliged.

Like Merlin says:

Wednesday, December 31st, 2014

It’s the end of the year as we know it.

It’s the last day of the year. I won’t be going out tonight. I’m going to stay in with Jessica in our cosy home.

The general consensus is that 2014 was a crappy year for human beings on planet Earth. In actuality, and contrary to popular belief, the human race continued its upward trend of improvement in almost all areas. Less violence, less disease, fewer wars, a record-low number of air crashes, and while the disparity between the richest and the poorest has increased, the baseline level of what constitutes poverty continues to rise throughout the world.

This trend is often met with surprise, or even disbelief. Just ask Matt Ridley and Steven Pinker. We tend to over-inflate the negative and undervalue the positive. And we seem to do it more and more with each passing year (which, in itself, can be seen as part of the overall positive trend: the fact that violence and inequality outrages us now more than ever is, on balance, a good thing). It seems to be part of our modern human nature to allow the bad to overwhelm the good in its importance.

Take my past year, for example. There was so much that was good. It was a good year for Clearleft and I travelled to marvellous places (Tel Aviv, Munich, Seattle, Austin, San Diego, Riga, Freiburg, Bologna, Florida, and more). I ate wonderful food. I read. I wrote. I listened. I spoke. I attended some workshops. I ran some workshops. I learned. I taught. I went to some great events. I organised Responsive Day Out 2 and dConstruct. I even wrote the occasional bit of code.

But despite all of that, 2014 is a year that feels dominated by death.

It started at the beginning of the year with the death of Jessica’s beloved Oma. The only positive spin I can put on it is that she had a long life, and she died surrounded by her family (Jessica included). But it was still a horrible event.

For the first half of the year, the web community was united behind Eric as he went through the unimaginable. Then, in June, Rebecca died. And the web community was united in sorrow. It was such an outrage against all that is good in this world.

I visited Eric that day. I tried to convey how much the people of the web were feeling for him. I couldn’t possibly convey it, but I had to try. I offered what comfort I could, but some situations are so far beyond normalcy that literally nothing can be done.

That death, the death of a child …there’s something so wrong, so obscene about it.

One month later, Chloe killed herself.

I miss her. I miss her so much.

So I understand why, despite the upward trends in human achievement, despite all the positive events of the last twelve months, 2014 feels like a year of dread and grief. I understand why so many people are happy to see the back of 2014. Good riddance, right?

But I still don’t want to let the bad—and boy, was it ever bad—crush the good. I’m seeing out the year as I mean to go on: eating good food, drinking good wine, reading, writing, and being alive.

It’s the last day of the year. I won’t be going out tonight. I’m going to stay in with Jessica in our cosy home.

Tuesday, December 16th, 2014

The Session trad tune machine

Most pundits call it “the Internet of Things” but there’s another phrase from Andy Huntington that I first heard from Russell Davies: “the Geocities of Things.” I like that.

I’ve never had much exposure to this world of hacking electronics. I remember getting excited about the possibilities at a Brighton BarCamp back in 2008:

I now have my own little arduino kit, a bread board and a lucky bag of LEDs. Alas, I know next to nothing about basic electronics so I’m really going to have to brush up on this stuff.

I never did do any brushing up. But that all changed last week.

Seb is doing a new two-day workshop. He doesn’t call it Internet Of Things. He doesn’t call it Geocities Of Things. He calls it Stuff That Talks To The Interwebs, or STTTTI, or ST4I. He needed some guinea pigs to test his workshop material on, so Clearleft volunteered as tribute.

In short, it was great! And this time, I didn’t stop hacking when I got home.

First off, every workshop attendee gets a hand-picked box of goodies to play with and keep: an arduino mega, a wifi shield, sensors, screens, motors, lights, you name it. That’s the hardware side of things. There are also code samples and libraries that Seb has prepared in advance.

Getting ready to workshop with @Seb_ly. Unwrapping some Christmas goodies from Santa @Seb_ly.

Now, remember, I lack even the most basic knowledge of electronics, but after two days of fiddling with this stuff, it started to click.

Blinkenlights. Hello, little fella.

On the first workshop day, we all did the same exercises: connecting things up, getting them to talk to the internet, that kind of thing. For the second workshop day, Seb encouraged us to think about what we might each like to build.

I was quite taken with the ability of the piezo buzzer to play rudimentary music. I started to wonder if there was a way to hook it up to The Session and have it play the latest jigs, reels, and hornpipes that have been submitted to the site in ABC notation. A little bit of googling revealed that someone had already taken a stab at writing an ABC parser for arduino. I didn’t end up using that code, but it convinced me that what I was trying to do wasn’t crazy.
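
If you haven’t come across ABC notation before, it’s a plain-text format for writing down tunes: a handful of header lines followed by the notes themselves. A made-up fragment (not a real tune) looks something like this:

X: 1
T: An Example Jig
R: jig
M: 6/8
L: 1/8
K: Dmaj
|: DFA AFA | Bcd AFD | DFA ABc | ded cBA :|

That plain-text simplicity is exactly what made squeezing a player onto a microcontroller seem feasible.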

So I built a machine that plays Irish traditional music from the internet.

Playing with hardware and software, making things that go beep in the night.

The hardware has a piezo buzzer, an “on” button, an “off” button, a knob for controlling the speed of the tune, and an obligatory LED.

The software has a countdown timer that polls a URL every minute or so. The URL is http://tune.adactio.com/. That in turn uses The Session’s read-only API to grab the latest tune activity and then get the ABC notation for whichever tune is at the top of that list. Then it does some cleaning up—removing some of the more advanced ABC stuff—and outputs a single line of notes to be played. I’m fudging things a bit: the device has the range of a tin whistle, and expects tunes to be in the key of D or G, but seeing as that’s at least 90% of Irish traditional music, it’s good enough.
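
My actual code is messier and more specific than this, but a stripped-down sketch of the idea, with hard-coded placeholder notes standing in for the parsed ABC, and without the buttons, the knob, or any error handling, goes roughly like this:

#include <SPI.h>
#include <WiFi.h>

const int buzzerPin = 8;                    // placeholder pin for the piezo
const unsigned long pollInterval = 60000;   // roughly once a minute

char ssid[] = "mynetwork";                  // placeholder network details
char pass[] = "mypassword";
char server[] = "tune.adactio.com";

WiFiClient client;

// Stand-in for a parsed tune: frequencies (in Hz) to feed to tone().
// The real thing fills this in from the ABC fetched over the wire.
int notes[] = { 294, 330, 370, 392, 440, 494, 554, 587 };
const int noteCount = sizeof(notes) / sizeof(notes[0]);

void fetchLatestTune() {
  // Ask tune.adactio.com for the latest tune.
  if (client.connect(server, 80)) {
    client.println("GET / HTTP/1.1");
    client.println("Host: tune.adactio.com");
    client.println("Connection: close");
    client.println();
    // Read the response; the real sketch parses these bytes into notes[].
    while (client.connected() || client.available()) {
      if (client.available()) {
        client.read();
      }
    }
    client.stop();
  }
}

void playTune(int noteLength) {
  for (int i = 0; i < noteCount; i++) {
    tone(buzzerPin, notes[i], noteLength);
    delay(noteLength + 20);                 // a small gap between notes
  }
  noTone(buzzerPin);
}

void setup() {
  pinMode(buzzerPin, OUTPUT);
  WiFi.begin(ssid, pass);
}

void loop() {
  fetchLatestTune();
  playTune(250);                            // the knob would adjust this value
  delay(pollInterval);
}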

Whenever there’s a new tune, it plays it. Or you can hit the satisfying “on” button to manually play back the latest tune (and yes, you can hit the equally satisfying “off” button to stop it). Being able to adjust the playback speed with a twiddly knob turns out to be particularly handy if you decide to learn the tune.

I added one more lo-fi modification. I rolled up a piece of paper and placed it over the piezo buzzer to amplify the sound. It works surprisingly well. It’s loud!

Rolling my own speaker cone, quite literally.

I’ll keep tinkering with it. It’s fun. I realise I’m coming to this whole hardware-hacking thing very late, but I get it now: it really is similar to the feeling you got when you first figured out how to make a web page back in the days of Geocities. I’ve built something that’s completely pointless for most people, but has special meaning for me. It’s ugly, and it’s inefficient, but it works. And that’s a great feeling.

(P.S. Seb will be running his workshop again on the 3rd and 4th of February, and there will be a limited number of early-bird tickets available for one hour, between 11am and midday this Thursday. I highly recommend you grab one.)

Monday, December 8th, 2014

Responsible Web Components

Bruce has written a great article called On the accessibility of web components. Again. In it, he takes issue with the tone of a recent presentation on web components, wherein Dimitri Glazkov declares:

Custom elements is really neat. It basically says, “HTML it’s been a pleasure”.

Bruce paraphrases this as:

Bye-bye HTML; you weren’t useful enough. Hello, brave new world of custom elements.

Like Bruce, I’m worried about this year-zero thinking. First of all, I think it’s self-defeating. In my experience, the web technologies that succeed are the ones that build upon what already exists, rather than sweeping everything aside. Evolution, not revolution.

Secondly, web components—or more specifically, custom elements—already allow us to extend existing HTML elements. That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.

But, as Bruce asks:

Snarking aside, why do so few people talk about extending existing HTML elements with web components? Why’s all the talk about brand new custom elements? I don’t know.

Patrick leaves a comment with his answer:

The issue of not extending existing HTML elements is exactly the same that we’ve seen all this time, even before web components: developers who are tip-top JavaScripters, who already plan on doing all the visual feedback/interactions (for mouse users like themselves) in script anyway themselves, so they just opt for the most neutral starting point…a div or a span. Web components now simply gives the option of then sweeping all that non-semantic junk under a nice, self-contained rug.

That’s a depressing thought. But it might very well be true.

Stuart also comments:

Why aren’t web components required to be created with is="some-component" on an existing HTML element? This seems like an obvious approach; sure, someone who wants to make something meaningless will just do <div is=my-thing> or (worse) <body is=my-thing> but it would provide a pretty heavy hint that you’re supposed to be doing things The Right Way, and you’d get basic accessibility stuff thrown in for free.

That’s a good question. After all, writing <new-shiny></new-shiny> is basically the same as <span is="new-shiny"></span>. It might not make much of a difference in the case of a span or div, but it could make an enormous difference in the case of, say, form elements.

Take a look at IBM’s library of web components. They’re well-written and they look good, but time and time again, they create new custom elements instead of extending existing HTML.

Although, as Bruce points out:

Of course, not every new element you’ll want to make can extend an existing HTML element.

But I still think that the majority of web components could, and should, extend existing elements. Addy Osmani has put together some design principles for web components and Steve Faulkner has created a handy punch-list for web components, but I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”

Rather than just complain about this kind of thing, I figured I’d try my hand at putting it into practice…

Dave recently made a really nice web component for playing back podcast audio files. I could imagine using something like this on Huffduffer. It’s called podcast-player and you use it thusly:

<podcast-player src="my.mp3"></podcast-player>

One option for providing fallback content would be to include it within the custom element tags:

<podcast-player src="my.mp3">
    <a href="my.mp3">Listen</a>
</podcast-player>

That would require minimal changes to Dave’s code. I’d just need to make sure that the fallback content within podcast-player elements is removed in supporting browsers.

I forked Dave’s code to try out another idea. I figured that if the starting point was a regular link to the audio file, that would also be a way of providing fallback for browsers that don’t cut the web component mustard:

<a href="my.mp3" is="podcast-player">Listen</a>

It required surprisingly few changes to the code. I needed to remove the fallback content (that “Listen” text), and I needed to prevent the default behaviour (following the href), but it was fairly straightforward.

However, I’m sure it could be improved in one of two ways:

  1. I should probably supply an ARIA role to the extended link. I’m not sure what would be the right one, though …menu or menubar perhaps?
  2. Perhaps a link isn’t the right element to extend. Really I should be extending an audio element (which itself allows for fallback content). When I tried that, I found it too hard to overcome the default browser rules for hiding anything between the opening and closing tags. But you’re smarter than me, so I bet you could create <audio is="podcast-player">, something along the lines of the sketch below.
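
A rough and untested sketch of what I mean, reusing the same fallback link as before:

<audio is="podcast-player" src="my.mp3" controls>
    <a href="my.mp3">Listen</a>
</audio>

That way a browser with no custom element support could fall back to its native audio controls, and a browser with no audio support at all would fall back to the plain link.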

Fork the code and have at it.

Thursday, December 4th, 2014

Mindcraft

As something of a science geek, I’m a big fan of the work of the Wellcome Trust:

We support the brightest minds in biomedical research and the medical humanities. Our breadth of support includes public engagement, education and the application of research to improve health.

I was very excited when Clearleft had the opportunity to work with them—we redesigned the Wellcome Library a while back. That was a fun responsive project, and an early use of a pattern portfolio as the deliverable.

We’ve been working with them on some other projects since then. We helped out with Mosaic, their terrific magazine site. I really enjoyed popping in to their fantastic building to chat with their talented designers.

The most recent Clearleft/Wellcome collaboration is something called Mindcraft. This started as a completely open-ended project—no one was quite sure what form the finished result would take. Over time it developed into a narrative-based series of historical events brought to life with browser technologies.

I didn’t work on this project but I loved watching it come together. The source material made for an interesting work environment.

Crazy wall. Maps and legends.

Graham and Danielle did the front-end development, bringing Mikey’s designs to life, once Rich and Ben figured out the flow (all overseen by Jess).

The press release for Mindcraft describes it as “immersive”, which immediately sets alarm bells ringing in expectation of big, scrolljacking pages …and to be honest, Mindcraft does have elements of that. It’s primarily intended to be visited on a large screen with a fast connection (although it’ll work on any size of screen). But I think it manages to strike a pretty healthy balance of performance and “richness.” It certainly doesn’t feel gratuitous. The use of sound, imagery, and interaction is all in service to the story.

And boy, what a story!

Mindcraft explores a century of madness, murder and mental healing, from the arrival in Paris of Franz Anton Mesmer with his theories of ‘animal magnetism’ to the therapeutic power of hypnotism used by Freud.

I suggest you put on some headphones, make your browser window fullscreen, and start your journey.

It’s creepy, atmospheric, entertaining, and educational, all at the same time. I really like it. And I’m not just saying that because of Clearleft’s involvement. Like I said, I’m a science geek.

Wednesday, December 3rd, 2014

Commons People

Creative Commons licences have a variety of attributes that can be combined:

  • No-derivatives: the work can be reused, but not altered.
  • Attribution: the work must be credited.
  • Share-alike: any derivatives must share the same licence.
  • Non-commercial: the work can be used, but not for commercial purposes.

That last one is important. If you don’t attach a non-commercial licence to your work, then your work can be resold for profit (it might be remixed first, or it might have to include your name—that all depends on what other attributes you’ve included in the licence).

If you’re not comfortable with anyone reselling your work, you should definitely choose a non-commercial licence.

Flickr is planning to sell canvas prints of photos that have been licensed under Creative Commons licences that don’t include the non-commercial clause. They are perfectly within their rights to do this—this is exactly what the licence allows—but some people are very upset about it.

Jeffrey says it’s short-sighted and sucky because it violates the spirit in which the photos were originally licensed. I understand that feeling, but that’s simply not the way that the licences work. If you want to be able to say “It’s okay for some people to use my work for profit, but it’s not okay for others”, then you need to apply a more restrictive licence (like copyright, or Creative Commons Non-commercial) and then negotiate on a case-by-case basis for each usage.

But if you apply a licence that allows commercial usage, you must accept that there will be commercial usages that you aren’t comfortable with. Frankly, Flickr selling canvas prints of your photos is far from a worst-case scenario.

I license my photos under a Creative Commons Attribution licence. That means they can be used anywhere—including being resold for profit—as long as I’m credited as the photographer. Because of that, my photos have shown up in all sorts of great places: food blogs, Wikipedia, travel guides, newspapers. But they’ve also shown up in some awful places, like Techcrunch. I might not like that, but it’s no good me complaining that an organisation (even one whose values I disagree with) is using my work exactly as the licence permits.

Before allowing commercial use of your creative works, you should ask “What’s the worst that could happen?” The worst that could happen includes scenarios like white supremacists, misogynists, or whacko conspiracy theorists using your work on their websites, newsletters, and billboards (with your name included if you’ve used an attribution licence). If you aren’t willing to live with that, do not allow commercial use of your work.

When I chose to apply a Creative Commons Attribution licence to my photographs, it was because I decided I could live with those worst-case scenarios. I decided that the potential positives outweighed the potential negatives. I stand by that decision. My photos might appear on a mudsucking site like Techcrunch, or get sold as canvas prints to make money for Flickr, but I’m willing to accept those usages in order to allow others to freely use my photos.

Some people have remarked that this move by Flickr to sell photos for profit will make people think twice about allowing commercial use of their work. To that I say …good! It has become clear that some people haven’t put enough thought into their licensing choices—they never asked “What’s the worst that could happen?”

And let’s be clear here: this isn’t some kind of bait’n’switch by Flickr. It’s not like liberal Creative Commons licensing is the default setting for photos hosted on that site. The default setting is copyright, all rights reserved. You have to actively choose a more liberal licence.

So I’m trying to figure out how it ended up that people chose the wrong licence for their photos. Because I want this to be perfectly clear: if you chose a licence that allows for commercial usage of your photos, but you’re now upset that a company is making commercial usage of your photos, you chose the wrong licence.

Perhaps the licence-choosing interface could have been clearer. Instead of simply saying “here’s what attribution means” or “here’s what non-commercial means”, perhaps it should also include lists of pros and cons: “here’s some of the uses you’ll be enabling”, but also “here’s the worst that could happen.”

Jen suggests a new Creative Commons licence that essentially inverts the current no-derivatives licence; this would be a “derivative works only” licence. But unfortunately it sounds a bit too much like a read-my-mind licence:

What if I want to allow someone to use a photo in a conference slide deck, even if they are paid to present, but I don’t want to allow a company that sells stock photos to snatch up my photo and resell it?

Jen’s post is entitled I Don’t Want “Creative Commons By” To Mean You Can Rip Me Off …but that’s exactly what a Creative Commons licence without a non-commercial clause can mean. Of course, it’s not the only usage that such a licence allows (it allows many, many positive scenarios), but it’s no good pretending it were otherwise. If you’re not comfortable with that use-case, don’t enable it. Personally, I’m okay with that use-case because I believe it is offset by the more positive usages.

And that’s an important point: this is a personal decision, and not one to be taken lightly. Personally, I’m not a professional or even amateur photographer, so commercial uses of my photos are fine with me. Most professional photographers wouldn’t dream of allowing commercial use of their photos without payment, and rightly so. But even for non-professionals like myself, there are implications to allowing commercial use (one of those implications being that there will be usages you won’t necessarily be happy about).

So, going back to my earlier question, does the licence-choosing interface on Flickr make the implications of your choice clear?

Here’s the page for applying licences. You get to it by going to “Settings”, then “Privacy and Permissions”, then under “Defaults for new uploads”, the setting “What license will your content have”.

On that page, there’s a heading “Which license is right for you?” That has three hyperlinks:

  1. A page on Creative Commons about the licences,
  2. Frequently Asked Questions,
  3. A page of issues specifically related to images.

In that list of Frequently Asked Questions, there’s What things should I think about before I apply a Creative Commons license? and How should I decide which license to choose? There’s some good advice in there (like when in doubt, talk to a lawyer), but at no point does it suggest that you should ask yourself “What’s the worst that could happen?”

So it certainly seems that Flickr could be doing a better job of making the consequences of your licensing choice clearer. That might have the effect of making it a scarier choice, and it might put some people off using Creative Commons licences. But I don’t think that’s a bad thing. I would much rather that people made an informed decision.

When I chose to apply a Creative Commons Attribution licence to my photos, I did not make the decision lightly. I assumed that others who made the same choice also understood the consequences of that decision. Now I’m not so sure. Now I think that some people made uninformed licensing decisions in the past, which explains why they’re upset now (and I’m not blaming them for making the wrong decision—Flickr, and even Creative Commons, could have done a better job of providing relevant, easily understandable information).

But this is one Internet Outrage train that I won’t be climbing aboard. Alas, that means I must now be considered a corporate shill who’s sold out to The Man.

Pointing out that a particular Creative Commons licence allows the Ku Klux Klan to use your work isn’t the same as defending the Ku Klux Klan.

Pointing out that a particular Creative Commons licence allows a hardcore porn film to use your music isn’t the same as defending hardcore porn.

Pointing out that a particular Creative Commons licence allows Yahoo to flog canvas prints of your photos isn’t the same as defending Yahoo.

Tuesday, November 25th, 2014

Interstelling

Jessica and I entered the basement of The Dukes at Komedia last weekend to listen to Sarah and her band Spacedog provide live musical accompaniment to short sci-fi films from the end of the nineteenth and start of the twentieth centuries.

It was part of the Cine City festival, which is still going on here in Brighton—Spacedog will also be accompanying a performance of John Wyndham’s The Midwich Cuckoos, and there’s going to be a screening of François Truffaut’s brilliant film version of Ray Bradbury’s Fahrenheit 451 in the atmospheric surroundings of Brighton’s former reference library. I might try to get along to that, although there’s a good chance that I might cry at my favourite scene. Gets me every time.

Those 100-year old sci-fi shorts featured familiar themes—time travel, monsters, expeditions to space. I was reminded of a recent gathering in San Francisco with some of my nerdiest of nerdy friends, where we discussed which decade might qualify as the golden age of science fiction cinema. The 1980s certainly punched above their weight—1982 and 1985 were particularly good years—but I also said that I think we’re having a bit of a sci-fi cinematic golden age right now. This year alone we’ve had Edge Of Tomorrow, Guardians Of The Galaxy, and Interstellar.

Ah, Interstellar!

If you haven’t seen it yet, now would be a good time to stop reading. Imagine that I’ve written the word “spoilers” in all-caps, followed by many many line breaks before continuing.

Ten days before we watched Spacedog accompanying silent black and white movies in a tiny basement theatre, Jessica and I watched Interstellar on the largest screen we could get to. We were in Seattle, which meant we had the pleasure of experiencing the film projected in 70mm IMAX at the Pacific Science Center, right by the Space Needle.

I really, really liked it. Or, at least, I’ve now decided that I really, really liked it. I wasn’t sure when I first left the cinema. There were many things that bothered me, and those things battled against the many, many things that I really enjoyed. But having thought about it more—and, boy, does this film encourage thought and discussion—I’ve been able to resolve quite a few of the issues I was having with the film.

I hate to admit that most of my initial questions were on the science side of things. I wish I could’ve switched off that part of my brain.

There’s an apocryphal story about an actor asking “Where’s the light coming from?”, and being told “Same place as the music.” I distinctly remember thinking that very same question during Interstellar. The first planetfall of the film lands the actors and the audience on a world in orbit around a black hole. So where’s the light coming from?

The answer turns out to be that the light is coming from the accretion disk of that black hole.

But wouldn’t the radiation from the black hole instantly fry any puny humans that approach it? Wouldn’t the planet be ripped apart by the gravitational tides?

Not if it’s a rapidly-spinning supermassive black hole with a “gentle” singularity.

These are nit-picky questions that I wish I wasn’t thinking of. But I like the fact that there are answers to those questions. It’s just that I need to seek out those answers outside the context of the movie—I should probably read Kip Thorne’s book. The movie gives hints at resolving those questions—there’s just one mention of the gentle singularity—but it’s got other priorities: narrative, plot, emotion.

Still, I wish that Interstellar had managed to answer my questions while the film was still happening. This is something that Inception managed brilliantly: for all its twistiness, you always know exactly what’s going on, which is no mean feat. I’m hoping and expecting that Interstellar will reward repeated viewings. I’m certainly really looking forward to seeing it again.

In the meantime, I’ll content myself with re-watching Inception, which makes a fascinating companion piece to Interstellar. Both films deal with time and gravity as malleable, almost malevolent forces. But whereas Cobb travels as far inward as it is possible for a human to go, Coop travels as far outward as it is possible for our species to go.

Interstellar is kind of a mess. There’s plenty of sub-par dialogue and strange narrative choices. But I can readily forgive all that because of the sheer ambition and imagination on display. I’m not just talking about the imagination and ambition of the film-makers—I’m talking about the ambition and imagination of the human race.

That’s at the heart of the film, and it’s a message I can readily get behind.

Before we even get into space, we’re shown a future that, by any reasonable definition, would be considered a dystopia. The human race has been reduced to a small fraction of its former population, technological knowledge has been lost, and the planet is dying. And yet, where this would normally be the perfect storm required to show roving bands of road warriors pillaging their way across the dusty landscape, here we get an agrarian society with no hint of violence. The nightmare scenario is not that the human race is wiped out through savagery, but that the human race dies out through a lack of ambition and imagination.

Religion isn’t mentioned once in this future, but Interstellar does feature a deus ex machina in the shape of a wormhole that saves the day for the human race. I really like the fact that this deus ex machina isn’t something that’s revealed at the end of the movie—it’s revealed very early on. The whole plot turns out to be a glorious mash-up of two paradoxes: the bootstrap paradox and the twin paradox.

The end result feels like a mixture of two different works by Arthur C. Clarke: The Songs Of Distant Earth and 2001: A Space Odyssey.

2001 is the more obvious work to compare it to, and the film readily invites that comparison. Many reviewers have been quick to point out that Interstellar doesn’t reach the same heights as Kubrick’s 2001. That’s a fair point. But then again, I’m not sure that any film can ever reach the bar set by 2001. I honestly think it’s as close to perfect as any film has ever come.

But I think it’s worth pointing out that when 2001 was released, it was not greeted with universal critical acclaim. Quite the opposite. Many reviewers found it tedious, cold, and baffling. It divided opinion greatly …much like Interstellar is doing now.

In some ways, Interstellar offers a direct challenge to 2001—what if mankind’s uplifting is not caused by benevolent alien beings, but by the distant descendants of the human race?

This is revealed as a plot twist, but it was pretty clearly signposted from early in the film. So, not much of a plot twist then, right?

Well, maybe not. What if Coop’s hypothesis—that the wormhole is the creation of future humans—isn’t entirely correct? He isn’t the only one who crosses the event horizon. He is accompanied by the robot TARS. In the end, the human race is saved by the combination of Coop the human’s connection to his daughter, and the analysis carried out by TARS. Perhaps what we’re witnessing there is a glimpse of the true future for our species: human-machine collaboration. After all, if humanity is going to transcend into a fifth-dimensional species at some future point, it’s unlikely to happen through biology alone. But if you combine the best of the biological—a parent’s love for their child—with the best of technology, then perhaps our post-human future becomes not only plausible, but inevitable.

Deus ex machina.

Thinking about the future of the species in this co-operative way helps alleviate the uncomfortable feeling I had that Interstellar was promoting a kind of Manifest Destiny for the human race …although I’m not sure that I’m any more comfortable with that being replaced by a benevolent technological determinism.

Tuesday, November 18th, 2014

Webiness

John Gruber quite rightly skewers a paywall-protected “sky is falling” piece in the Wall Street Journal called The Web Is Dying; Apps Are Killing It, writing that native apps are part of the web:

They’re just superior clients to open Internet services.

This is something I wrote about earlier this year:

There’s a whole category of native apps that could just as easily be described as “artisanal web browsers” (and if someone wants to write a browser extension that replaces every mention of “native app” with “artisanal web browser” that would be just peachy).

Instagram’s native app is a web browser.

Facebook’s native app is a web browser.

Twitter’s native app is a web browser.

In that same piece, I try to define exactly what the web is:

Well, the unsexy definition I’ve used in the past is that the web consists of files (e.g. HTML, CSS, JavaScript), accessible at URLs, delivered over HTTP.

John also gives a definition of what the web is:

There are two big four-letter “H” acronyms that powered the web from the beginning: HTML (client), and HTTP (networking protocol). Native apps are just an alternative to HTML running in a web browser (and many native apps still use HTML web views embedded within the apps themselves to render parts of their interface). Almost all native apps use HTTP/S for networking, though.

Notice the difference? Whereas John talks about two things that define the web (HTTP/S and HTML), I talk about three: HTTP(S), HTML, and URLs:

But to be honest, I don’t think that the Hypertext Transfer Protocol is the important part of the web; it’s the URLs that really matter. It’s the addressability of the files that’s the killer app of the web in my opinion.

URLs are what give the web its reach, and that’s what’s still missing from native apps.

But John’s fundamental point that native apps and the web are not fundamentally opposed? I completely agree with that. They are complementary. Irakli Nadareishvili wrote about this false dichotomy recently in a post called Responsive Web Design or Native Mobile Apps?:

Native mobile applications are not going anywhere and the future of all websites is to be responsive. These two assertions are not mutually exclusive, they are complementary – don’t create apps when what you actually need is a website; but also don’t pretend webapps can completely replace native applications, because they can’t.

It’s also worth remembering that even if you’re using a native app—like, say, Facebook or Twitter—you’re still going to spend a lot of time following links and reading stuff that’s rendered in the app, but that lives out on the world wide web. And the reason why those apps can access those resources is because those resources have URLs.

URLs are not an implementation detail. The URI is the thing.