Archive: December, 2013




Saturday, December 28th, 2013

In dependence

Jason Kottke wrote an end-of-the-year piece for the Nieman Journalism Lab called The blog is dead, long live the blog:

Sometime in the past few years, the blog died. In 2014, people will finally notice.

But the second part of the article’s title is as important as the first:

Over the past 16 years, the blog format has evolved, had social grafted onto it, and mutated into Facebook, Twitter, and Pinterest and those new species have now taken over.

Jason’s piece prompted some soul-searching. John Scalzi wrote The Death of the Blog, Again, Again. Colin Devroe wrote The blog isn’t dead. It is just sleeping.:

The advantages to using Facebook should be brought out onto the web. There should be no real disadvantage to using one platform or another. In fact, there should be an advantage to using your own platform rather than those of a startup that could go out of business at any moment.

That’s a common thread amongst a number of the responses: the specific medium of the blog may be waning, but the idea of independent publishing still burns brightly. Ben Werdmuller sums that feeling up, saying the blog might be dying, but the web’s about to fight back:

If you buy the idea that articles aren’t dying - and anecdotally, I know I read as much as I ever did online - then a blog is simply the delivery mechanism. It’s fine for that to die. Even welcome. In some ways, that death is due to the ease of use of the newer, siloed sites, and makes the way for new, different kinds of content consumption; innovation in delivery.

Kartik Prabhu writes about The Blogging Dead:

In any case, let’s not ‘blog’, let’s just write—on our own personal place on the Web.

In fact, Jason’s article was preceded by a lovely post from Jeffrey called simply This is a website:

Me, I regret the day I started calling what I do here “blogging.”

I know how he feels. I still call what I write here my “journal” rather than my “blog”. Call it what you like, publishing on your own website can be a very powerful move, now more than ever:

Blogging may have been a fad, a semi-comic emblem of a time, like CB Radio and disco dancing, but independent writing and publishing is not. Sharing ideas and passions on the only free medium the world has known is not a fad or joke.

One of the most overused buzzwords of today’s startup scene is the word “disruption”. Young tech upstarts like to proclaim how they’re going to “disrupt” some incumbent industry of the old world and sweep it away in a bright new networked way. But on today’s web of monolithic roach-motel silos like Facebook and Twitter, I can’t imagine a more disruptive act than choosing to publish on your own website.

It’s not a new idea. Far from it. Jeffrey launched a project called Independent’s Day in 2001:

No one is in control of this space. No one can tell you how to design it, how much to design it, when to “dial it down.” No one will hold your hand and structure it for you. No one will create the content for you.

Those words are twelve years old, but they sound pretty damn disruptive to me today.

Frank is planting his flag in his own sand with his minifesto Homesteading 2014:

I’m returning to a personal site, which flips everything on its head. Rather than teasing things apart into silos, I can fuse together different kinds of content.

So, I’m doubling down on my personal site in 2014.

He is not alone. Many of us are feeling an increasing unease, even disgust, with the sanitised, shrink-wrapped, handholding platforms that make it oh-so-easy to get your thoughts out there …on their terms …for their profit.

Of course independent publishing won’t be easy. Facebook, Pinterest, Medium, Twitter, and Tumblr are all quicker, easier, more seductive. But I take great inspiration from the work being done at Indie Web Camp. Little, simple formats and protocols—like webmentions—can have a powerful effect in aggregate. Small pieces, loosely joined.

Mind you, it’s worth remembering that not everybody wants to be independent. Tyler Fisher wrote about this on Medium—“because it is easier and hopefully more people will see it”—in a piece called I’m 22 years old and what is this.:

Fighting to get the open web back sounds great. But I don’t know what that means.

If we don’t care about how the web works, how can we understand why it is important to own our data? Why would we try if what we can do now is so easy?

Therein lies the rub. Publishing on your own website is still just too damn geeky. The siren-call of the silos is backed up with genuinely powerful, easy to use, well-designed tools. I don’t know if independent publishing can ever compete with that.

In all likelihood, the independent web will never be able to match the power and reach of the silos. But that won’t stop me (and others) from owning our own words. If nothing else, we can at least demonstrate that the independent path is an option—even if that option requires more effort.

Like Tyler Fisher, Josh Miller describes his experience with a web of silos—the only web he has ever known:

Some folks are adamant that you should own your own words when you publish online. For example, to explain why he doesn’t use services like Quora, Branch, and Google-Plus, Dave Winer says: “I’m not going to put my writing in spaces that I have no control over. I’m tired of playing the hamster.”

As someone who went through puberty with social media, it is hard to relate to this sentiment. I have only ever “leased,” from the likes of LiveJournal (middle school), Myspace (middle school), Facebook (high school), and Twitter (college).

There’s a wonderful response from Gina Trapani:

For me, publishing on a platform I have some ownership and control over is a matter of future-proofing my work. If I’m going to spend time making something I really care about on the web—even if it’s a tweet, brevity doesn’t mean it’s not meaningful—I don’t want to do it somewhere that will make it inaccessible after a certain amount of time, or somewhere that might go away, get acquired, or change unrecognizably.

This! This is why owning your own words matters.

I have a horrible feeling that many of the people publishing with the easy-to-use tools of today’s social networks don’t realise how fragile their repository is, not least because everyone keeps repeating the lie that “the internet never forgets.”

Stephanie Georgopulos wrote a beautiful piece called Blogging Ourselves to Live—published on Medium, alas—describing the power of that lie:

We were told — warned, even — that what we put on the internet would be forever; that we should think very carefully about what we commit to the digital page. And a lot of us did. We put thought into it, we put heart into, we wrote our truths. We let our real lives bleed onto the page, onto the internet, onto the blog. We were told, “Once you put this here, it will remain forever.” And we acted accordingly.

Sadly, when you uncover the deceit of that lie, it is usually through bitter experience:

Occasionally I become consumed by the idea that I can somehow find — somehow restore — all the droppings I’ve left on the internet over the last two decades. I want back the IMed conversations that caused tears to roll from my eyes, I want back the alt girl e-zines I subscribed to, wrote poetry for. I fill out AOL’s Reset Password form and send new passwords to email addresses I don’t own anymore; I use the Way Back Machine to search for the diary I kept in 1999. I am hunting for tracks of my former self so I can take a glimpse or kill it or I don’t know what. The end result is always the same, of course; these things are gone, they have been wiped away, they do not exist.

I’m going to continue to publish here on my own website, journal, blog, or whatever you want to call it. It’s still possible that I might lose everything, but I’d rather take on that responsibility myself than place my trust in “the cloud” of someone else’s server. I’m owning my own words.

The problem is …I publish more than words. I publish pictures too, even the occasional video. I have the originals on my hard drive, but I’m very, very uncomfortable with the online home for my photos being in the hands of Yahoo, the same company that felt no compunction about destroying the cultural wealth of GeoCities.

Flickr has been a magnificent shining example of the web done right, but it is in an inevitable downward spiral. There are some good people still left there, but they are in the minority and I fear that they cannot fight off the douchetastic consultants of growth-hacking who have been called in to save the patient by killing it.

I’ve noticed that I’m taking fewer and fewer photos these days. I think that, subconsciously, I’ve started to feel that publishing my photos to a third-party site—even one as historically excellent as Flickr—is a fragile, hollow experience.

In 2014, I hope to figure out a straightforward way to publish my own photos to my own website …while still allowing third-party sites to have a copy. It won’t be easy—binary formats are trickier to work with than text—but I want that feeling of independence.

I hope that you too will be publishing on your own website in 2014.

Windows of New York | A weekly illustrated atlas

Lovely little graphics inspired by New York architecture.

The Origin of Tweet

A fascinating bit of linguistic spelunking from Craig Hockenberry, in which he tracks down the earliest usage of “tweet” as a verb relating to Twitter.

Basically, it’s all Blaine’s fault.

Friday, December 27th, 2013

The Console Living Room

Here’s a nice Christmas gift from Jason and the archinauts at the Internet Archive: tons of games for living room consoles of the early ’80s, all playable in your browser, thanks to emulation in JavaScript.

Thursday, December 26th, 2013

That was my jam

Those lovely people at the jam factory have reprised their Jam Odyssey for 2013—this time it’s an underwater dive …through jam.

Looking back through my jams, I thought that they made for nice little snapshots of the year.

  1. Meat Abstract by Therapy? …because apparently I had a dream about Therapy?
  2. Jubilee Street by Nick Cave And The Bad Seeds …because I had just been to the gig/rehearsal that Jessica earned us tickets to. That evening was definitely a musical highlight of the year.
  3. Atlanta Lie Low by Robert Forster …because I was in Atlanta for An Event Apart.
  4. Larsen B by British Sea Power …because I had just seen them play a gig (on their Brighton home turf) and this was the song they left us with.
  5. Tramp The Dirt Down by Elvis Costello …because it was either this or Ding Dong, The Witch Is Dead! (or maybe Margaret In A Guillotine). I had previously “jammed” it in August 2012, saying “Elvis Costello (Davy Spillane, Donal Lunny, and Steve Wickham) in 1989. Still waiting.”
  6. It’s A Shame About Ray by The Lemonheads …because Ray Harryhausen died.
  7. Summertime In England by Van Morrison …because it was a glorious Summer’s day and this was playing on the stereo in the coffee shop I popped into for my morning flat white.
  8. Spaceteam by 100 Robots …because Jim borrowed my space helmet for the video.
  9. Higgs Boson Blues by Nick Cave And The Bad Seeds …because this was stuck in my head the whole time I was hacking at CERN (most definitely a highlight of 2013).
  10. Hey, Manhattan by Prefab Sprout …because I was in New York.
  11. Pulsar by Vangelis …because I was writing about Jocelyn Bell Burnell.
  12. Romeo Had Juliette by Lou Reed …because Lou Reed died, and also: this song is pure poetry.

I like This Is My Jam. On the one hand, it’s a low-maintenance little snippet of what’s happening right now. On the other hand, it makes for a lovely collage over time.

Or, as Matt put it back in 2010:

We’ve all been so distracted by The Now that we’ve hardly noticed the beautiful comet tails of personal history trailing in our wake.

Without deliberate planning, we have created amazing new tools for remembering. The real-time web might just be the most elaborate and widely-adopted architecture for self-archival ever created.

Sunday, December 22nd, 2013

Frank Chimero × Blog × Homesteading 2014

I’m with Frank. He’s going Indie Web for 2014:

I’m returning to a personal site, which flips everything on its head. Rather than teasing things apart into silos, I can fuse different kinds of content together.

Homesteading instead of sharecropping:

So, I’m doubling down on my personal site in 2014.

Saturday, December 21st, 2013

WarGames Magazine Identified By Michael Walden

Now this is what I call research:

Through the use of my knowledge of computer magazines, my sharp eyes, and other technical knowledge, I have overcome the limited amount of information available in the video content of WarGames and with complete certainty identified the exact name and issue number of the magazine read on screen by David L. Lightman in WarGames.

Friday, December 20th, 2013

Neave’s Notes — Why I create for the web

Follow this link to receive a love letter to the humble hyperlink.

Thursday, December 19th, 2013

About Variables in CSS and Abstractions in Web Languages | CSS-Tricks

Chris has written a response to my post (which was itself inspired by his excellent An Event Apart presentation) all about CSS, variables, and abstractions.

I love this kind of old-school blog-to-blog discussion.

Spimes: A Happy Birthday Story

Expanding on an exercise from last year’s Hackfarm, Brian and Mike have written a deliciously dystopian near-future short story.

Happy 17th Birthday CSS | Web Directions

A lovely history lesson on CSS from John.

Tuesday, December 17th, 2013

Myth - CSS the way it was imagined.

This looks interesting: a CSS postprocessor that polyfills support for perfectly cromulent styles.

Earth wind map

A beautiful real-time visualisation of winds on our planet.


Emil has been playing around with CSS variables (or “custom properties” as they should more correctly be known), which have started landing in some browsers. It’s well worth a read. He does a great job of explaining the potential of this new CSS feature.
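For illustration, here’s roughly what the feature looks like—the exact syntax was still shifting between spec drafts at the time of writing, and the property names here are my own invention:

```css
/* A custom property, declared once on the root element… */
:root {
    --brand-color: #4682b4;
}

/* …and referenced anywhere else in the style sheet. */
a {
    color: var(--brand-color);
}
```

Note that the value of `color` now refers to a declaration made elsewhere in the style sheet.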

For now though, most of us will be using preprocessors like Sass to do our variabling for us. Sass was the subject of Chris’s talk at An Event Apart in San Francisco last week—an excellent event as always.

At one point, Chris briefly mentioned that he’s quite happy for variables (or constants, really) to remain in Sass and not become part of the CSS spec. Alas, I didn’t get a chance to chat with Chris about that some more, but I wonder if his thinking aligns with mine. Because I too believe that CSS variables should remain firmly in the realm of preprocessors rather than browsers.

Hear me out…

There are a lot of really powerful programmatic concepts that we could add to CSS, all of which would certainly make it a more powerful language. But I think that power would come at an expense.

Right now, CSS is a relatively-straightforward language:

CSS isn’t voodoo, it’s a simple and straightforward language where you declare an element has a style and it happens.

That’s a somewhat-simplistic summation, and there’s definitely some complexity to certain aspects of CSS—like specificity or margin collapsing—but on the whole, it has a straightforward declarative syntax:

selector {
    property: value;
}

That’s it. I think that this simplicity is quite beautiful and surprisingly powerful.

Over at my collection of design principles, I’ve got a section on Bert Bos’s essay What is a good standard? In theory, it’s about designing standards in general, but it matches very closely to CSS in particular. Some of the watchwords are maintainability, modularity, extensibility, simplicity, and learnability. A lot of those principles are clearly connected. I think CSS does a pretty good job of balancing all of those principles, while still providing authors with quite a bit of power.

Going back to that fundamental pattern of CSS, you’ll notice that it is completely modular:

selector {
    property: value;
}

None of those pieces (selector, property, value) reference anything elsewhere in the style sheet. But as soon as you introduce variables, that modularity is snapped apart. Now you’ve got a value that refers to something defined elsewhere in the style sheet (or even in a completely different style sheet).

But variables aren’t the first addition to CSS that sacrifices modularity. CSS animations already do that. If you want to invoke a keyframe animation, you have to define it. The declaration and the invocation happen in separate blocks:

selector {
    animation-name: myanimation;
}
@keyframes myanimation {
    from {
        property: value;
    }
    to {
        property: value;
    }
}

I’m not sure that there’s any better way to provide powerful animations in CSS, but this feature does sacrifice modularity …and I believe that has a knock-on effect for learnability and readability.

So CSS variables (or custom properties) aren’t the first crack in the wall of the design principles behind CSS. To mix my metaphors, the slippery slope began with @keyframes (and maybe @font-face too).

But there’s no denying that having variables/constants in CSS provides a lot of power. There are plenty of programming ideas (like loops and functions) that would provide lots of power to CSS. I still don’t think it’s a good idea to mix up the declarative and the programmatic. That way lies XSLT—a strange hybrid beast that’s sort of a markup language and sort of a programming language.

I feel very strongly that HTML and CSS should remain learnable languages. I don’t just mean for professionals. I believe it’s really important that anybody should be able to write and style a web page.

Now does that mean that CSS must therefore remain hobbled? No, I don’t think so. Thanks to preprocessors like Sass, we can have our cake and eat it too. As professionals, we can use tools like Sass to wield the power of variables, functions (mixins) and other powerful concepts from the programming world.

Preprocessors cut the Gordian knot that’s formed from the tension in CSS between providing powerful features and remaining relatively easy to learn. That’s why I’m quite happy for variables, mixins, nesting and the like to remain firmly in the realm of Sass.
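To make that concrete, here’s a small sketch of the kind of variables and mixins I mean—the names are hypothetical, but the syntax is standard Sass (SCSS flavour):

```scss
// A constant, defined once and reused throughout the style sheet.
$brand-color: #4682b4;

// A mixin: a parameterised, reusable chunk of declarations.
@mixin rounded($radius: 4px) {
    border-radius: $radius;
}

.button {
    background-color: $brand-color;
    @include rounded(8px);
}
```

Crucially, all of this is resolved at compile time: the browser only ever sees plain, declarative CSS.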

Incidentally, at An Event Apart, Chris was making the case that Sass’s power comes from the fact that it’s an abstraction. I don’t think that’s necessarily true—I think the fact that it provides a layer of abstraction might be a red herring.

Chris made the case for abstractions being inherently A Good Thing. Certainly if you go far enough down the stack (to Assembly Language), that’s true. But not all abstractions are good abstractions, and I’m not just talking about Spolsky’s law of leaky abstractions.

Let’s take two different abstractions that share a common origin story:

  • Sass is an abstraction layer for CSS.
  • Haml is an abstraction layer for HTML.

If abstractions were inherently A Good Thing, then they would both provide value to some extent. But whereas Sass is a well-designed tool that allows CSS-savvy authors to write their CSS more easily, Haml is a steaming pile of poo.

Here’s the crucial difference: Sass doesn’t force you to write all your CSS in a completely new way. In fact, every .css file is automatically a valid .scss file. You are then free to use—or ignore—the features of Sass at your own pace.

Haml, on the other hand, forces you to use a completely new whitespace-significant syntax that maps on to HTML. There are no half-measures. It is an abstraction that is not only opinionated, it refuses to be reasoned with.

So I don’t think that Sass is good because it’s an abstraction; I think that Sass is good because it’s a well-designed abstraction. Crucially, it’s also easy to learn …just like CSS.

Salter Cane: Sorrow by rozeink on deviantART

Wow …somebody has a tattoo of Salter Cane cover artwork.

Monday, December 16th, 2013

xkcd: Syllable Planning

This is called expletive infixation.

I’ll always remember the “Phila-fucking-delphia” example from Steven Pinker’s The Language Instinct:

If you said “Philadel-fucking-phia”, you’d be laughed out of the pool hall.

rem : fullfrontalconf2013 on Huffduffer

Get these down your earholes!

Remy has huffduffed all the audio from this year’s Full Frontal conference.

The Creation Engine No. 2: Osprey Therian

I was going to say that this is a really lovely post from Jim about Second Life, but it’s not actually about Second Life at all: it’s about a person.

Sunday, December 15th, 2013


Ajax was a really big deal six, seven, eight years ago. My second book was all about Ajax. I spoke about Ajax at conferences and gave workshops all about using Ajax and progressive enhancement.

During those workshops, I would often point out that Ajax had the potential to be abused terribly. Until the advent of Ajax, it was very clear to a user when data was being submitted to a server: you’d have to click a link or submit a form. As soon as you introduce asynchronous communication, it’s possible for the server to get information from the client even without a full-page refresh.

Imagine, for example, that you’re typing a message into a textarea. You might begin by typing, “Why, you stuck up, half-witted, scruffy-looking nerf…” before calming down and thinking better of it. Before Ajax, there was no way that what you had typed could ever reach the server. But now, it’s entirely possible to send data via Ajax with every key press.
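To make the thought experiment concrete, here’s a minimal sketch of how such tracking could work. Everything here is hypothetical—the function name and endpoint are mine, and this is certainly nobody’s actual code:

```javascript
// Pure helper: compare two snapshots of a textarea and report only
// metadata about what was deleted—not the deleted text itself.
function deletionMetadata(previous, current) {
    return {
        deletedCharacters: Math.max(previous.length - current.length, 0)
    };
}

// In a browser, this could be wired up to every single keystroke,
// quietly shipping the metadata off with an asynchronous request:
//
//   var lastValue = '';
//   textarea.addEventListener('input', function () {
//       var meta = deletionMetadata(lastValue, textarea.value);
//       if (meta.deletedCharacters > 0) {
//           var xhr = new XMLHttpRequest();
//           xhr.open('POST', '/log'); // no page refresh required
//           xhr.send(JSON.stringify(meta));
//       }
//       lastValue = textarea.value;
//   });
```

The point being: the user never clicks a link or submits a form, but the server still hears about every backspace.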

It was just a thought experiment. I wasn’t actually that worried that anyone would ever do something quite so creepy.

Then I came across this article by Jennifer Golbeck in Slate all about Facebook tracking what’s entered—but then erased—within its status update form:

Unfortunately, the code that powers Facebook still knows what you typed—even if you decide not to publish it. It turns out that the things you explicitly choose not to share aren’t entirely private.

Initially I thought there must have been some mistake. I erroneously called out Jen Golbeck when I found the PDF of a paper called The Post that Wasn’t: Exploring Self-Censorship on Facebook. The methodology behind the sample group used for that paper was much more old-fashioned than using Ajax:

First, participants took part in a weeklong diary study during which they used SMS messaging to report all instances of unshared content on Facebook (i.e., content intentionally self-censored). Participants also filled out nightly surveys to further describe unshared content and any shared content they decided to post on Facebook. Next, qualified participants took part in in-lab interviews.

But the Slate article was referencing a different paper that does indeed use Ajax to track instances of deleted text:

This research was conducted at Facebook by Facebook researchers. We collected self-censorship data from a random sample of approximately 5 million English-speaking Facebook users who lived in the U.S. or U.K. over the course of 17 days (July 6-22, 2012).

So what I initially thought was a case of alarmism—conflating something as simple as a client-side character count with actual server-side monitoring—turned out to be a pretty accurate reading of the situation. I originally intended to write a scoffing post about Slate’s linkbaiting alarmism (and call it “The shocking truth behind the latest Facebook revelation”), but it turns out that my scoffing was misplaced.

That said, the article has been updated to reflect that the Ajax requests are only sending information about deleted characters—not the actual content. Still, as we learned very clearly from the NSA revelations, there’s not much practical difference between logging data and logging metadata.

The nerds among us may start firing up our developer tools to keep track of unexpected Ajax requests to the server. But what about everyone else?

This isn’t the first time that the power of JavaScript has been abused. Every browser now ships with an option to block pop-up windows. That’s because the ability to spawn new windows was so horribly misused. Maybe we’re going to see similar preference options to avoid firing Ajax requests on keypress.

It would be depressingly reductionist to conclude that any technology that can be abused will be abused. But as long as there are web developers out there who are willing to spawn pop-up windows or force persistent cookies or use Ajax to track deleted content, the depressingly reductionist conclusion looks like self-fulfilling prophecy.

Time - YouTube

The video of my closing talk at this year’s Full Frontal conference, right here in Brighton.

I had a lot of fun with this, although I was surprisingly nervous before I started: I think it was because I didn’t want to let Remy down.

Defining the damn thang

Chris recently documented the results from his survey which asked:

Is it useful to distinguish between “web apps” and “web sites”?

His conclusion:

There is just nothing but questions, exemptions, and gray area.

This is something I wrote about a while back:

Like obscenity and brunch, web apps can be described but not defined.

The results of Chris’s poll are telling. The majority of people believe there is a difference between sites and apps …but nobody can agree on what it is. The comments make for interesting reading too. The more people chime in with an attempt to define exactly what a “web app” is, the more it proves the point that the term “web app” isn’t a useful word (in the sense that useful words should have an agreed-upon meaning).

Tyler Sticka makes a good point:

By this definition, web apps are just a subset of websites.

I like that. It avoids the false dichotomy that a product is either a site or an app.

But although it seems that the term “web app” can’t be defined, there are a lot of really smart people who still think it has some value.

I think Cennydd is right. I think the differences exist …but I also think we’re looking for those differences at the wrong scale. Rather than describing an entire product as either a website or a web app, I think it makes much more sense to distinguish between patterns.

Let’s take those two modifiers—behavioural and informational. But let’s apply them at the pattern level.

The “get stuff” sites that Jake describes will have a lot of informational patterns: how best to present a flow of text for reading, for example. Typography, contrast, whitespace; all of those attributes are important for an informational pattern.

The “do stuff” sites will probably have a lot of behavioural patterns: entering information or performing an action. Feedback, animation, speed; these are some of the possible attributes of a behavioural pattern.

But just about every product out there on the web contains a combination of both types of pattern. Like I said:

Is Wikipedia a website up until the point that I start editing an article? Are Twitter and Pinterest websites while I’m browsing through them but then flip into being web apps the moment that I post something?

Now you could make an arbitrary decision that any product with more than 50% informational patterns is a website, and any product with more than 50% behavioural patterns is a web app, but I don’t think that’s very useful.

Take a look at Brad’s collection of responsive patterns. Some of them are clearly informational (tables, images, etc.), while some of them are much more behavioural (carousels, notifications, etc.). But Brad doesn’t divide his collection into two, saying “Here are the patterns for websites” and “Here are the patterns for web apps.” That would be a dumb way to divide up his patterns, and I think it’s an equally dumb way to divide up the whole web.

What I’m getting at here is that, rather than trying to answer the question “what is a web app, anyway?”, I think it’s far more important to answer the other question I posed:


Why do you want to make that distinction? What benefit do you gain by arbitrarily dividing the entire web into two classes?

I think by making the distinction at the pattern level, that question starts to become a bit easier to answer. One possible answer is to do with the different skills involved.

For example, I know plenty of designers who are really, really good at informational patterns—they can lay out content in a beautiful, clear way. But they are less skilled when it comes to thinking through all the permutations involved in behavioural patterns—the “arrow of time” that’s part of so much interaction design. And vice-versa: a skilled interaction designer isn’t necessarily the best at old-school knowledge of type, margins, and hierarchy. But both skillsets will be required on almost every project on the web.

So I do believe there is value in distinguishing between behaviour and information …but I don’t believe there is value in trying to shoehorn entire products into just one of those categories. Making the distinction at the pattern level, though? That I can get behind.


Incidentally, some of the respondents to Chris’s poll shared my feeling that the term “web app” was often used from a marketing perspective to make something sound more important and superior:

Perhaps it’s simply fashion. Perhaps “website” just sounds old-fashioned, and “web app” lends your product a more up-to-date, zingy feeling on par with the native apps available from the carefully-curated walled gardens of app stores.

Approaching things from the patterns perspective, I wonder if those same feelings of inferiority and superiority are driving the recent crop of behavioural patterns for informational content: parallaxy, snowfally, animation patterns are being applied on top of traditional informational patterns like hierarchy, measure, and art direction. I’m not sure that the juxtaposition is working that well. Taking the single interaction involved in long-form informational patterns (that interaction would be scrolling) and then using it as a trigger for all kinds of behavioural patterns feels …uncanny.

Brian Aldiss: ‘These days I don’t read any science fiction. I only read Tolstoy’ | Books | The Guardian

A profile of Brian Aldiss in The Guardian.

I still can’t quite believe I managed to get him for last year’s Brighton SF.

Saturday, December 14th, 2013


My debit card is due to expire so my bank has sent me a new card to replace it. I’ve spent most of the day updating my billing details on various online services that I pay for with my card.

I’m sure I’ll forget about one or two. There’s the obvious stuff like Netflix and iTunes, but there are also the many services that I use to help keep my websites running smoothly:

But there’s one company that will not be receiving my new debit card details: Adobe. That’s not because of any high-and-mighty concerns I might have about monopolies on the design software market—their software is, mostly, pretty darn good (‘though I’m not keen on their Mafia-style pricing policy). No, the reason why I won’t give Adobe my financial details is that they have proven that they cannot be trusted:

We also believe the attackers removed from our systems certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders.

The story broke two months ago. Everyone has mostly forgotten about it, like it’s no big deal. It is a big deal. It is a very big deal indeed.

I probably won’t be able to avoid using Adobe products completely; I might have to use some of their software at work. But I’ll be damned if they’re ever getting another penny out of me.

Flickr: The British Library’s Photostream

This is a wonderful addition to the already-wonderful Flickr Commons: over one million pictures from the British Library, available with liberal licensing.

Y’know, I’m worried about what will happen to my own photos when Flickr inevitably goes down the tubes (there are still some good people there fighting the good fight, but they’re in the minority and they’re battling against the douchiest of Silicon Valley managerial types who have been brought in to increase “engagement” by stripping away everything that makes Flickr special) …but what really worries me is what’s going to happen to Flickr Commons. It’s an unbelievably important and valuable resource.

Friday, December 13th, 2013

300ms tap delay, gone away - HTML5Rocks Updates

I think Chrome is doing the right thing by removing the 300 millisecond tap delay on sites that set width=device-width — it’s certainly better than only doing it on sites that set user-scalable=no, which felt like rewarding bad behaviour.
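For reference, this is the viewport declaration in question — the standard responsive-design boilerplate, nothing site-specific:

```html
<!-- A page that sets its viewport width to the device width has
     opted into responsive layout, so double-tap-to-zoom matters
     less; Chrome now uses this as its cue to drop the 300ms
     delay, rather than requiring user-scalable=no. -->
<meta name="viewport" content="width=device-width">
```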

Thursday, December 12th, 2013

This is a Website – Jeffrey Zeldman

I had a lovely dinner last night with Jeffrey, Tantek, Cindy and Daniel. A combination of nostalgia and indie web chatter prompted Jeffrey to pen this beautiful ode to independent publishing.

We were struggling, whether we knew it or not, to found a more fluid society. A place where everyone, not just appointed apologists for the status quo, could be heard. That dream need not die. It matters more now than ever.

Monday, December 9th, 2013

OriDomi - origami for the web

A fun little JavaScript library for folding the DOM like paper. The annotated source is really nicely documented.

Type Rendering Mix

I got excited when Tim Brown announced this at An Event Apart today: a small JavaScript tool for detecting what kind of rasterising and anti-aliasing a browser is using, and adding the appropriate classes to the root element (in much the same way that Web Font Loader does).

Alas, it turns out that it’s reliant on user-agent string sniffing. I guess that’s to be expected: this isn’t something that can be detected directly. Still, it feels a little fragile: whenever you use any user-agent sniffing tool you are entering an arms race that requires you to keep your code constantly updated.

Why I’m turning JavaScript off by default

Another good ol’ rant from Tom. It’s a bit extreme, but the underlying lament over the abandonment of progressive enhancement is well founded.

Toward A People Focused Mobile Communication Experience - Tantek

Some good brainstorming from Tantek that follows on nicely from Anne’s recent manifesto.

Dinosaurs! WTF?

A blog covering the conservative dinosaur readiness movement.

Sunday, December 8th, 2013

Hackfarm: One Week, a Dozen Projects, 20 “Lefties”

Ant—the latest super-smart addition to the Clearleft team—describes this year’s Hackfarm, which happened a couple of weeks ago.

It was Ant’s first week. Or, as he described it when we were wrapping up all the hacking, “Best first week at a job ever!”

An Hour of Code spawns hours of coding

Here’s a heartwarming tale. It starts out as a description of a processing.js project for Code Club (which is already a great story) and then morphs into a description of how anyone can contribute to make a codebase better …resulting in a lovely pull request on GitHub.

Friday, December 6th, 2013

Poll Results: “Sites” vs “Apps” | CSS-Tricks

Some excellent research from Chris, canvassing opinions on whether there’s a difference between web “apps” and web “sites”. His conclusion:

Almost none of the points above ring true for me. All I see are exceptions and gray area.

If nothing else, the fact that none of the proposed distinctions agree with one another shows how pointless the phrase “web app” is—if people have completely differing ideas on what a phrase means, it is completely useless in furthering discussion …the very definition of a buzzword.

This leads me to think perhaps the “web app” moniker (certainly the newer of the two) is simply just a fashionable term. We like the sound of it, so we use it, regardless if it truly means anything.

But all of this is, I think, missing the more important point: why? Why would you want to separate the cornucopia of the web into two simplistic buckets? What purpose does it serve? That’s the question that really needs to be answered.

If we could pin down a super accurate definition that we agreed on, even then it might not be particularly useful. And since we can’t, I argue it’s even less useful.

The most accurate (and damning) definition of a “web app” that I’ve heard so far is: a web site that requires JavaScript to work.

Thursday, December 5th, 2013

Chloe Weil — Hipster

Chloe is going all in on the Indie Web. Here, she outlines how she’s posting to Twitter from her own site with a POSSE system (Publish on your Own Site, Syndicate Elsewhere).
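The syndicated copy typically links back to the canonical post on your own site. As a rough, hypothetical sketch (this is not Chloe’s actual code, just an illustration of the idea), here’s the kind of helper a POSSE setup needs when syndicating to a 140-character silo like Twitter:

```javascript
// Hypothetical sketch of one small piece of a POSSE setup:
// composing the syndicated copy of a note for a 140-character
// silo, truncating the text so the canonical permalink to your
// own site always fits at the end.
function syndicatedCopy(text, permalink, limit = 140) {
  const suffix = ' ' + permalink;
  if (text.length + suffix.length <= limit) {
    return text + suffix;
  }
  // Leave one character of room for the ellipsis.
  const room = limit - suffix.length - 1;
  return text.slice(0, room) + '…' + suffix;
}
```

Publishing to your own site first, then passing the result of something like this to the silo’s API, keeps your site as the canonical home of the content.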

Tuesday, December 3rd, 2013

200 Geeks, 24 Hours: Science Hack Day in San Francisco

This is a wonderful, wonderful round-up by KQED of the most recent Science Hack Day in San Francisco …a truly marvellous event.

Be sure to watch the accompanying video—it brought a tear to my eye.

Anatomy of a failed rendition

A superb bit of sleuthing by James:

From London to the Mediterranean, to Malta and back again, over multiple countries and jurisdictions, through airspace and legal space. The contortions of G-WIRG’s flight path mirror the ethical labyrinth the British Government finds itself in when, against all better judgements, it insists on punishing individuals as an example to others, using every weasel justification in its well-funded legal war chest. Using a combination of dirty laws and private technologies to transform and transmit people from one jurisdiction, one legal condition and category, to another: this is the meaning of the verb “to render”.

The (other) Web we lost

John shares his concerns about the increasing complexity involved in developing for the web.

The Pastry Box Project | 2 December 2013, baked by Anne van Kesteren

Coming from anyone else, this glorious vision might seem far-fetched, but Anne is working to make it a reality.

Monday, December 2nd, 2013

The Business of Responsive Design by Mark Boulton

The transcript of Mark’s talk from last week’s Handheld conference in Cardiff.

There are mountains.

Sunday, December 1st, 2013

An Open Letter - Handheld 2013

This was my favourite moment from the Handheld conference in Cardiff.