Edge words

I really enjoyed last year’s Edge conference so I made sure not to miss this year’s event, which took place last weekend.

The format was a little different this time ‘round. Last year the whole day was taken up with panels. Now, panels are often rambling, cringeworthy affairs, but Edge Conf is one of the few events that does panels well: they’re run on a tight schedule and put together with lots of work in advance. At this year’s Edge, the morning was taken up with these tightly-run panels as usual, but the afternoon consisted of more Barcamp-like breakout sessions.

I’ve got to be honest: I don’t think the new format worked that well. The breakout sessions didn’t have the true flexibility that you get with an unconference schedule, so there was no opportunity to merge similarly-themed sessions. There was, for example, a session on components at the same time as a session on accessibility in web components.

That highlights the other issue: FOMO. I’m really not a fan of multi-track events; there were so many sessions that sounded really interesting, but I couldn’t clone myself and go to all of them at once.

But, like I said, the first half of the day was taken up with four sequential (rather than parallel) panels and they were all excellent. All of the moderators did a fantastic job, and I was fortunate enough to sit in on the progressive enhancement panel expertly moderated by Lyza.

The event is called Edge for a reason. There is a rarefied atmosphere—and not just because of the broken-down air conditioning. This is a room full of developers on the cutting edge of web development technologies. Being at Edge Conf means being in a bubble. And being in a bubble is absolutely fine as long as you’re aware you’re in a bubble. It would be problematic if anyone were to mistake the audience and the discussions at Edge as being in any way representative of typical working web devs.

One of the most insightful comments of the day came from Christian who said, “Yes, but this is Edge Conf.” You’re going to need some context for that quote, so here it is…

On the web components panel that Christian was moderating, Alex was making a point about the ubiquity of tools—”Tooling will save you”, he said—and he asked for a show of hands from the audience on who was not using some particular tooling technology (transpilers, package managers, build tools; I can’t remember the specific question). Nobody put their hand up. “See?” asked Alex. “Yes”, said Christian, “but this is Edge Conf.”

Now, while I wasn’t keen on the format of the afternoon with its multiple simultaneous breakout sessions, that doesn’t mean I didn’t enjoy the ones I plumped for. Quite the opposite. The last breakout session of the day, again expertly moderated by Lyza, was particularly great.

The discussion was all about progressive enhancement. There seemed to be a general consensus that we’re all 100% committed to the results of progressive enhancement—greater availability, wider reach, and better performance—but that the term itself is widely misunderstood as “making all of your functionality work even with JavaScript switched off”. This misunderstanding couldn’t be further from the truth:

  1. It’s not about making all of your functionality available; it’s about making your core functionality available: everything else can be considered an enhancement and it’s perfectly fine if not everyone gets that enhancement.
  2. This isn’t about switching JavaScript off; it’s about any particular technology not being available for reasons we can’t foresee (network issues, browser issues, whatever it may be). See the sketch just after this list for how that plays out in code.
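
To make that distinction concrete, here’s a minimal sketch of my own (not something from the panel): a hypothetical comment form where the core functionality is a plain form submission, and Ajax is layered on top only if the browser can handle it.

    // Core functionality: a regular <form> that posts to the server and works
    // everywhere. JavaScript, if it arrives and runs, enhances it in place.
    var form = document.querySelector('form.comment-form'); // hypothetical form
    if (form && 'fetch' in window) { // feature-detect before enhancing
      form.addEventListener('submit', function (event) {
        event.preventDefault();
        fetch(form.action, { method: 'POST', body: new FormData(form) })
          .then(function (response) {
            if (!response.ok) { throw new Error('Server said no'); }
            form.insertAdjacentHTML('afterend', '<p>Comment posted.</p>');
          })
          .catch(function () {
            form.submit(); // fall back to the full-page submission
          });
      });
    }

Nobody loses the ability to post a comment; some people get a smoother experience.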

And yet the misunderstanding persists. For that reason, most of the people in the discussion at Edge Conf were in favour of simply dropping the term progressive enhancement and instead focusing on terms like availability and access. Tim writes:

I’m not sure what we call it now. Maybe we do need another term to get people to move away from the “progressive enhancement = working without JS” baggage that distracts from the real goal.

And Stuart writes:

So I’m not going to be talking about progressive enhancement any more. I’m going to be talking about availability. About reach. About my web apps being for everyone even when the universe tries to get in the way.

But Jason writes:

I completely disagree that we should change nomenclature because there exists some small segment of Web designers unwilling to expand their development toolbox. I think progressive enhancement—the term—remains useful, descriptive, and appropriate.

I’m torn. On the one hand, I agree with Jason. The term “progressive enhancement” is a great descriptor. But on the other hand, I don’t want to end up like that guy who’s made it his life’s work to change every instance of the phrase “comprised of” to “comprises” (or “consists of”) on Wikipedia. Technically, he’s correct. But it doesn’t sound like a fun way to spend your days.

I guess my worry is, if I write an article or give a presentation, and I title it something to do with progressive enhancement, am I going to alienate and put off the very audience I’m trying to reach? But if I title it something else, am I tricking people?

Words are hard.

100 words 097

It’s the weekend …and I got up at the crack of dawn to head to London. Yes, on this beautiful sunny day, I elected to take the commuter train up to the big city to spend the day trapped inside a building where the air conditioning crapped out. Sweaty!

But it was worth it. I was at the Edge conference, which is always an intense dose of condensed nerdery. This year I participated in one of the panels: a discussion on progressive enhancement expertly moderated by Lyza. She also led a break-out session on the same topic later on.

100 words 007

I’m staying at the Edgewater hotel in Seattle, an unusual structure that is literally on the water, giving it a nautical atmosphere. The views out on the Puget Sound are quite lovely.

Inside, the hotel has more of a Twin Peaks vibe. It feels less like a hotel and more like a lodge.

The hotel is clearly proud of the many rock stars it has hosted over the years. As you settle into your cosy room, you can imagine what it was like when the Beatles were fishing from their balcony, or Led Zeppelin were doing unspeakable things with mudsharks.

Notes from the edge

I went up to London for the Edge Conference on Friday. It’s not your typical conference. Instead of talks, there are panels, but not the crap kind, where nobody says anything of interest: these panels are ruthlessly curated and prepared. There’s lots of audience interaction too, but again, not the crap kind, where one or two people dominate the discussion with their own pet topics: questions are submitted ahead of time, and then you are called upon to ask yours at the right moment. It’s like Question Time for the web.

Components

The first panel was on that hottest of topics: Web Components. Peter Gasston kicked it off with a superb introduction to the subject. Have a read of his equally-excellent article in Smashing Magazine to get the gist.

Needless to say, this panel covered similar ground to the TAG meetup I attended a little while back, and left me with similar feelings: I’m equal parts excited and nervous; optimistic and worried. If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once. And I think that’ll be (mostly) okay.

I butted into the discussion when the topic of accessibility came up. I was a little worried about what I was hearing, which was mainly, “Oh, ARIA takes care of the accessibility.” I felt like Web Components were passing the buck to ARIA, which would be fine if it weren’t for the fact that ARIA can’t cover all the possible use-cases of Web Components.

I chatted about this with Derek and Nicole during the break, but I’m not sure if I was articulating my thoughts very well, so I’ll have another stab at it here:

Let me set the scene for Web Components…

Historically, HTML has had a limited vocabulary for expressing interface widgets—mostly a bunch of specialised form fields like, say, the select element. The plus side is that there’s a consensus of understanding among the browsers, so you don’t have to explain what a select element does; the browsers already know. The downside is that whenever we want to add a new interface element like input type="range", it takes time to get into browsers and through the standards process. Web Components allow you to conjure up interface elements, and you don’t have to lobby browser makers or standards groups in order to make browsers understand your newly-minted element: you provide all the behavioural and styling instructions in one bundle.
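
As an illustration, defining a made-up element might look something like this. I’m using the custom elements syntax as it exists today (the exact API was still in flux when this panel took place), and the element itself is entirely invented.

    // A hedged sketch: <fancy-toggle> is a made-up element, with its markup,
    // styling, and behaviour bundled inside the element definition itself.
    class FancyToggle extends HTMLElement {
      connectedCallback() {
        this.attachShadow({ mode: 'open' }).innerHTML =
          '<style>button { font: inherit; }</style>' +
          '<button type="button">Toggle</button>';
        this.shadowRoot.querySelector('button')
          .addEventListener('click', () => this.toggleAttribute('pressed'));
      }
    }
    customElements.define('fancy-toggle', FancyToggle);
    // <fancy-toggle></fancy-toggle> now works in markup without a standards
    // body having to bless it first.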

So Web Components make use of HTML, JavaScript, and (scoped) CSS. The possibility space for the HTML is infinite: if you need an element that doesn’t exist, you just invent it. The possibility space for the JavaScript is pretty close to infinite: it’s a Turing-complete language that can be wrangled to do just about anything. The possibility space for CSS isn’t infinite, but it’s pretty darn big: there’s not much you can’t do with it at this point.

What’s missing from that bundle of HTML, JavaScript, and CSS are hooks for assistive technology. Up until now, this is something we’ve mostly left to the browser. We don’t have to include any hooks for assistive technology when we use a select element because the browser knows what it is and can expose that knowledge to the assistive technology. If we’re going to start making up our own interface elements, we now have to take on the responsibility of providing that information to assistive technology.

How do we do that? Well, right now, our only option is to use ARIA …but the possibility space defined by ARIA is much, much smaller than that of HTML, JavaScript, or CSS.
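
To sketch what that looks like in practice (again with an invented element, not anything proposed on the panel), the burden of describing the widget falls entirely on the component author, working from ARIA’s fixed vocabulary:

    // A made-up <fancy-slider>: the browser has no idea what it is, so the
    // component has to hand-assemble its own semantics from ARIA's role list.
    class FancySlider extends HTMLElement {
      connectedCallback() {
        this.setAttribute('role', 'slider');  // "slider" happens to exist in ARIA
        this.setAttribute('tabindex', '0');   // make it keyboard-focusable
        this.setAttribute('aria-valuemin', '0');
        this.setAttribute('aria-valuemax', '100');
        this.setAttribute('aria-valuenow', this.getAttribute('value') || '50');
        this.addEventListener('keydown', (event) => {
          const value = Number(this.getAttribute('aria-valuenow'));
          if (event.key === 'ArrowRight') { this.setAttribute('aria-valuenow', String(value + 1)); }
          if (event.key === 'ArrowLeft') { this.setAttribute('aria-valuenow', String(value - 1)); }
        });
      }
    }
    customElements.define('fancy-slider', FancySlider);

A slider is fine because ARIA happens to have a role for it; an element whose behaviour doesn’t map onto one of those predefined roles has nothing comparable to reach for.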

That’s not a criticism of ARIA: that’s the way it was designed. It’s a reactionary technology, designed to plug the gaps where the native semantics of HTML just don’t cut it. The vocabulary of ARIA was created by looking at the kinds of interface elements people are making—tabs, sliders, and so on. That’s fine, but it can’t scale to keep pace with Web Components.

The problem that Web Components solve—the fact that it currently takes too long to get a new interface element into browsers—doesn’t have a corresponding solution when it comes to accessibility hooks. Just adding more and more predefined ARIA roles won’t cut it—we need some kind of extensible accessibility that matches the expressive power of Web Components. We don’t need a bigger vocabulary in ARIA, we need a way to define our own vocabulary—an extensible ARIA, if you will.

Hmmm… I’m still not sure I’m explaining myself very well.

Anyway, I just want to make sure that accessibility doesn’t get left behind (again!) in our rush to create a new solution to our current problems. With Web Components still in their infancy, this feels like the right time to raise these concerns.

That highlights another issue, one that Nicole picked up on. It’s really important that the extensible web community and the accessibility community talk to each other.

Frankly, the accessibility community can be its own worst enemy sometimes. So don’t get me wrong: I’m not bringing up my concerns about the accessibility of Web Components in order to cry “fail!”—I just want to make sure that it’s on the table (and I’m glad that Alex is one of the people driving Web Components—his history with Dojo reassures me that we can push the boundaries of interface widgets on the web without leaving accessibility behind).

Anyway …that’s enough about that. I haven’t mentioned all the other great discussions that took place at Edge Conference.

Developer Tooling

The Web Components panel was followed by a panel on developer tools. This was dominated by representatives from different browsers, each touting their own set of in-browser tools. But the person who I really wanted to rally behind was Kenneth Auchenberg. He quite rightly asks why our developer tools and our text editors are two different apps. And rather than try to put text editors into developer tools, what we really want is to pull developer tools into our text editors …all the developer tools from all the browsers, not just one set of developer tools from one specific browser.

If you haven’t seen Kenneth’s presentation from Full Frontal, I urge you to watch it or listen to it.

I had my hand up to jump into the discussion towards the end, but time ran out so I didn’t get a chance. Paul came over afterwards and asked what I was going to say. Here’s what I told him…

I’m fascinated by the social dynamics around how browsers get made. This is an area where different companies are simultaneously collaborating and competing.

Broadly speaking, the feature set of a web browser can be divided into two buckets:

In one bucket, you’ve got the support for standards like HTML, CSS, JavaScript. Now, individual browsers might compete on how quickly or how thoroughly they get those standards implemented, but at this point, there’s no disagreement about the fact that proprietary crap is bad, standards are good, and that no matter how painful the process can be, browser makers all need to get together and work on standards together. Heck, even Apple can’t avoid collaborating on this stuff.

In the other bucket, you’ve got all the stuff that browsers compete against each other with: speed, security, the user interface, etc. A lot of this takes place behind closed doors, and that’s fine. There’s no real need for browser makers to collaborate on this stuff, and it could even hurt their competitive advantage if they did collaborate.

But here’s the problem: developer tools seem to be coming out of that second bucket instead of the first. There doesn’t seem to be much communication between the browser makers on developer tools. That’s fine if you see developer tools as an opportunity for competition, but it’s lousy if you see developer tools as an opportunity for interoperability.

This is why Kenneth’s work is so important. He’s crying out for more interoperability between browsers when it comes to developer tools. Why can’t they all use the same low-level APIs under the hood? Then they can still compete on how pretty their dev tools look, without making life miserable for developers who want to move quickly between browsers.
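
To make that concrete, here’s a rough sketch of my own (not Kenneth’s code) of what one browser’s existing low-level hook looks like from a script. It assumes a Chromium-based browser launched with --remote-debugging-port=9222, a recent Node, and the ws package; other browsers would need their own equivalents, which is exactly the interoperability gap.

    // Talk to the browser's remote debugging endpoint the way a dev tool would.
    const WebSocket = require('ws');

    fetch('http://localhost:9222/json')            // list the open pages
      .then((response) => response.json())
      .then((targets) => {
        const socket = new WebSocket(targets[0].webSocketDebuggerUrl);
        socket.on('open', () => {
          // Ask the page to evaluate an expression, just like a console would.
          socket.send(JSON.stringify({
            id: 1,
            method: 'Runtime.evaluate',
            params: { expression: 'document.title', returnByValue: true }
          }));
        });
        socket.on('message', (data) => console.log(String(data)));
      });

If every browser exposed an agreed-upon protocol along these lines, a text editor could drive any of them in the same way.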

As painful as it might be, I think that browser makers should get together in some semi-formalised way to standardise this stuff. I don’t think that the W3C or the WHATWG are necessarily the right places for this kind of standardisation, but any kind of official cooperation would be good.

Build Process

The panel on build processes for front-end development kicked off with Gareth saying a few words. Some of those words included the sentence:

Make is probably older than you.

Cue glares from me and Scott.

Gareth also said that making websites means making software. We’re all making software—live with it.

This made me nervous. I’ve always felt that one of the great strengths of the web has been its low barrier to entry. The idea of a web that can only be made by qualified software developers doesn’t sound like a good thing to me.

Fortunately, things got cleared up later on. Somebody else asked a question about whether the barrier to entry was being raised by the complexity of tools like preprocessors, compilers, and transpilers. The consensus of the panel was that these are power tools for power users. So if someone were learning to make a website from scratch, you wouldn’t start them off with, say, Sass before they’d first learned CSS.

It was a fun panel, made particularly enjoyable by the presence of Kyle Simpson. I like the cut of his jib. Alas, I didn’t get the chance to tell him that in person. I had to duck out of the afternoon’s panels to get back to Brighton due to unforeseen family circumstances. But I did manage to catch some of the later panels on the live stream.

Closing thoughts

A common thread I noticed amongst many of the panels was a strong bias for decentralisation, rather than collaboration. That was most evident with Web Components—the whole point is that you can make up your own particular solution rather than waiting for a standards body. But it was also evident in the Developer Tools line-up, where each browser maker is reinventing the same wheels. And when it came to Build Process, it struck me that everyone is scratching their own itch instead of getting together to work on a common solution.

There’s nothing wrong with that kind of Darwinian approach to solving our problems, but it does seem a bit wasteful. Mairead Buchan was at Edge Conference too and she noticed the same trend. Sounds like she’s going to do something about it too.

Facing the future

There is much hand-wringing in the media about the impending death of journalism, usually blamed on the rise of the web or more specifically bloggers. I’m sympathetic to their plight, but sometimes journalists are their own worst enemy, especially when they publish badly-researched articles that fuel moral panic with little regard for facts (if you’ve ever been in a newspaper article yourself, you’ll know that you’re lucky if they manage to spell your name right).

Exhibit A: an article published in The Guardian called How I became a Foursquare cyberstalker. Actually, the article isn’t nearly as bad as the comments, which take ignorance and narrow-mindedness to a new level.

Fortunately Ben is on hand to set the record straight. He wrote Concerning Foursquare and communicating privacy. Far from being a lesser form of writing, this blog post is more accurate than the article it is referencing, helping to balance the situation with a different perspective …and a nice big dollop of facts and research. Ben is actually quite kind to The Guardian article but, in my opinion, his own piece is more interesting and thoughtful.

Exhibit B: an article by Jeffrey Rosen in The New York Times called The Web Means the End of Forgetting. That’s a bold title. It’s also completely unsupported by the contents of the article. The article contains anecdotes about people getting into trouble about something they put on the web, and—even though the consequences for that action played out in the present—he talks about the permanent memory bank of the Web and writes:

The fact that the Internet never seems to forget is threatening, at an almost existential level, our ability to control our identities.

Bollocks. Or, to use the terminology of Wikipedia, citation needed.

Scott Rosenberg provides the necessary slapdown, asking Does the Web remember too much — or too little?:

Rosen presents his premise — that information once posted to the Web is permanent and indelible — as a given. But it’s highly debatable. In the near future, we are, I’d argue, far more likely to find ourselves trying to cope with the opposite problem: the Web “forgets” far too easily.

Exactly! I get irate whenever I hear the truism that the web never forgets presented without any supporting data. It’s right up there with eskimos have fifty words for snow and people in the middle ages thought that the world was flat. These falsehoods are irritating at best. At worst, as is the case with the myth of the never-forgetting web, the lie is downright dangerous. As Rosenberg puts it:

I’m a lot less worried about the Web that never forgets than I am about the Web that can’t remember.

That’s a real problem. And yet there’s no moral panic about the very real threat that, once digitised, our culture could be in more danger of being destroyed. I guess that story doesn’t sell papers.

This problem has a number of thorns. At the most basic level, there’s the issue of link rot. I love the fact that the web makes it so easy for people to publish anything they want. I love that anybody else can easily link to what has been published. I hope that the people doing the publishing consider the commitment they are making by putting a linkable resource on the web.

As I’ve said before, a big part of this problem lies with the DNS system:

Domain names aren’t bought, they are rented. Nobody owns domain names, except ICANN.

I’m not saying that we should ditch domain names. But there’s something fundamentally flawed about a system that thinks about domain names in time periods as short as a year or two.

Then there’s the fact that so much of our data is entrusted to third-party sites. There’s no guarantee that those third-party sites give a rat’s ass about the long-term future of our data. Quite the opposite. The callous destruction of Geocities by Yahoo is a testament to how little our hopes and dreams mean to a company concerned with the bottom line.

We can host our own data but that isn’t quite as easy as it should be. And even with the best of intentions, it’s possible to have the canonical copies wiped from the web by accident. I’m very happy to see services like Vaultpress come on the scene:

Your WordPress site or blog is your connection to the world. But hosting issues, server errors, and hackers can wipe out in seconds what took years to build. VaultPress is here to protect what’s most important to you.

The Internet Archive is also doing a great job but Brewster Kahle shouldn’t have to shoulder the entire burden. Dave Winer has written about the idea of future-safe archives:

We need one or more institutions that can manage electronic trusts over very long periods of time.

The institutions need to be long-lived and have the technical know-how to manage static archives. The organizations should need the service themselves, so they would be likely to advance the art over time. And the cost should be minimized, so that the most people could do it.

The Library of Congress has its Digital Preservation effort. Dan Gillmor reports on the recent three-day gathering of the institution’s partners:

It’s what my technology friends call a non-trivial task, for all kinds of technical, social and legal reasons. But it’s about as important for our future as anything I can imagine. We are creating vast amounts of information, and a lot of it is not just worth preserving but downright essential to save.

There’s an even longer-term problem with digital preservation. The very formats that we use to store our most treasured memories can become obsolete over time. This goes to the very heart of why standards such as HTML—the format I’m betting on—are so important.

Mark Pilgrim wrote about the problem of format obsolescence back in 2006. I found his experiences echoed more recently by Paul Gilster, author of the superb Centauri Dreams, one of my favourite websites. He usually concerns himself with challenges on an even longer timescale, like the construction of a feasible means of interstellar travel, but he gives a welcome long-zoom perspective on digital preservation in Burying the Digital Genome, pointing to a project called PLANETS: Preservation and Long-term Access Through Networked Services.

Their plan involves the storage, not just of data, but of data formats such as JPEG and PDF: the equivalent of a Rosetta stone for our current age. A box containing format-decoding documentation has been buried in a bunker under the Swiss Alps. That’s a good start.

David Eagleman recently gave a talk for The Long Now Foundation entitled Six Easy Steps to Avert the Collapse of Civilization. Step two is Don’t lose things:

As proved by the destruction of the Alexandria Library and of the literature of Mayans and Minoans, “knowledge is hard won but easily lost.”

Long Now: Six Easy Steps to Avert the Collapse of Civilization on Huffduffer

I’m worried that we’re spending less and less time thinking about the long-term future of our data, our culture, and ultimately, our civilisation. Currently we are preoccupied with the real-time web: Twitter, Foursquare, Facebook …all services concerned with what’s happening right here, right now. The Long Now Foundation and Tau Zero Foundation offer a much-needed sense of perspective.

As with that other great challenge of our time—the alteration of our biosphere through climate change—the first step to confronting the destruction of our collective digital knowledge must be to think in terms greater than the local and the present.