Shockwaves rippled across the web standards community recently when it appeared that Google Chrome was unilaterally implementing a new element called toast. It turns out that’s not the case, but the confusion is understandable.
First off, this all kicked off with the announcement of an “intent to implement”. That makes it sound like Google are intending to, well, …implement this. In fact “intent to implement” really means “intend to mess around with this behind a flag”. The language is definitely confusing, and this is something that will hopefully be addressed.
Secondly, Chrome isn’t going to ship a toast element. Instead, this is a proposal for a custom element currently called std-toast. Should the experiment prove successful, it’s not a foregone conclusion that the final element will be called toast (minus the sexually-transmitted-disease prefix). If this turns out to be a useful feature, there will surely be a discussion between implementers about the naming of the finished element.
This is the ideal candidate for a web component. It makes total sense to create a custom element along the lines of std-toast. At first I was confused about why this was happening inside of a browser instead of first being created as a standalone web component, but it turns out that there’s been a fair bit of research looking at existing implementations in libraries and web components. So this actually looks like a good example of paving an existing cowpath.
But it didn’t come across that way. The timing of announcements felt like this was something that was happening without prior discussion. Terence Eden writes:
It feels like a Google-designed, Google-approved, Google-benefiting idea which has been dumped onto the Web without any consideration for others.
I know that isn’t the case. And I know how many dedicated people have worked hard on this proposal.
To be clear, while I think there is value in minting a native HTML element to fill a defined gap, I am wary of the approach Google has taken. A repo from a new-to-the-industry Googler getting a lot of promotion from Googlers, with Googlers on social media doing damage control for the blowback, WHATWG Googlers handling questions on the repo, and Google AMP strongly supporting it (to reduce its own footprint), all add up to raise alarm bells with those who advocated for a community-driven, needs-based, accessible web.
But my concern wasn’t so much about the nature of the new elements, but of how we learned about them and what that says about how web standardization works.
So there’s a general feeling (outside of Google) that there’s something screwy here about the order of events. A lot of discussion and research seems to have happened in isolation before announcing the intent to implement:
It does not appear that any discussions happened with other browser vendors or standards bodies before the intent to implement.
Why is this a problem? Google is seeking feedback on a solution, not on how to solve the problem.
The extensible web movement focused on exposing low-level APIs to developers: the fetch API, the cache API, custom elements, Houdini, and all of those other building blocks. Layered APIs, on the other hand, focuses on high-level features …like, say, an HTML element for displaying “toast” notifications.
Layered APIs is an interesting idea, but I’m worried that it could be used to circumvent discussion between implementers. It’s a route to unilaterally creating new browser features first and standardising after the fact. I know that’s how many features already end up in browsers, but I think that the sooner that authors, implementers, and standards bodies get a say, the better.
I certainly don’t think this is a good look for Google given the debacle of AMP’s “my way or the highway” rollout. I know that’s a completely different team, but the external perception of Google amongst developers has been damaged by the AMP project’s anti-competitive abuse of Google’s power in search.
Right now, a lot of people are jumpy about Microsoft’s move to Chromium for Edge. My friends at Microsoft have been reassuring me that while it’s always a shame to reduce browser engine diversity, this could actually be a good thing for the standards process: Microsoft could theoretically keep Google in check when it comes to what features are introduced to the Chromium engine.
But that only works if there is some kind of standards process. Layered APIs in general—and std-toast in particular—hint at a future where a single browser vendor can plough ahead on their own. I sincerely hope that’s a misreading of the situation and that this has all been an exercise in miscommunication and misunderstanding.
I hear a lot about how anyone can contribute to the web platform. We’ve all heard the preaching about incubation, the Extensible Web, working in public, paving the cowpaths, and so on. But to an outside observer this feels like Google making all the decisions, in private, and then asking for public comment after the feature has been designed.
I’ve been using Mailchimp for years now to send out a weekly newsletter from The Session. But I never visit the Mailchimp website. Instead, I use the API to create a campaign each week, and then send it out. I also use the API whenever a member of The Session updates their email preferences (or changes their details).
I got an email from Mailchimp that their old API was being deprecated and I’d need to update to their more recent one. The code I was using had been happily running for about seven years, but now I’d have to change it.
Everything went pretty smoothly. I was able to create campaigns, send campaigns, add new subscribers, and delete subscribers. But I ran into an issue when I wanted to update someone’s email address (on The Session, you can edit your details at any time, including your email address).
Here’s the setup:
$MailChimp = new MailChimp('abc123abc123abc123abc123abc123-us1');
$list_id = 'b1234346';
$subscriber_hash = $MailChimp->subscriberHash('firstname.lastname@example.org');
$endpoint = 'lists/'.$list_id.'/members/'.$subscriber_hash;
$result = $MailChimp->patch($endpoint, [
    'email_address' => 'new.address@example.org'
]);
But that doesn’t work. Mailchimp effectively treats email addresses as unique IDs for subscribers. So the only way to change someone’s email address appears to be to delete them, and then subscribe them fresh with the new email address:
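Something like this—a sketch, assuming the same drewm/mailchimp-api wrapper as in the snippet above (the list ID and both addresses are placeholders):
use \DrewM\MailChimp\MailChimp;

$MailChimp = new MailChimp('abc123abc123abc123abc123abc123-us1');
$list_id = 'b1234346';

// Delete the member record for the old address…
$subscriber_hash = $MailChimp->subscriberHash('firstname.lastname@example.org');
$MailChimp->delete('lists/'.$list_id.'/members/'.$subscriber_hash);

// …then subscribe them afresh with the new address.
$result = $MailChimp->post('lists/'.$list_id.'/members', [
    'email_address' => 'new.address@example.org',
    'status' => 'subscribed'
]);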
I mean, I’m not a huge fan of trying to get the damn things to work consistently—thanks, browsers—but I love the fact that they exist (although I’ve come across a worrying number of web developers who weren’t aware of their existence). Print stylesheets are one more example of the assumption-puncturing nature of the web: don’t assume that everyone will be reading your content on a screen. News articles, blog posts, recipes, lyrics …there are many situations where a well-considered print stylesheet can make all the difference to the overall experience.
You know what I don’t like? QR codes!
It’s not because they’re ugly, or because they’ve been over-used by the advertising industry in completely inappropriate ways. No, I don’t like QR codes because they aren’t an open standard. Still, I must grudgingly admit that they’re a convenient way of providing a shortcut to a URL (albeit a completely opaque one—you never know if it’s actually going to take you to the URL it promises or to a Rick Astley video). And now that the parsing of QR codes is built into iOS without the need for any additional application, the barrier to usage is lower than ever.
So much as I might grit my teeth, QR codes and print stylesheets make for good bedfellows.
I picked up a handy tip from a Smashing Magazine article about print stylesheets a few years back. You can use a combination of @media print and generated content to provide a QR code for the URL of the page being printed out. Google’s Chart API provides a really handy shortcut for generating QR codes:
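Something along these lines does the trick—a sketch, where the selector is whatever suits your layout and the URL-encoded chl value would need to be generated for each page (here it’s a placeholder):
/* print only: append a QR code for this page's URL */
@media print {
  article::after {
    content: url(https://chart.googleapis.com/chart?cht=qr&chs=150x150&chl=https%3A%2F%2Fexample.com%2Fcurrent-page%2F);
  }
}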
I’ve been thinking about another potential use for QR codes. I’m preparing a new talk for An Event Apart Seattle. The talk is going to be quite practical—for a change—and I’m going to be encouraging people to visit some URLs. It might be fun to include the biggest possible QR code on a slide.
I’d better generate the images before Google shuts down that API.
I like the robustness that comes with declarative languages. I also like the power that comes with imperative languages. Best of all, I like having the choice.
Take the video and audio elements, for example. If you want, you can embed a video or audio file into a web page using a straightforward declaration in HTML.
<audio src="..." controls><!-- fallback goes here --></audio>
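And if you need more fine-grained control than the element gives you, there’s an imperative route to the same result—a minimal sketch (the file path is a placeholder):
// imperative equivalent of the declarative embed
const audio = new Audio('/path/to/file.mp3');
audio.play();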
Client-side form validation is another good example. For most of us, the HTML attributes—required, type, etc.—are probably enough most of the time.
<input type="email" required />
<input type="geolocation" />
(And just in case you’re thinking of the fallback—which would be for the input element to be rendered as though its type value were text—and you think it’s ludicrous to expect users with non-supporting browsers to enter latitude and longitude coordinates by hand, I direct your attention to input type="color": in non-supporting browsers, it’s rendered as input type="text" and users are expected to enter colour values by hand.)
Anyway, that’s just one example. Like I said, it’s not that I’m in favour of declarative solutions instead of imperative ones; I strongly favour the choice offered by providing declarative solutions as well as imperative ones.
At first, I found the book to be a rollicking good read. It told the sweep of history in an engaging way, backed up with footnotes and references to prime sources. But then the author transitions from relaying facts to taking flights of fancy without making any distinction between the two (the only “tell” is that the references dry up).
Just as Matt Ridley had personal bugbears that interrupted the flow of The Rational Optimist, Yuval Noah Harari has fixated on some ideas that make a mess of the narrative arc of Sapiens. In particular, he believes that the agricultural revolution was, as he describes it, “history’s biggest fraud.” In the absence of any recorded evidence for this, he instead provides idyllic descriptions of the hunter-gatherer lifestyle that have as much foundation in reality as the paleo diet.
When the book avoids that particular historical conspiracy theory, it fares better. But even then, the author seems to think he’s providing genuinely new insights into matters of religion, economics, and purpose, when in fact, he’s repeating the kind of “college thoughts” that have been voiced by anyone who’s ever smoked a spliff.
I know I’m making it sound terrible, and it’s not terrible. It’s just …generally not that great. And when it is great, it only makes the other parts all the more frustrating. There’s a really good book in Sapiens, but unfortunately it’s interspersed with some pretty bad editorialising. I have to agree with Galen Strawson’s review:
Much of Sapiens is extremely interesting, and it is often well expressed. As one reads on, however, the attractive features of the book are overwhelmed by carelessness, exaggeration and sensationalism.
Towards the end of Sapiens, Yuval Noah Harari casts his eye on our present-day world and starts to speculate on the future. This is the point when I almost gave myself an injury with the amount of eye-rolling I was doing. His ideas on technology, computers, and even science fiction are embarrassingly childish and incomplete. And the bad news is that his subsequent books—Homo Deus and 21 Lessons For The 21st Century—are entirely speculations about humanity and technology. I won’t be touching those with all the ten foot barge poles in the world.
In short, although there is much to enjoy in Sapiens, particularly in the first few chapters, I can’t recommend it.
It’s the afternoon of the second day of An Event Apart Seattle and Jason is talking about Designing Progressive Web Apps. These are my notes…
Jason wants to talk about a situation you might find yourself in. You’re in a room and in walks the boss, who says “We need a progressive web app.” Now everyone is asking themselves “What is a progressive web app?” Or maybe “How does the CEO even know about progressive web apps?”
Well, trade publications are covering progressive web apps. Lots of stats and case studies are being published. When executives see this kind of information, they don’t want to get left out. Jason keeps track of this stuff at PWA Stats.
Answering the question “What is a progressive web app?” is harder than it should be. The phrase was coined by Frances Berriman and Alex Russell. They listed ten characteristics that defined progressive web apps. The “linkable” and “progressive” characteristics are the really interesting and new characteristics. We’ve had technologies before (like Adobe Air) that tried to make app-like experiences, but they weren’t really of the web. Progressive web apps are different.
Despite this list of ten characteristics, even people who are shipping progressive web apps find it hard to define the damn thing. The definition on Google’s developer site keeps changing. They reduced the characteristics from ten to six. Then it became “reliable, fast, and engaging.” What does that mean? Craigslist is reliable, fast, and engaging—does that mean it’s a progressive web app?
The technical definition is useful (kudos to me, says Jason): HTTPS, a service worker, and a web app manifest.
If you don’t have those three things, it’s not a progressive web app.
We should definitely use HTTPS if we want to make life harder for the NSA. Also, browser makers are making APIs available only under HTTPS. By July, Chrome will mark HTTP sites as insecure. Every site should be under HTTPS.
Service workers are where the power is. They act as a proxy. They allow us to say what we want to cache, what we want to go out to the network for; things that native apps have been able to do for a while. With progressive web apps we can cache the app shell and go to the network for content. Service workers can provide a real performance boost.
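As a rough illustration, a minimal service worker along those lines might look like this—just a sketch, with placeholder file names for the app shell:
// service-worker.js
const cacheName = 'shell-v1';
const shellFiles = ['/', '/css/styles.css', '/js/app.js'];

addEventListener('install', event => {
  // cache the app shell at install time
  event.waitUntil(
    caches.open(cacheName).then(cache => cache.addAll(shellFiles))
  );
});

addEventListener('fetch', event => {
  // serve from the cache, falling back to the network
  event.respondWith(
    caches.match(event.request).then(response => response || fetch(event.request))
  );
});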
A manifest file is simply a JSON file. It’s short and clear. It lists information about the app: icons, colours, etc.
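A bare-bones example might look like this (the name, colours, and icon path are all placeholders):
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#bada55",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}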
Once you provide those three things, you get benefits. Chrome and Opera on Android will prompt to add the app to the home screen.
So that’s what’s required for progressive web apps, but there’s more to them than that (in the same way there’s more to responsive web design than the three requirements in the baseline definition).
The hype around progressive web apps can be a bit of a turn-off. It certainly was for Jason. When he investigated the technologies, he wondered “What’s the big deal?” But then he was on a panel at a marketing conference, and everyone was talking about progressive web apps. People’s expectations of what you could do on the web really hadn’t caught up with what we can do now, and the phrase “progressive web app” gives us a way to encapsulate that. As Frances says, the name isn’t for us; it’s for our boss or marketer.
Should you have a progressive web app? Well, if you have a website, then the answer is almost certainly “Yes!” If you make money from that website, the answer is definitely “Yes!”
But there’s a lot of FUD around progressive web apps. It brings up the tired native vs. web battle. Remember though that not 100% of your users or customers have your app installed. And it’s getting harder to convince people to install apps. The average number of apps installed per month is zero. But your website is often a customer’s first interaction with your company. A better web experience can only benefit you.
Often, people say “The web can’t do…” but a lot of the time that information is out of date. There are articles out there with outdated information. One article said that progressive web apps couldn’t access the camera, location, or the fingerprint sensor. Yet look at Instagram’s progressive web app: it accesses the camera. And just about every website wants access to your location these days. And Jason knows you can use your fingerprint to buy things on the web because he accidentally bought socks when he was trying to take a screenshot of the J.Crew website on his iPhone. So the author of that article was just plain wrong. The web can do much more than we think it can.
Another common objection is “iOS doesn’t support progressive web apps”. Well, as of last week that is no longer true. But even when that was still true, people who had implemented progressive web apps were seeing increased conversion even on iOS. That’s probably because, if you’ve got the mindset for building a progressive web app, you’re thinking deeply about performance. In many ways, progressive web apps are a trojan horse for performance.
These are the things that people think about when it comes to progressive web apps:
Making it feel like an app
Installation and discovery
Offline functionality
Push notifications
Beyond progressive web apps
Making it feel like an app
What is an app anyway? Nobody can define it. Once again, Jason references my posts on this topic (how “app” is like “obscenity” or “brunch”).
A lot of people think that “app-like” means making it look native. But that’s a trap. Which operating system will you choose to emulate? Also, those design systems change over time. You should define your own design. Make it an exceptional experience regardless of OS.
It makes more sense to talk in terms of goals…
Goal: a more immersive experience.
Possible solution: removing the browser chrome and going fullscreen?
You can define this in the manifest file. But as you remove the browser chrome, you start to lose things that people rely on: the back button, the address bar. Now you have to provide that functionality. If you move to a fullscreen application you need to implement sharing, printing, and the back button (and managing browser history is not simple). Remember that not every customer will add your progressive web app to their home screen. Some will have browser chrome; some won’t.
Goal: a fast fluid experience.
Possible solution: use an app shell model.
You want smooth pages that don’t jump around as the content loads in. The app shell makes things seem faster because something is available instantly—it’s perceived performance. Basically you’re building a single page application. That’s a major transition. But thankfully, you don’t have to do it! Progressive web apps don’t have to be single page apps.
Goal: an app with personality.
Possible solution: Animated transitions and other bits of UI polish.
Really, it’s all about delight.
Installation and discovery
In your manifest file you can declare a background colour for the startup screen. You can also declare a theme colour—it’s like you’re skinning the browser chrome.
You can examine the manifest files for a site in Chrome’s dev tools.
Once you’ve got a progressive web app, some mobile browsers will start prompting users to add it to their home screen. Firefox on Android displays a little explainer the first time you visit a progressive web app. Chrome and Opera have add-to-homescreen banners which are a bit more intrusive. The question of when they show up keeps changing. They use a heuristic to decide this. The heuristic has been changed a few times already. One thing you should consider is suppressing the banner until it’s an optimal time. Flipkart do this: they only allow it on the order confirmation page—the act of buying something makes it really likely that someone will add the progressive web app to their home screen.
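In Chrome, suppressing and deferring the banner looks something like this—a sketch using the beforeinstallprompt event (when you trigger the deferred prompt is up to you; the order-confirmation hook here is hypothetical):
let deferredPrompt;

window.addEventListener('beforeinstallprompt', event => {
  event.preventDefault(); // suppress the automatic banner
  deferredPrompt = event; // stash the event for later
});

// later, at an optimal moment (an order confirmation page, say)
function showInstallPrompt() {
  if (deferredPrompt) {
    deferredPrompt.prompt();
    deferredPrompt = null;
  }
}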
What about app stores? We don’t need them for progressive web apps—they’re on the web. But Microsoft is going to start adding progressive web apps to their app store. They’ve built a site called PWA Builder to help you with your progressive web app.
On the Android side, there’s Trusted Web Activity which is kind of like PhoneGap—it allows you to get a progressive web app into the Android app store.
But remember, your progressive web app is your website so all the normal web marketing still applies.
A lot of organisations say they have no need for offline functionality. But everyone has a need for some offline capability. At the very least, you can provide a fallback page, like Trivago’s offline maze game.
You can cache content that has been recently viewed. This is what Jason does on the Cloud Four site. They didn’t want to make any assumptions about what people might want, so they only cache pages as people browse around the site.
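In a service worker, that pattern might look like this sketch—go to the network, keep a copy of each page as it’s viewed, and fall back to the cache when offline (the cache name is a placeholder):
addEventListener('fetch', event => {
  if (event.request.method !== 'GET') return;
  event.respondWith(
    fetch(event.request)
      .then(response => {
        // stash a copy of each successfully fetched page
        const copy = response.clone();
        caches.open('pages-v1').then(cache => cache.put(event.request, copy));
        return response;
      })
      .catch(() => caches.match(event.request)) // offline: serve what we have
  );
});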
If you display cached information, you might want to display how stale the information is e.g. for currency exchange rates.
Another option is to let people choose what they want to keep offline. The Financial Times does this. They also pre-cache the daily edition.
If you have an interactive application, you could queue tasks and then carry them out when there’s a connection.
Or, like Slack does, don’t let people write something if they’re offline. That’s better than letting someone write something and then losing it.
Workbox is a handy library for providing offline functionality.
There are third-party push notification services that take care of a lot of this for you. Jason has used OneSignal.
Remember that people are really annoyed by push notifications. Don’t ask for permission immediately. Don’t ask someone to marry you on a first date. On Cloud Four’s blog, they only prompt after the user has read an article.
Twitter’s progressive web app does this really well. It’s so important that you do this well: if a user says “no” to your push notification permission request, you will never be able to ask them again. There used to be three options on Chrome: allow, block, or close. Now there are just two: allow or block.
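The safest pattern is to tie the permission request to an explicit action once someone has shown interest—a sketch (the button is hypothetical):
const button = document.querySelector('#enable-notifications');

button.addEventListener('click', () => {
  Notification.requestPermission().then(permission => {
    if (permission === 'granted') {
      // now it's safe to subscribe this person to push messages
    }
  });
});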
Beyond progressive web apps
There are a lot of APIs that aren’t technically part of progressive web apps but get bundled in with them. Like the Credential Management API or the Payment Request API (which is converging with Apple Pay).
So how should you plan your progressive web app launch? Remember it’s progressive. You can keep adding features. Each step along the way, you’re providing value to people.
Start with some planning and definition. Get everyone in a room and get a common definition of what the ideal progressive web app would look like. Remember there’s a continuum of features for all five of the things that Jason has outlined here.
Benchmark your existing site. It will help you later on.
Assess your current website. Is the site reasonably fast? Is it responsive? Fix those usability issues first.
Next, do the baseline. Switch to HTTPS. Add a manifest file. Add a service worker. Apart from the HTTPS switch, this can all be done on the front end. Don’t wait for all three: ship each one when they’re ready.
Then do front-end additions: pre-caching pages, for example.
Finally, there are the larger initiatives (with more complex APIs). This is where your initial benchmarking really pays off. You can demonstrate the value of what you’re proposing.
Every step on the path to a progressive web app makes sense on its own. Figure out where you want to go and start that journey.
I saw Ruth give a fantastic talk on the Web Audio API at CSS Day this year. It had just the right mixture of code and inspiration. I decided there and then that I’d have to find some opportunity to play around with web audio.
As ever, my own website is the perfect playground. I added an audio Easter egg to adactio.com a while back, and so far, no one has noticed. That’s good. It’s a very, very silly use of sound.
In her talk, Ruth emphasised that the Web Audio API is basically just about dealing with numbers. Lots of the examples of nice usage are the audio equivalent of data visualisation. Data sonification, if you will.
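That’s the approach I’ve taken with the sparklines on this site. The gist is something like this—a simplified sketch rather than the actual implementation, with a placeholder data array: each data point becomes a frequency for an oscillator.
const points = [3, 1, 4, 1, 5, 9, 2, 6]; // a sparkline's data points
const audioContext = new AudioContext();
const oscillator = audioContext.createOscillator();
oscillator.connect(audioContext.destination);

points.forEach((point, index) => {
  // map each value to a pitch, stepping through the points over time
  oscillator.frequency.setValueAtTime(100 + (point * 50), audioContext.currentTime + (index / 10));
});

oscillator.start();
oscillator.stop(audioContext.currentTime + (points.length / 10));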
It sounds terrible. It’s like a theremin with hiccups.
Still, I kind of like it. I mean, I wish it sounded nicer (and I’m open to suggestions on how to achieve that—feel free to fork the code), but there’s something endearing about hearing a month’s worth of activity turned into a wobbling wave of sound. And it’s kind of fun to hear how a particular tag is used more frequently over time.
Anyway, it’s just a silly little thing, but anywhere you spot a sparkline on my site, you can tap it to hear it translated into sound.
I just can’t get excited about the prospect of building something for any particular operating system, be it desktop or mobile. I think about the potential lifespan of what would be built and end up asking myself “why bother?” If something isn’t on the web—and of the web—I find it hard to get excited about it. I’m somewhat jealous of people who can get equally excited about the web, native, hardware, print …in my mind, if it hasn’t got a URL, it’s missing some vital spark.
I know that this is a problem, but I can’t help it. At the very least, I have enough presence of mind to recognise it as being my problem.
Given these unreasonable feelings of attachment towards the web, you might expect me to wish it to become the one technology to rule them all. But I’ve never felt that any such victory condition would make sense. If anything, I’ve always been grateful for alternative avenues of experimentation and expression.
When Flash was a thriving ecosystem for artists to push the boundaries of what was possible to deliver to a web browser, I never felt threatened by it. I never wished for web technologies to emulate those creations. Don’t get me wrong: I’m happy that we’ve got nice smooth animations in CSS, but I never thought the lack of animation was crippling the web’s potential.
Now we have native technologies that can do more than the web can do. iOS and Android apps can access device APIs that web browsers can’t (yet). And, once again, while I look forward to the day that websites will be able to do all the things that native apps can do today, I don’t think that the lack of those capabilities is dooming the web to irrelevance.
There will always be some alternative that is technologically more advanced than the web. First there were CD-ROMs. Then we had Flash. Now we have native apps. Each one of those platforms offered more power and functionality than you could get from a web browser. And yet the web persists. That’s because none of the individual creations made with those technologies could compete with the collective power of all of the web, hyperlinked together. A single native app will “beat” a single website every time …but an app store pales when compared to the incredible reach and scope of the entire World Wide Web.
The web will always be lagging behind some other technology. I’m okay with that. If anything, I see these other technologies as the research and development arm of the web. CD-ROMs, Flash, and now native apps show us what authors want to be able to do on the web. Slowly but surely, those abilities start becoming available in web browsers.
The pace of this standardisation can seem infuriatingly slow. Sometimes it is too slow. But it’s important that we get it right—the web should hold itself to a higher standard. And so the web plays the tortoise while other technologies race ahead as the hare.
Like I said, I’m okay with that. I’m okay with the web not being as advanced as some other technology stack at any particular moment. I can wait.
In fact, as PPK points out, we could do real damage to the web by attempting to make it mimic some platform that’s currently in the ascendant. I disagree with his framing of it as a battle—rather than conceding defeat, I see it more like waiting out a siege—but I agree completely with this assessment:
The web cannot emulate native perfectly, and it never will.
If we accept that, then we can play to the web’s strengths (while at the same time, playing a slow game of catch-up behind the scenes). The danger comes when we try to emulate the capabilities of something that isn’t the web:
Emulating native leads to bad UX (or, at least, a UX that’s clearly a sub-optimal copy of native UX).
Whenever a website tries to emulate something from an operating system—be it desktop or mobile—the result is invariably something that gets really, really close …but falls just a little bit short. It feels like entering an uncanny valley of interaction design.
I think you make what I call “bicycle bear websites.” Why? Because my response to both is the same.
“Listen bub,” I say, “it is very impressive that you can teach a bear to ride a bicycle, and it is fascinating and novel. But perhaps it’s cruel? Because that’s not what bears are supposed to do. And look, pal, that bear will never actually be good at riding a bicycle.”
This is how I feel about so many of the fancy websites I see. “It is fascinating that you can do that, but it’s really not what a website is supposed to do.”
It’s time to recognise that this is the wrong approach. We shouldn’t try to compete with native apps in terms set by the native apps. Instead, we should concentrate on the unique web selling points: its reach, which, more or less by definition, encompasses all native platforms, URLs, which are fantastically useful and don’t work in a native environment, and its hassle-free quality.
I think PPK, Cennydd, and I are all in broad agreement, but we almost certainly differ in the details. PPK, for example, argues that maybe news sites should be native apps instead, but for me, those are exactly the kind of sites that benefit from belonging to no particular platform. And when Cennydd talks about applications on the web, it raises the whole issue of what constitutes a web app anyway. If we’re talking about having access to device APIs—cameras, microphones, accelerometers—then yes, native is the way to go. But if we’re talking about interface elements and motion design, then I think the web can hold its own …sometimes.
Of course not every web browser can match the capabilities of a native app—that’s why it’s so important to approach web development through the lens of progressive enhancement rather than treating it like software development no different than that of native platforms. The web is not a platform—that’s the whole point of the web; it’s cross-platform. As Baldur put it:
Treating the web like another app platform makes sense if app platforms are all you’re used to. But doing so means losing the reach, adaptability, and flexibility that makes the web peerless in both the modern media and software industries.
The price we pay for that incredible cross-platform reach is that features on the web will always be lagging behind, and even when they do arrive, they won’t be available in all web browsers.
To paraphrase William Gibson: capabilities on the web will always be here, but they will never be evenly distributed.
But let’s take a step back from the surface-level differences between web and native. Just as happened with CD-ROMs and Flash, the web is catching up with native when it comes to motion design, visual feedback, and gestures like swiping and dragging. I don’t think those are where the fundamental differences lie. I don’t even think the fundamental differences lie in accessing device APIs like cameras, microphones, and offline storage—the web is (slowly) catching up in those areas too.
What if the fundamental differences lie deeper than the technical implementation? What if the web is suited to some things more than others, not because of technical limitations, but because of philosophical mismatches?
The web was born at CERN, an amazing environment that’s free of many of the economic and hierarchical pressures that shape technology decisions elsewhere. The web’s heritage as a hypertext document sharing system for pure scientific research is often treated as a handicap, something that must be overcome in this age of applications and monetisation. But I see this heritage as a feature, not a bug. It promotes ideals of universal access above individual convenience, creation above consumption, and sharing above financial gain.
For web development to grow as a craft and as an industry, we have to follow the money. Without money the craft becomes a hobby and unmaintained software begins to rot.
But I think there’s a danger here. If we allow the web to be led by money-making, we may end up changing the fundamental nature of the web, and not for the better.
Now, personally, I believe that it’s entirely possible to run a profitable business on the web. There are plenty of them out there. But suppose we allow that other avenues are more profitable. Let’s assume that there’s more friction in making money on the web than there is in, say, making money on iOS (or Android, or Facebook, or some other monolithic stack). If that were the case …would that be so bad?
Suppose, to use PPK’s phrase, we “concede defeat” to Apple, Google, Microsoft, and Facebook. When you think about it, it makes sense that platforms borne from profit-driven companies are going to be better at generating profit than something created by a bunch of idealistic scientists trying to improve the knowledge of the human race. Suppose we acknowledged that the web isn’t that well-suited to capitalism.
I think I’d be okay with that.
Would the web become little more than a hobbyist’s playground? A place for amateurs rather than professional businesses?
I’d be okay with that too.
Y’see, what attracted me to the web—to the point where I have this blind spot—wasn’t the opportunity to make money. What attracted me to the web was its remarkable ability to allow anyone to share anything, not just for the here and now, but for the future too.
If you’ve been reading my journal or following my links for any time, you’ll be aware that two of my biggest interests are progressive enhancement and digital preservation. In my mind, these two things are closely intertwingled.
For me, progressive enhancement is a means of practicing universal design, a way of providing access to as many people as possible. That includes access across time, hence the crossover with digital preservation. I’ve noticed again and again that what’s good for accessibility is also good for longevity, and vice versa.
Whenever the ephemerality of the web is mentioned, two opposing responses tend to surface. Some people see the web as a conversational medium, and consider ephemerality to be a virtue. And some people see the web as a publication medium, and want to build a “permanent web” where nothing can ever disappear.
I don’t want a web where “nothing can ever disappear” but I also don’t want the default lifespan of a resource on the web to be ephemeral. I think that whoever published that resource should get to decide how long or short its lifespan is. The problem, as Maciej points out, is in the mismatch of expectations:
I’ve come to believe that a lot of what’s wrong with the Internet has to do with memory. The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.
I completely agree with Bret’s woeful assessment of the web when it comes to link rot:
It is this common record of public thought — the “great conversation” — whose stability and persistence is crucial, both for us alive today and for those who will come after.
I believe we can and should do better. But I completely and utterly disagree with him when he says:
Photos from your friend’s party are not part of the common record.
Nor are most casual conversations. Nor are search histories, commercial transactions, “friend networks”, or most things that might be labeled “personal data”. These are not deliberate publications like a bound book; they are not intended to be lasting contributions to the public discourse.
We can agree when it comes to search histories and commercial transactions, but it makes no sense to lump those in with the ordinary plenty that I’ve written about before:
My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved …along with all the millions of other words by millions of other historical nobodies like me out there on the web.
For me, this lies at the heart of what the web does. The web removes the need for tastemakers who get to decide what gets published. The web removes the need for gatekeepers who get to decide what gets saved.
Other avenues of expressions will always be more powerful than the web in the short term: CD-ROMs, Flash, and now native. But they all come with gatekeepers. The collective output of the human race—from the most important scholarly papers to the most trivial blog post—is too important to put in the hands of the gatekeepers of today who may not even be around tomorrow: Apple, Google, Microsoft, et al.
The web has no gatekeepers. The web has no quality control. The web is a mess. The web is for everyone.
The supersmart Scott Jenson just gave a talk at The Web Is in Cardiff, which was, by all accounts, excellent. I wish I could have seen it, but I’m currently chilling out in Florida and I haven’t mastered the art of bilocation.
In it, he takes to task the idea that—through progressive enhancement—you should be able to offer all functionality to all browsers, thereby foregoing the use of newer technologies that aren’t universally supported.
If that were what progressive enhancement meant, I’d be with him all the way. But progressive enhancement is not about offering all functionality; progressive enhancement is about making sure that your core functionality is available to everyone. Everything after that is, well, an enhancement (the clue is in the name).
The trick to doing this well is figuring out what is core functionality, and what is an enhancement. There are no hard and fast rules.
Sometimes it’s really obvious. Web fonts? They’re an enhancement. Rounded corners? An enhancement. Gradients? An enhancement. Actually, come to think of it, all of your CSS is an enhancement. Your content, on the other hand, is not. That should be available to everyone. And in the case of task-based web thangs, that means the fundamental tasks should be available to everyone …but you can still layer more tasks on top.
If you’re building an e-commerce site, then being able to add items to a shopping cart and being able to check out are your core tasks. Once you’ve got that working with good ol’ HTML form elements, then you can go crazy with your enhancements: animating, transitioning, swiping, dragging, dropping …the sky’s the limit.
Scott asked about building a camera app with progressive enhancement:
Snarky Question: How are you supposed to ‘progressively enhance’ an HTML camera app? Show puppies? Not everything devolves to simple markup
Here again, the real question to ask is “what is the core functionality?” Building a camera app is a means to an end, not the end itself. You need to ask what the end goal is. Perhaps it’s “enable people to share photos with their friends.” Going back to good ol’ HTML, you can accomplish that task with:
<input type="file" accept="image/*">
Now that you’ve got that out of the way, you can spend the majority of your time making the best damn camera app you can, using all the latest browser technologies. (Perhaps WebRTC? Maybe use a canvas element to display the captured image data and apply CSS filters on top?)
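For instance, the enhanced version might start with something like this sketch—if getUserMedia is available, swap the file input for a live camera feed (the element IDs are placeholders):
const fileInput = document.querySelector('#photo-input');
const video = document.querySelector('#camera-preview');

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(stream => {
      fileInput.hidden = true;  // hide the fallback input
      video.hidden = false;
      video.srcObject = stream; // show the live camera feed
      video.play();
    })
    .catch(() => {
      // permission denied or no camera: the file input still works
    });
}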
My point is that not everything devolves to content. Sometimes the functionality is the point.
I agree wholeheartedly. In fact, I would say that even in the case of “content” sites, functionality is still the point—the functionality would be reading/hearing/accessing content. But I think that Scott is misunderstanding progressive enhancement if he thinks it means providing all the functionality that one can possibly provide.
@jgarber@mjacksonw Right. Lots of cool features on the Boston Globe don’t work when JS breaks; “reading the news” is not one of them.
What I’m chafing at is the belief that when a page is offering specific functionality—let’s say a camera app or a chat app—what does it mean to progressively enhance it?
Again, a realtime chat app is a means to an end. What is it enabling? The ability for people to talk to each other over the web? Okay, we can do that using good ol’ HTML—text and form elements—with full page refreshes. That won’t be realtime. That’s okay. The realtime part is an enhancement. Use Web Sockets and WebRTC (in the browsers that support them) to provide the realtime experience. But everyone gets the core functionality.
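That enhancement might look something like this sketch—let a plain form do the talking, and only hijack the submission once a WebSocket connection is open (the endpoint and field name are placeholders):
const form = document.querySelector('#chat-form');

if ('WebSocket' in window) {
  const socket = new WebSocket('wss://chat.example.com/');
  form.addEventListener('submit', event => {
    if (socket.readyState === WebSocket.OPEN) {
      event.preventDefault(); // skip the full page refresh
      socket.send(form.elements.message.value);
      form.reset();
    }
    // otherwise the form submits normally: core functionality intact
  });
}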
Like I said, the trick is figuring out what’s core functionality and what’s an enhancement.
@scottjenson That was how we approached @geteditorially’s rich editor, anyway: start with a textarea, layer on functionality from there.
If progressive enhancement truly meant making all functionality available to everyone, then it would be unworkable. I think that’s a common misconception around progressive enhancement; there’s this idea that using progressive enhancement means that you’re going to spend all your time making stuff work in older browsers. In fact, it’s the exact opposite. As long as you spend a little bit of time at the start making sure that the core functionality works with good ol’ fashioned HTML, then you can spend most of your time trying out the latest and greatest browser technologies.
As Orde put it:
For us, building with Progressive Enhancement moves almost all of our development time and costs to newer browsers, not older ones.
Progressive Enhancement frees us to focus on the costs of building features for modern browsers, without worrying much about leaving anyone out. With a strongly qualified codebase, older browser support comes nearly for free.
Approaching browser support this way requires a different way of thinking. For everything you’re building, you need to ask “is this core functionality, or is it an enhancement?” and build accordingly. It takes a bit of getting used to, but it gets easier the more you do it (until, after a while, it becomes second nature).
But if you’re thinking about progressive enhancement as “devolving” down—as Scott Jenson describes in his post—then I think you’re on the wrong track. Instead it’s about taking care of the core functionality quickly and then spending your time “enhancing” up.
Shouldn’t we be allowed to experiment? Isn’t it reasonable to build things that push the envelope?
Absolutely! And the best and safest way to do that is to make sure that you’re providing your core functionality for everyone. Once you do that, you can go nuts with the latest and greatest experimental envelope-pushing technologies, secure in the knowledge that you don’t even need to worry about the fact that they don’t work in older browsers. Geolocation! Offline storage! Device APIs! Anything you can think of, you can use as a powerful enhancement on top of your core tasks.
Once you realise this, it’s immensely liberating to use progressive enhancement. You can have the best of both worlds: universal access to core functionality, combined with all the latest cutting-edge technology too.
Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.
Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.
Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.
So what happened?
I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals: the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.
No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.
In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”
Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites just didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, you built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.
Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.
I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.
But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.
Once Facebook had proven that it was possible to be the one-stop-shop for your user’s every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.
Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.
Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and, if you had a link back to your own site, it would include a rel="me" attribute. Not any more.
Then there’s RSS.
During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.
Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
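That parse-once-parse-them-all quality is easy to demonstrate—a sketch using DOMParser (the feed URL is a placeholder):
fetch('https://example.com/feed.rss')
  .then(response => response.text())
  .then(text => {
    const feed = new DOMParser().parseFromString(text, 'text/xml');
    feed.querySelectorAll('item').forEach(item => {
      // every feed shares the same basic shape
      console.log(item.querySelector('title')?.textContent);
      console.log(item.querySelector('link')?.textContent);
      console.log(item.querySelector('pubDate')?.textContent);
    });
  });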
Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.
That has changed.
Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, ‘though it’s probably only a matter of time before that gets shut down by Twitter.
Jo’s feelings about Twitter’s anti-RSS policy mirror my own:
I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.
That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.
The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”
RSS is used a lot …but much of the usage is invisible:
RSS is plumbing. It’s used all over the place but you don’t notice it.
If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.
He points out the many strengths of RSS, including its decentralisation:
It’s anti-monopolist. By design it creates a level playing field.
How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.
I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.
In a post called The True Web, Robin Sloan reiterates the strength of RSS:
It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.
I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).
Returning to Anil’s description of the web we lost:
We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.
I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.
Twitter has come in for a lot of (justifiable) criticism for changes to its API that make it somewhat developer-hostile. But it has to be said that developers don’t always behave responsibly when they’re using the API.
The classic example of this is the granting of permissions. James summed it up nicely: it’s just plain rude to ask for write-access to my Twitter account before I’ve even started to use your service. I could understand it if the service needed to post to my timeline, but most of the time these services claim that they want me to sign up via Twitter so that I can find my friends who are also using the service — that doesn’t require write access. Quite often, these requests to authenticate are accompanied by reassurances like “we’ll never tweet without your permission” …in which case, why ask for write-access in the first place?
To be fair, it used to be a lot harder to separate out read and write permissions for Twitter authentication. But now it’s actually not that bad, although it’s still not as granular as it could be.
One of the services that used to require write-access to my Twitter account was Lanyrd. I gave it permission, but only because I knew the people behind the service (a decision-making process that doesn’t scale very well). I always felt uneasy that Lanyrd had write-access to my timeline. Eventually I decided that I couldn’t in good conscience allow the lovely Lanyrd people to be an exception just because I knew where they lived. Fortunately, they concurred with my unease. They changed their log-in system so that it only requires read-access. If and when they need write-access, that’s the point at which they ask for it:
We now ask for read-only permission the first time you sign in, and only ask to upgrade to write access later on when you do something that needs it; for example following someone on Twitter from our attendee directory.
Far too many services ask for write-access up front, without providing a justification. When asked for an explanation, I’m sure most of them would say “well, that’s how everyone else does it”, and they would, alas, be correct.
What’s worse is that users grant write-access so freely. I was somewhat shocked by the number of tech-savvy friends who unwittingly spammed my timeline with automated tweets from a service called Twitter Counter. Their reactions ranged from sheepish to embarrassed to angry.
I urge you to go through your Twitter settings and prune any services that currently have write-access that don’t actually need it. You may be surprised by the sheer volume of apps that can post to Twitter on your behalf. Do you trust them all? Are you certain that they won’t be bought up by a different, less trustworthy company?
If a service asks me to sign up but insists on having write-access to my Twitter account, it feels like being asked out on a date while insisting I sign a pre-nuptial agreement. Not only is it somewhat premature, it shows a certain lack of respect.
Not every service behaves so ungallantly. Done Not Done, 1001 Beers, and Mapalong all use Twitter for log-in, but none of them require write-access up-front.
Branch and Medium are typical examples of bad actors in this regard. The core functionality of these sites has nothing to do with posting to Twitter, but both sites want write-access so that they can potentially post to Twitter on my behalf later on. I know that I won’t ever want either service to do that. I can either trust them, or not use the service at all. Signing up without granting write-access to my Twitter account isn’t an option.
In the case of Branch, Medium, and many other services, Twitter authentication is the only way to sign up and start using the service. Using a username and password isn’t an option. On the face of it, requiring Twitter for authentication doesn’t sound all that different to requiring an email address for authentication. But demanding write-access to Twitter is the equivalent of demanding the ability to send emails from your email address.
The way that so many services unnecessarily ask for write-access to Twitter—and the way that so many users unquestioningly grant it—reminds me of the password anti-pattern all over again. Because this rude behaviour is so prevalent, it has now become the norm. If we want this situation to change, we need to demand more respect.
The next time that a service demands unwarranted write-access to your Twitter account, refuse to grant it. Then tell the people behind that service why you’re refusing to sign up.
It’s the same format that powers Firefox’s search providers. If you visit Huffduffer with Firefox and click in the browser’s search field, you’ll see the option to “Add Huffduffer” to the list of search engines.
I can’t imagine many people will actually do that but still, no harm in providing the option.
There are many reasons to go to South by Southwest Interactive: meeting up with friends old and new being the primary one. Then there’s the motivational factor. I always end up feeling very inspired by what I see.
This year, that feeling of inspiration was front and centre. First off, I tried to impart some of it on the How to Rawk SXSW panel, which was a lot of fun. Mind you, I did throw some shit at the fan by demonstrating how wasteful the overstuffed schwag bags are. I hope I didn’t get MJ into trouble.
My other public appearance was on The Heather Gold Show which was bags of fun. With a theme of Get Excited and Make Things, the topic of inspiration was bandied about a lot. It was a blast. Heather is a superb host and the other guests were truly inspirational. I discovered a kindred spirit in fellow excitable geek, Gina Trapani.
The actual panels and presentations at SXSW are the usual mixture of hit and miss, although the Cooking For Geeks presentation was really terrific. Any presenter who hacks the audience’s taste buds during a presentation is alright with me.
But by far the most inspirational thing I’ve seen was a panel hosted by Tantek on Open Science. The subject matter was utterly compelling and the panelists were ludicrously articulate and knowledgeable:
I was struck by the sheer volume of scientific data and APIs out there now. And yet, we aren’t really making use of it. Why aren’t we making mashups using Google Mars? Why haven’t I built a Farmville-style game with Google Moon?
Halfway through the panel, I turned to Riccardo and whispered, “We should organise a Science Hack Day.”
I’m serious. It would probably be somewhere in London. I have no idea where or when. I have no idea how to get a venue or sponsors. But maybe you do.
What do you think? Everyone I’ve mentioned the idea to so far seems pretty excited about it. I’ll try to set up a wiki for brainstorming venues, sponsors, APIs, datasets and all that stuff. In the meantime, feel free to leave a comment here.
I got excited. Now I want to make things …with science! Are you with me?
We have some new location-centric toys to play with. Let the hacking commence.
Flickr has released its shapefiles dataset for free (as in beer, as in it would be nice if you mentioned where you got the free beer). These shapefiles are polygons that have been generated by the action of humans correcting suggested place names for geotagged photos. Tom put this data to good use with his neighbourhood boundaries app.
Speaking of excellent location-driven creations by Tom, be sure to check out Clarke, a little OS X app that updates your Fire Eagle location every five minutes by triangulating your position with Skyhook.
The mighty Zeldman has written a thought-provoking piece called The Vanishing Personal Site which chronicles the changing nature of personal publishing. Where once we had a central URL that defined our online presence, people are increasingly publishing in fragments distributed across services like Twitter, Pownce, Flickr and Magnolia. It was this fragmentation that spurred my first dabblings with APIs to produce Adactio Elsewhere, which I launched three years ago to the day.
Jeff takes a different approach by incorporating all of those other publishing points directly back into his site rather than into a separate aggregation area. This approach seems to be gaining ground.
One of the comments on Jeffrey’s post points to the newly launched website of the architect Denna Jones, built in part by Jon Tan, who describes the thinking behind it. The site is driven entirely by third-party services like Tumblr, Del.icio.us and Flickr. Jon, by contrast, has his own third-party publishing aggregated on a page called Asides, similar to Adactio Elsewhere.
I think most people, even if they are micro-publishing in many places, still have one URL that they consider their online representation. It might be a blog, it might be a Flickr profile, or for many people, it might be a Facebook account.
It will be interesting to watch these trends develop. Something else I’m going to watch is Jon Tan’s website. It’s dripping with gorgeous typography wrapped in an elastic layout. How is it that I haven’t come across this site before? Why wasn’t I informed?
I’ve used del.icio.us for quite a while now. I’m storing 1159 bookmarks, each one of them tagged. It works just fine but it also feels a little, I don’t know …stale. There is supposedly a redesign in the works but I’m not sure that I want to wait around any longer to find out if they’re finally going to put some microformats in the markup.
Instead, I’m moving over to Magnolia. I’ve had a Magnolia account for years but I’ve never really used it. I didn’t see the point while I had a del.icio.us account. But whereas del.icio.us appears stagnant, Magnolia seems to be constantly innovating. Also, it uses microformats. Then there’s the fact that I know Larry and I’ve briefly met Todd (lovely gents, both) but I don’t know Joshua Schachter. That shouldn’t matter but it kind of does.
I’ve updated my FeedBurner RSS feed to point it at my Magnolia links instead of my del.icio.us links. If you were subscribed to my del.icio.us feed separately, you’ll probably want to update your feedreader to point to my Magnolia links instead.
It remains to be seen whether I’ll stay at Magnolia. Even though it is functionally and cosmetically superior to del.icio.us, that might not be enough. After all, Jaiku is superior to Twitter in almost every way—design, markup, reliability—but Twitter still wins. That’s mostly because that’s where all my friends are. Right now my bookmarking friends are split fairly evenly between del.icio.us and Magnolia. Then again, I’ve never really made much use of the “social” part of “social bookmarking”.
So who knows? Maybe I’ll end up moving back to del.icio.us at some stage. It’s reassuring to know that moving my data around between these services is pretty straightforward: I can export from Magnolia and import into del.icio.us any time I want.
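The export really is just one request. Here’s a rough sketch, assuming the del.icio.us v1 API with HTTP Basic authentication (the credentials are placeholders):

    // Fetch every bookmark as XML from the del.icio.us v1 API.
    $context = stream_context_create(array('http' => array(
        'header' => 'Authorization: Basic ' . base64_encode('username:password'),
    )));
    $xml = file_get_contents('https://api.del.icio.us/v1/posts/all', false, $context);
    // Keep a local copy, ready for importing somewhere else.
    file_put_contents('delicious-backup.xml', $xml);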
Brave representatives from Facebook, Plaxo, Twitter, LinkedIn, Dopplr and Pownce showed up to be named and shamed (though most of the shame was reserved for Google for not providing an API for contacts).
To be honest, the impression I got from Google was that I shouldn’t hold my breath, but now that they’ve stepped up to the plate and provided an API, there’s really no excuse for websites to ask users to enter their Gmail username and password. The API uses AuthSub for now, but Kevin announced at SXSW that it will support OAuth at some unspecified future date.
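The handshake itself is refreshingly simple. Here’s a hypothetical sketch of the first step in PHP; the scope is the Contacts feed and the callback URL is a placeholder:

    // Instead of collecting a Gmail password, send the user to Google.
    // Google asks them to grant access to their contacts, then returns
    // them to the $next URL with a token appended.
    $scope = urlencode('https://www.google.com/m8/feeds/');     // Contacts feed
    $next  = urlencode('http://example.com/contacts/callback'); // placeholder
    header('Location: https://www.google.com/accounts/AuthSubRequest'
         . '?scope=' . $scope . '&next=' . $next . '&session=1');
    exit;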
Within 24 hours of the Contact Data API’s release, Dopplr had already removed the username/password form and implemented the handshake authentication instead. Bravo, Matt!
So who’s going to be next? Place your bets now. Here are my nominations for the next contenders:
The day that I was flying to San Francisco, Simon and Nat were flying to New Zealand for Kiwi Foo and Webstock so we shared a bus to Heathrow. They both looked knackered because they had attempted to “get on New Zealand time” by staying up all night. We parted at the airport: See you in Austin, I said. Good luck decentralising the social graph, he replied.
Since arriving in San Francisco, I’ve spent most of my time trying to meet up with as many people as possible. A hastily-convened microformats/geek dinner helped to accomplish that.
Now I’m in Sebastopol for the SG Foo Camp. The letters SG stand for Social Graph, which is unfortunate—I’m not a big fan of that particularly techy-sounding term. That said, I’m really looking forward to hearing more from Brad Fitzpatrick about the new Social Graph API from Google. It isn’t the first XFN parser but it’s the only one with Google’s infrastructure. The data returned from spidering my XFN links is impressive, but the fact that it can also return inbound links is even more so, although those queries take significantly longer and often time out.
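The lookup itself is nothing more than a URL with a couple of query parameters. A rough sketch, as I understand the API, with edo and edi toggling outbound and inbound connections:

    // Look up one URL in the Social Graph API, asking for both the
    // outbound connections (edo=1) and the inbound ones (edi=1).
    $lookup = 'http://socialgraph.apis.google.com/lookup?q='
            . urlencode('http://adactio.com/')
            . '&edo=1&edi=1';
    $graph = json_decode(file_get_contents($lookup), true);
    print_r(array_keys($graph['nodes']));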
For most people, today’s big news was Microsoft licking its lips at Yahoo, but for me that was completely eclipsed by the new API. While I was waiting at Tantek’s for Larry and Chris to drive by and pick us up, I spent my time gleefully looking through the reams of information returned from entering just one URL into the API. Just now, I was chatting with John Musser from Programmable Web and we were thinking up all the potential mashups that this could open up.
I’m not going to build anything just yet though. I’m far too tired. I need to find a nice quiet corner of the O’Reilly office to unroll my sleeping bag.
There have been a number of experiments carried out to investigate the effects of video on communication. I recall hearing about one experiment done with mothers and babies. The mothers were placed in one room with a video camera and the babies were placed in another room with a monitor showing a video feed from the mother. The babies interacted just fine with the video representations of their mothers. Then a one second lag was introduced. The babies freaked out.
I was reminded of this during the closing panel on day two of Fundamentos Web. Tim Berners-Lee dialed in via iChat to join a phalanx of panelists in meatspace. Alas, the signal wasn’t particularly strong. Add to that the problem of simultaneous translation, which isn’t really simultaneous, and you’ve got a gap of quite a few seconds between Asturias and Sir Tim’s secret lair. The resultant communication was, therefore, not really much of a conversation. It was still fascinating though.
Some of the most interesting perspectives came from George and Hannah—the people who are working at the coalface of social media. George asked Sir Tim for advice on the cultural side-effects of open data—how to educate people that publishing on sites like Flickr means that your pictures can and will be viewed in other contexts. Interestingly, Sir Tim’s response indicated that he was more concerned with educating people in how to keep their data private.
This difference in perspective might be an indication of a generation gap. The assumption amongst, say, teenagers is that everything is public except what they explicitly want to keep private. The default assumption amongst older folks (such as my generation) is the exact opposite: data is private except when it is explicitly made public. The first position matches the sensibilities of Flickr and Last.fm. The second position is more in line with Facebook’s walled garden approach.
I was really glad that George raised this issue. It’s something that has been occupying my mind lately, particularly in reference to Flickr.
Flickr provides a range of ways of accessing your photos: the website, RSS, KML, LOL… and of course, the API. It’s a wonderful API, certainly the best one that I’ve played with. I had a blast putting together the Flickr portion of Adactio Elsewhere.
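To give a flavour of how pleasant it is to work with, here’s a hypothetical sketch of fetching someone’s public photos (the API key and user ID are placeholders):

    // One GET request to the REST endpoint. Asking for serialised PHP
    // as the response format saves any XML parsing.
    $url = 'http://api.flickr.com/services/rest/'
         . '?method=flickr.people.getPublicPhotos'
         . '&api_key=YOUR_API_KEY'  // placeholder
         . '&user_id=12345678@N00'  // placeholder NSID
         . '&format=php_serial';
    $data = unserialize(file_get_contents($url));
    foreach ($data['photos']['photo'] as $photo) {
        echo $photo['title'], "\n";
    }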
Using the API, I was able to put together my own interface onto my photos and the latest photos from my contacts. There’s nothing particularly remarkable about that—there are literally hundreds, if not thousands, of third-party sites that use the Flickr API to do the same thing. However, a lot of those sites use Flash or non-degrading Ajax. But I use Hijax. That means that, even though I’ve built an Ajax interface, the fundamental interaction is RESTful with good ol’ fashioned URLs. As a result—and this is just one of the benefits of Hijax—the Googlebot can spider all possible states of my application.
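The server side of the pattern can be as simple as checking how a request arrived. A simplified sketch, assuming the X-Requested-With header that most Ajax libraries send:

    // Hijax: every state of the application has a real URL. The same
    // URL serves either a fragment (for Ajax requests) or a full page
    // (for everyone else, including the Googlebot).
    $is_ajax = isset($_SERVER['HTTP_X_REQUESTED_WITH'])
            && $_SERVER['HTTP_X_REQUESTED_WITH'] === 'XMLHttpRequest';
    if ($is_ajax) {
        include 'photos-fragment.php'; // placeholder: just the new content
    } else {
        include 'photos-page.php';     // placeholder: the full document
    }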
You can probably see where this is going. It’s a similar situation to what happened with my pirate-speak page converter. Even though I’m not providing a direct interface onto anyone’s pictures, Google is listing deep links in its search results.
This has resulted in a shitstorm on the Flickr forum. Reading through the reactions on that thread has been illuminating. In a nutshell, I’m getting penalised for having search-engine-friendly pages. I, along with some other people on that thread, have tried to explain that Adactio Elsewhere is just one example of public Flickr data appearing beyond the bounds of Flickr’s domain—an issue tangentially related to intellectual property rights.
In this particular situation, I was able to take some steps to soothe the injured parties by creating a PHP array called $stroppy_users. I also added a meta element instructing searchbots not to index Adactio Elsewhere which, I believe, will prevent any future grievances. As I said in the forum:
If a tree falls in the forest and Google doesn’t index it, does it make a noise?
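For the curious, the fix amounts to a blocklist plus one line of markup: a meta element along the lines of <meta name="robots" content="noindex">. Simplified, and picking up the $data array from the earlier sketch, the check looks something like this (the NSID is a placeholder):

    // Skip photos belonging to anyone who has asked to be left out.
    $stroppy_users = array('12345678@N00'); // placeholder Flickr NSIDs
    foreach ($data['photos']['photo'] as $photo) {
        if (in_array($photo['owner'], $stroppy_users)) {
            continue; // respect the opt-out
        }
        // ...display the photo as usual...
    }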
I think the outburst of moral panic on the Flickr forum is symptomatic of a larger trend that has accompanied the growth of the site’s user base. Two years ago, Flickr was not your father’s photo sharing website. Now, especially with the migration from Yahoo Photos, it is. If you look at some of the frightened reactions to Flickr’s pirate day shenanigans you’ll see even more signs of this growth (Tom has a great in-depth look at the furore).
As sites like Flickr and Last.fm move from a user base of early adopters into the mainstream, this issue becomes more important. What isn’t clear is how the moral responsibility should be distributed. Should Flickr provide clearer rules for API use? Should Google index less? Should the people publishing photos take more care in choosing when to mark photos as public and when to mark photos as private? Should developers (like myself) be more cautious in what we allow our applications to do with the API?
I don’t know the answers but I’m fairly certain that we’re not dealing with a technological issue here; this is a cultural matter.