Of all the buzzwords in tech, perhaps none has been deployed with as much philosophical conviction as “frictionless.” Over the past decade or so, eliminating “friction” — the name given to any quality that makes a product more difficult or time-consuming to use — has become an obsession of the tech industry, accepted as gospel by many of the world’s largest companies.
Wednesday, December 12th, 2018
Saturday, November 10th, 2018
Rachel does some research to find out why people use CSS frameworks like Bootstrap—it can’t just be about grids, right?
In our race to get our site built quickly, our desire to make things as good as possible for ourselves as the designers and developers of the site, do we forget who we are doing this for? Do the decisions made by the framework developer match up with the needs of the users of the site you are building?
Not for the first time, I’m reminded of Rachel’s excellent post from a few years ago: Stop solving problems you don’t yet have.
Thursday, October 11th, 2018
Some sensible answers to this question here…
…of which, exactly zero mention end users.
Tuesday, October 9th, 2018
Design has disrupted taxis in a massive, almost unprecedented way. But good design doesn’t merely aim to disrupt—it should set out to actually build viable solutions. Designers shouldn’t look at a problem and say, “What we’re going to do is just fuck it up and see what happens.” That’s a dereliction of duty.
Tuesday, September 11th, 2018
Damn, that’s a fine opening! And the rest of this post by Alex is pretty darn great too. He’s absolutely right in calling out the fetishisation of developer experience at the expense of user needs:
The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated agreement is that developers share all of the same goals with the same intensity as end users and even managers. This is not true.
I have a feeling that this will be a very bitter pill for many developers to swallow:
If one views the web as a way to address a fixed market of existing, wealthy web users, then it’s reasonable to bias towards richness and lower production costs. If, on the other hand, our primary challenge is in growing the web along with the growth of computing overall, the ability to reasonably access content bumps up in priority. If you believe the web’s future to be at risk due to the unusability of most web experiences for most users, then discussion of developer comfort that isn’t tied to demonstrable gains for marginalized users is at best misguided.
Oh, captain, my captain!
Tools that cost the poorest users to pay wealthy developers are bunk.
Monday, May 14th, 2018
Thank you. Thank you very much, Mike; it really is an honour and a privilege to be here. Kia Ora! Good morning.
This is quite intimidating. I’m supposed to open the show, and there are all these amazing, exciting speakers who are going to be speaking for the next two days on amazing, exciting topics, so I’d better level up: I’d better talk about something exciting. So, let’s do it!
Yeah, we need to talk about capitalism. Capitalism is one of a few competing theories on how to structure an economy. The theory goes that you have a marketplace, the marketplace rules all, and it is the marketplace that self-regulates through its invisible hand. Pretty good theory: the idea being that through this invisible hand of the marketplace, wealth can be distributed relatively evenly; sort of a bell-curve distribution of wealth where some people have more, some people have less, but it kind of evens out. You see bell-curve distributions for things like height and weight or IQ. Some have more, some have less, but the difference isn’t huge. That’s the idea: that economies could follow this bell-curve distribution.
But, without any kind of external regulation, what tends to happen in a capitalistic economy is that the rich get richer, the poor get poorer and it runs out of control. Instead of a bell-curve distribution you end up with something like this which is a power-law distribution, where you have the wealth concentrated in a small percentage and there’s a long tail of poverty.
That’s the thing about capitalism: it sounds great in theory, but not so good in practice.
The theory, I actually agree with: the theory being that competition is good and that competition is healthy. I think that’s a sound theory. I like competition: I think there should be a marketplace of competition, to try and avoid the kind of monopolies or duopolies that you get in these power-law distributions. The reason I say that is that we as web designers and web developers have seen what happens when monopolies kick in.
We were there: we were there when there was a monopoly, when Microsoft had an enormous share of the browser market. It was up in the high nineties, percentage-wise, with Internet Explorer, which they achieved because they had an enormous monopoly in the desktop market as well: they were able to bundle Internet Explorer with the Windows Operating System. Not exactly an invisible hand regulating there.
But we managed to dodge this bullet. I firmly believe that the more browsers we have, the better. I think a diversity of browsers is a really good thing. I know sometimes as developers we think, “Oh, wouldn’t it be great if there was just one browser to develop for: wouldn’t that be great?” No. No it would not! We’ve been there and it wasn’t pretty. Firefox was pretty much a direct result of this monopoly situation in browsers. But like I say, we dodged this bullet. In some ways the web interpreted this monopoly as damage and routed around it. This idea of interpreting something as damage and routing around it is a phrase that comes from network architecture.
As with economies, there are competing schools of thought on how you would structure a network; there’s many ways to structure a network.
One way, sort of a centralised network approach is, you have this hub and spoke model, where you have lots of smaller nodes connecting to a single large hub and then those large hubs can connect with one another and this is how the telegraph worked, then the telephone system worked like this. Airports still pretty much work like this. You’ve got regional airports connecting to a large hub and those large hubs connect to one another, and it’s a really good system. It works really well until the hub gets taken out. If the hub goes down, you’ve got these nodes that are stranded: they can’t connect to one another. That’s a single point of failure, that’s a vulnerability in this network architecture.
It was the single point of failure, this vulnerability, that led to the idea of packet-switching and different network architectures that we saw in the ARPANET, which later became the Internet. The impetus there was to avoid the single point of failure. What if you took these nodes and gave them all some connections?
This is more like a bell-curve distribution of connections now. Some nodes have more connections than others, some have fewer, but there isn’t a huge difference between the amount of connections between nodes. Then the genius of packet-switching is that you just need to get the signal across the network by whatever route works best at the time. That way, if a node were to disappear, even a relatively well-connected one, you can route around the damage. Now you’re avoiding the single point of failure problem that you would have with a hub and spoke model.
Like I said, this kind of thinking came from the ARPANET, later the Internet and it was as a direct result of trying to avoid having that single point of failure in a command-and-control structure. If you’re in a military organisation, you don’t want to have that single point of failure. You’ve probably heard that the internet was created to withstand a nuclear attack and that’s not exactly the truth but the network architecture that we have today on the internet was influenced by avoiding that command-and-control, that centralised command-and-control structure.
Some people kind of think there’s sort of blood on the hands of the internet because it came from this military background, DARPA…the Defense Advanced Research Projects Agency. But actually, the thinking behind this was not to give one side the upper hand in case of a nuclear conflict: quite the opposite. The understanding was that if there were the chance of a nuclear conflict and you had a hub-and-spoke model of communication, then you know that if they take out your hub, you’re screwed, so you’d better strike first. Whereas if you’ve got this kind of distributed network, then if there’s the possibility of attack, you might be more willing to wait it out and see, because you know you can survive that initial first strike. And so this network approach was not designed to give any one side the upper hand in case of a nuclear war, but to actually avoid nuclear war in the first place, on the understanding that this was in place for both sides. That’s why Paul Baran and the other people working on the ARPANET were in favour of sharing this technology with the Russians, even at the height of the Cold War.
The World Wide Web
In this kind of network architecture, there are no hubs, there are no regional nodes; there are just nodes on the network. It’s very egalitarian and the network can grow and shrink infinitely; it’s scale-free. You can just keep adding nodes to the network, and you don’t need to ask permission to add a node. That same sort of architecture then influenced the World Wide Web, which is built on top of the Internet, and Tim Berners-Lee used this model: anybody can add a website to the World Wide Web. You don’t need to ask for permission; you just add one more node to the network. There’s no plan to it, there’s no structure, and so it’s a mess! It’s a sprawling, beautiful mess with no structure.
And it’s funny, because in the early days of the Web it wasn’t clear that the Web was going to win; not at all. There was competition. You had services like Compuserve and AOL. I’m not talking about AOL the website: before it was a website, it was this thing separate from the web, which was much more structured, much safer. These were the walled gardens, and they would make wonderful content for you and warn you if you tried to step outside the bounds of their walled gardens into the wild, lawless lands of the World Wide Web: ooh, you don’t want to go out there. And yet the World Wide Web won, despite its chaos, its lawlessness. People chose the Web, and despite all the content that Compuserve and AOL and these other walled gardens were producing, they couldn’t compete with the wild and lawless nature of the Web. People left the walled gardens and went out into the Web.
And then a few years later, everyone went back into the walled gardens. Facebook, Twitter, Medium. Very similar in a way to AOL and Compuserve: the nice, well-designed places, safe spaces not like those nasty places out in the World Wide Web and they warn you if you’re about to head out into the World Wide Web. But there’s a difference this time round: AOL and Compuserve, they were producing content for us, to keep us in the walled gardens. With the case of Facebook, Medium, Twitter, we produce the content. These are the biggest media companies in the world and they don’t produce any media. We produce the media for them.
How did this happen? How did we end up with this situation when we returned into the walled gardens? What happened?
Facebook, I used to wonder, what is the point of Facebook? I mean this in the sense that when Facebook came along, there were lots of different social networks, but they all kinda had this idea of being about a single, social object. Flickr was about the photograph and upcoming.org was about the event and Dopplr was about travel. And somebody was telling me about Facebook and saying, “You should get on Facebook.” I was like, “Oh yeah? What’s it for?” He said: “Everyone’s on it.” “Yeah, but…what’s it for? Is it photographs, events, what is it?” And he was like, “Everyone’s on it.” And now I understand that it’s absolutely correct: the reason why everyone is on Facebook is because everyone is on Facebook. It’s a direct example of Metcalfe’s Law.
Again, it’s a power law: the value of the network is proportional to the square of the number of nodes on the network. Basically, the more people on the network, the better. The first person to have a fax machine, that’s a useless lump of plastic. As soon as one more person has a fax machine, it’s dramatically more useful. Everyone is on Facebook because everyone is on Facebook. Which turns it into a hub. It is now a centralised hub, which means it is a single point of failure, by the way. In security terms, I guess you would talk about it having a large attack surface.
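To put rough numbers on Metcalfe’s Law (this is the standard formulation, not something from the talk; the constant doesn’t matter, the shape does):

$$V(n) \propto n^2, \qquad \text{possible connections} = \frac{n(n-1)}{2}$$

Two fax machines give you one possible connection; ten give you 45; a hundred give you 4,950. Quadratic growth is quite enough to make the biggest network the only one worth joining.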
Let’s say you wanted to attack media outlets, I don’t know, let’s say you were trying to influence an election in the United States of America… Instead of having to target hundreds of different news outlets, you now only need to target one because it has a very large attack surface.
It’s just like, if you run WordPress as a CMS, you have to make sure to keep it patched all the time because it’s a large attack surface. It’s not that it’s any more or less secure or vulnerable than any other CMS, it’s just that it’s really popular and therefore is more likely to be attacked. Same with a hub like Facebook.
OK. Why then? Why did we choose to return to these walled gardens? Well, the answer’s actually pretty obvious, which is: they’re convenient. Walled gardens are nice to use. The user experience is pretty great; they’re well-designed, they’re nice.
The disadvantage is what you give up when you gain this convenience. You give up control. You no longer have control over the content that you publish. You don’t control who’s going to even see what you publish. Some algorithm is taking care of that. These silos—Facebook, Twitter, Medium—they now have control of the hyperlinks. Walled gardens give you convenience, but the cost is control.
The Indie Web
This is where this idea of the indie web comes in: to try and bridge this gap, so that you could somehow still have the convenience of using these beautiful, well-designed walled gardens and yet still have the control of owning your own content. Because let’s face it, having your own website is a hassle: it’s hard work, certainly compared to just opening a Facebook account, a Twitter account, a Medium account and publishing, boom. The barrier to entry is really low, whereas setting up your own website means registering a domain, choosing a CMS; there’s a lot of hassle involved in that.
But maybe we can bridge the gap. Maybe we can get both: the convenience and the control. This is the idea of the indie web. At its heart, there’s a fairly uncontroversial idea here, which is that you should have your own website. I mean, there would have been a time when that would’ve been a normal statement to make and these days, it sounds positively disruptive to even suggest that you should have your own website.
I have my own personal reasons for wanting to publish on my own website. If anybody was here back in the…six years ago, I was here at Webstock, which was a great honour, and I was talking about digital preservation and longevity. For me, that’s one of the reasons why I like to have control over my own content: because these things do go away. If you published your content on, say, MySpace: I’m sorry. It’s gone. And there was a time when it was unimaginable that MySpace could be gone; it was like Facebook, you couldn’t imagine the web without it. Give it enough time. MySpace is just one example; there are many more. I used to publish on GeoCities. Delicious, Magnolia, Pownce, Dopplr. Many, many more.
Now, I’m not saying that everything should be online for ever. What I’m saying is, it should be your choice. You should be able to choose when something stays online and you should be able to choose when something gets taken offline. The web has a real issue with things being taken offline. Linkrot is a real problem on the web, and partly that’s to do with the nature of the web, the fundamental nature of the way that linking works on the web.
When Tim Berners-Lee and Robert Cailliau were first coming up with the World Wide Web, they submitted a paper to a hypertext conference, I think it was 1991, 92, about this project they were building called the World Wide Web. The paper was rejected. Their paper was rejected by these hypertext experts who said, this system: it’ll never work, it’s terrible. One of the reasons why they rejected it was it didn’t have this idea of two-way linking. Any decent hypertext system, they said, has a concept of two-way linking where there’s knowledge of the link at both ends, so in a system that has two-way linking, if a resource happens to move, the link can move with it and the link is maintained. Now that’s not the way the web works. The web has one-way linking; you can just link to something, that’s it and the other resource has no knowledge that you’re linking to it but if that resource moves or goes away, the link is broken. You get linkrot. That’s just the way the web works.
But. There’s a little technique that, if you sort of squint at it just the right way, sort of looks like two-way linking on the web, and it involves a very humble bit of HTML: the rel attribute. Now, you’ve probably seen the rel attribute before; you’ve probably seen it on the link element. Rel is short for relationship, so the value of the rel attribute describes the relationship of the linked resource, whatever’s inside the href. I’m sure you’ve typed this at some point: rel=stylesheet on a link element. What you’re saying is that the linked resource, what’s in the href, has the relationship of being a style sheet for the current document.

link rel="stylesheet" href="..."
You can use it on a elements as well. There are rel values like prev for previous and next: this is the relationship of being the next document, or this is the relationship of being the previous document. Really handy for pagination of search results, for example.

a rel="prev" href="..."
a rel="next" href="..."
And then there’s this really silly value:

a rel="me" href="..."

Now, how does that work? The linked document has a relationship of being me? Well, I use this. I use this on my own website. I have a elements that link off to my profiles on these other sites, so I’m saying, that Twitter profile over there: that’s me. And that’s me on Flickr and that’s me on GitHub.

a rel="me" href="https://twitter.com/adactio"
a rel="me" href="https://flickr.com/adactio"
a rel="me" href="https://github.com/adactio"
OK, but still, these are just regular, one-way hyperlinks I’m making. I’ve added a rel value of “me” but so what?

Well, the interesting thing is, if you go to any of those profiles, when you’re signing up, you can add your own website: that’s one of the fields. There’s a link from that profile to your own website, and in that link they also use rel=me:

a rel="me" href="https://adactio.com"

I’m linking to my profile on Twitter saying, rel=me; that’s me. And my Twitter profile is linking to my website saying, rel=me; that’s me. And so you’ve kind of got two-way linking. You’ve got this confirmed relationship; these claims have been verified. Fine, but what can you do with that?
Well, there’s a technology called RelMeAuth that uses this; it kind of piggy-backs on something that all these services, Twitter and Flickr and GitHub, have in common: OAuth authentication. Now, if I wanted to build an API, a write API, I’d probably need to be an OAuth provider. I am not smart enough to become an OAuth provider; that sounds way too much like hard work for me. But I don’t need to, because Twitter and Flickr and GitHub are already OAuth providers, so I can just piggy-back on the functionality that they provide, just by adding rel=me to some links.

Here’s an example of this in action. There’s an authentication service called IndieAuth and I literally sign in with my URL. I type in my website name; it then finds the rel=me links, the ones that are reciprocal; I choose which one I feel like logging in with today, let’s say Twitter; I get bounced to Twitter (I have to be logged in on Twitter for this to work) and then I’ve authenticated. I’ve authenticated with my own website; I’ve used OAuth without having to write OAuth, just by adding rel=me to a couple of links that were already on my site.
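To make that reciprocal check concrete, here’s a rough sketch of what a RelMeAuth-style service has to do (not IndieAuth’s actual implementation; it assumes Node 18+ for the built-in fetch, and uses naive string matching where real code would use an HTML parser):

// Does profileUrl contain a rel="me" link back to siteUrl?
async function verifyRelMe(siteUrl, profileUrl) {
  const html = await (await fetch(profileUrl)).text();
  // Naive matching; real code would parse the HTML and normalise URLs properly.
  const links = [...html.matchAll(/<a[^>]*rel="me"[^>]*href="([^"]+)"/g)]
    .map((match) => match[1]);
  const normalise = (url) => url.replace(/\/+$/, '');
  return links.some((href) => normalise(href) === normalise(siteUrl));
}

// Usage: confirm the two-way link between my site and my Twitter profile.
verifyRelMe('https://adactio.com', 'https://twitter.com/adactio')
  .then((verified) => console.log(verified ? 'That really is me' : 'No reciprocal link'));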
Why would I want to authenticate? Well, there’s another piece of technology called micropub. Now, this is definitely more complicated than just adding rel=me to a few links. This is an end-point on my website (it can be an end-point on your website too) and it accepts POST requests; that’s all it does. And if I’ve already got authentication taken care of, and now I’ve got an end-point for POST requests, I’ve basically got an API, which means I can post to my website from other places. Once that end-point exists, I can use somebody else’s website to post to my website, as long as they’ve got this micropub support. I log in with that IndieAuth flow and then I’m using somebody else’s website to post to my website. That’s pretty nice. As long as these services have micropub support, I can post from somebody else’s posting interface to my own website and choose how I want to post.
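To show just how small that end-point can be, here’s a minimal sketch (an illustration rather than the full micropub spec; it assumes Express, and the authorisation and storage functions are hypothetical stand-ins):

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: true })); // micropub clients send form-encoded bodies

// Hypothetical: really you'd verify the bearer token against your token end-point.
async function isAuthorized(req) {
  return Boolean(req.headers.authorization);
}

// Hypothetical: store the post in your CMS and return its new URL.
async function saveNoteToMySite(content) {
  return 'https://example.com/notes/1';
}

// The micropub end-point: it accepts POST requests, and that's all it does.
app.post('/micropub', async (req, res) => {
  if (!(await isAuthorized(req))) {
    return res.status(401).send('Unauthorized');
  }
  const { content } = req.body; // e.g. h=entry&content=Hello+world
  const url = await saveNoteToMySite(content);
  res.status(201).location(url).end(); // 201 Created, pointing at the new post
});

app.listen(8080);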
In this example there, I was using a service called Quill; it’s got a nice interface. You can do long-form writing on it. It’s got a very Medium-like interface for long-form writing, because when you talk about why people are on Medium, a lot of it is that the writing experience is so nice, so that’s kind of been reproduced here. This was made by a friend of mine named Aaron Parecki, and he makes some other services too. He makes OwnYourGram and OwnYourSwarm, which are kind of translation services, from Instagram and Swarm to micropub.
Instagram and Swarm do not provide micropub support, but by using these services, authenticating with them using the rel=me links, I can then post from Instagram and from Swarm to my own website, which is pretty nice. If I post something on Swarm, it then shows up on my own website. And if I post something on Instagram, it goes up on my own website. Again I’m piggy-backing: there’s all this hard work by big teams of designers and engineers building these apps, Instagram and Swarm, and I’m taking all that hard work and using it to post to my own website. It feels kind of good.
There’s an acronym for this approach, and it’s PESOS, which means you Publish Elsewhere and Syndicate to your Own Site. There’s an alternative to this approach and that’s POSSE, or you Publish on your Own Site and then you Syndicate Elsewhere, which I find preferable, but sometimes it’s not possible. For example, you can’t publish on your own site and syndicate it to Instagram; Instagram does not allow any way of posting to it except through the app. It has an API but it’s missing a very important method which is post a photograph. But you can syndicate to Medium and Flickr and Facebook and Twitter. That way, you benefit from the reach, so I’m publishing the original version on my own website and then I’m sending out copies to all these different services.
For example, I’ve got this section on my site called Notes which is for small little updates of say, oh, I don’t know, 280 characters and I’ve got the option to syndicate if I feel like it to say, Twitter or Flickr. When I post something on my own website—like this lovely picture of an amazingly good dog called Huxley—I can then choose to have that syndicated out to other places like Flickr or Facebook. The Facebook one’s kind of a cheat because I’m just using an “If This Then That” recipe to observe my site and post any time I post something. But I can syndicate to Twitter as well. The original URL is on my website and these are all copies that I’ve sent out into the world.
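The order of operations is the crux of POSSE, so here’s the shape of it as a sketch (every function here is a hypothetical stand-in for your own site’s storage and whatever silo APIs you have access to):

// Hypothetical: store the canonical copy on my own site, return its URL.
async function postToMyOwnSite(text) {
  return 'https://example.com/notes/1';
}

// Hypothetical: push copies out to the silos.
async function postToTwitter(text) { /* call the silo's API here */ }
async function postToFlickr(text) { /* call the silo's API here */ }

// POSSE: Publish on your Own Site, Syndicate Elsewhere.
async function publishNote(text) {
  const canonicalUrl = await postToMyOwnSite(text); // the original lives at home
  // Each copy carries a pointer back to the canonical URL on my own website.
  for (const syndicate of [postToTwitter, postToFlickr]) {
    await syndicate(`${text} ${canonicalUrl}`);
  }
  return canonicalUrl;
}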
OK, but what about…what about when people comment or like or retweet or fave, whatever it is they’re doing, the copy? They don’t come to my website to leave a comment or a fave or a like; they’re doing it on Twitter or Flickr. Well, I get those. I get those on my website too, and that’s possible because of another building block called webmention. Webmention is another end-point that you can have on your site, but it’s very, very simple: it just accepts pings. It’s basically a ping tracker. Anyone remember pingback? We used to have pingbacks on blogs, and it was quite complicated because it was XML-RPC and all this stuff. This is literally just a POST that goes “ping”.
Let’s say you link to something on my website. I have no way of knowing that you’ve linked to me, no way of knowing that you’ve effectively commented on something I’ve posted, so you send me a ping using webmention, and then I can go and check: is there really a link to this article or this post or this note on this other website? And if there is: great. It’s up to me now what I do with that information. Do I display it as a comment? Do I store it in the database? Whatever I want to do.
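To show how humble this end-point really is, here’s a minimal sketch of a webmention receiver (an illustration, not the full spec; it assumes Express and Node 18+ for the built-in fetch, and the storage function is hypothetical):

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: true }));

// Hypothetical: it's up to you what to do with a verified mention.
async function storeMention(source, target) {
  // save it to a database, display it as a comment, whatever you want
}

// A webmention is just a POST with two form-encoded parameters: source and target.
app.post('/webmention', async (req, res) => {
  const { source, target } = req.body;
  // Go and check: does the source really link to the target?
  const html = await (await fetch(source)).text();
  if (!html.includes(target)) {
    return res.status(400).send('Source does not link to target');
  }
  await storeMention(source, target);
  res.status(202).send('Ping received'); // 202 Accepted
});

app.listen(8080);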
And you don’t even have to have your own webmention end-point. There’s webmentions as a service that you can subscribe to. Webmention.io is one of those; it’s literally like an answering service for pings. You can check in at the end of the day and say, “Any pings for me today?” Like a telephone answering service but for webmentions.
And then there’s this really wonderful service, a piece of open source software called Bridgy, which acts as a bridge. Places like Twitter and Flickr and Facebook do not send webmentions every time someone leaves a reply, but once you’ve authenticated with the rel=me values, Bridgy monitors your social media accounts and, when somebody replies, it’ll take that reply, translate it into a webmention, and send it to your webmention end-point. Now, any time somebody makes a response on one of the copies of your post, you get that on your own website, which is pretty neat.
It’s up to you what you do with those webmentions. I just display them in a fairly boring manner: the replies appear as comments, and I just say how many shares there were, how many likes. But this is a mix of things coming from Twitter, from Flickr, from Facebook, from anywhere I’ve posted copies. You could make them look nicer too. Drew McLellan has got this kind of face-pile of the user accounts of the people who are responding out there on Twitter or on Flickr or on Facebook, and he displays them in a nice way.
Drew, along with Rachel Andrew, is one of the people behind Perch CMS, a nice little CMS where a lot of this technology is already built in. It has support for webmention and all these kinds of things, and a lot of CMSs have done this, so you don’t have to invent this stuff from scratch. If you like what you see and you think, “Oh, I want to have a webmention end-point, I want to have a micropub end-point”, chances are it already exists for the CMS you’re using. If you’re using something like WordPress or Perch or Jekyll or Kirby, a lot of these CMSs already have plug-ins available for you to use.
Those are a few technologies that we can use to try and bridge that gap: to still get the control of owning your own content on your own website, and still have the convenience of those third-party services, where we get to use their interfaces, have those conversations, and enjoy the social effects that come with having a large network. Relatively simple building blocks: rel=me, micropub and webmention.
But they’re not the real building blocks of the indie web; they’re just technologies. Don’t get too caught up with the technologies. I think the real building blocks of the indie web can be found here in the principles of the indie web.
There’s a great page of design principles about why we’re even doing this. There are principles like: own your own data; focus on the user experience first; make tools for yourself and then see how you can scale them and share them with other people. But the most important design principle on that list, I think, comes at the very end, and it’s this: that we should 🎉 have fun (and the emoji is definitely part of the design principle).
I also think we should remember the original motto of the World Wide Web, which was: let’s share what we know. And over the next few days, you’re going to hear a lot of amazing, inspiring ideas from amazing, inspiring people, and I hope you’ll be motivated to share your thoughts. You could share what you know on Mark Zuckerberg’s website. You could share what you know on Ev Williams’s website. You could share what you know on Biz Stone and Jack Dorsey’s website. But I hope you’ll share what you know on your own website.
Saturday, January 20th, 2018
Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.
Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.
It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.
I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:
In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.
- Identify core functionality.
- Make that functionality available using the simplest possible technology.
Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”
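As a sketch of the difference (placeholder URLs; the pattern rather than any particular site): the core functionality here works with plain HTML in any browser, and JavaScript only enhances it where the browser is capable:

<!-- Core functionality: a search form that works everywhere, no JavaScript needed. -->
<form action="/search" method="get">
  <input type="search" name="q">
  <button type="submit">Search</button>
</form>
<div id="results"></div>

<script>
// Enhancement: if the browser cuts the mustard, update the page in place.
const form = document.querySelector('form');
if (form && 'fetch' in window) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    const q = new FormData(form).get('q');
    const response = await fetch('/search?q=' + encodeURIComponent(q));
    document.querySelector('#results').innerHTML = await response.text();
  });
}
</script>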
Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.
I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.
There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.
But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.
That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.
Thursday, February 19th, 2015
When I look around, I see our community spending a lot of time coming up with new tools and techniques to make our jobs easier. To ship faster. And it’s not that I’m against efficiency, but I think we need to consider the implications of our decisions. And if one of those implications is making our users suffer—or potentially suffer—in order to make our lives easier, I think we need to consider their needs above our own.
Thursday, October 23rd, 2014
The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.
Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.
While we might like to think that browsers have all reached a certain level of equilibrium, as Aaron puts it “the Web is messy”:
And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.
Aaron makes the case that, while we cannot control which browsers people will use, we can control the server environment.
Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue.
It’s true enough that the server isn’t some rock-solid never-changing environment. Anyone who’s ever had to install patches or update programming languages knows this. But at least it’s one single environment …whereas the web has an overwhelming multitude of environments; one for every browser/OS/device combination.
Stuart finishes on a stirring note:
The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed.
However he wraps up by saying that…
In a post called Missed Connections, Aaron pushes back against that last point:
Stuart responds in a post called Reconnecting (and, by the way, how great is it to see this kind of thoughtful blog-to-blog discussion going on?).
But here’s the problem with progressively enhancing from server functionality to a rich client:
A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment.
Now, at this juncture, I could point out that—by using progressive enhancement—you can still have the best of both worlds. Stuart has anticipated that:
It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.
Ah, there’s the rub!
When I’ve extolled the virtues of progressive enhancement in the past, the pushback I most often receive is on this point. Surely it’s wasteful to build something that works on the server and then reimplement much of it on the client?
I also think that building in this way will take longer …at first. But then on the next project, it takes less time. And on the project after that, it takes less time again. From that perspective, it’s similar to switching from tables for layout to using CSS, or switching from building fixed-width sites to responsive design: the initial learning curve is steep, but then it gets easier over time, until it simply becomes normal.
But fundamentally, Stuart is right. Developers don’t like to violate the DRY principle: Don’t Repeat Yourself. Writing code for the server environment, and then writing very similar code for the browser—I mean browsers—is a bad code smell.
Here’s the harsh truth: building websites with progressive enhancement is not convenient.
Developer convenience is a very powerful and important force. I wish that progressive enhancement could provide the same level of developer convenience offered by Angular and Ember, but right now, it doesn’t. Instead, its benefits are focused on the end user, often at the expense of the developer.
Personally, I’m willing to take that hit. I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time. But I absolutely understand the mindset of developers who choose otherwise.
But perhaps there’s a way to cut this Gordian knot. What if you didn’t need to write your code twice? What if you could write code for the server and then run the very same code on the client?
For me, this is the most exciting aspect of Node.js.
Some big players are looking into this idea. It’s the thinking behind AirBnB’s Rendr.
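Sketched very roughly, the pattern looks something like this (the general idea, not Rendr’s actual API; assume a bundler makes the shared module available in the browser):

// render.js: one rendering function, written once, shared by server and client.
// (Real code would escape author and text before interpolating them.)
function renderComments(comments) {
  return comments.map((c) => `<li>${c.author}: ${c.text}</li>`).join('');
}

// On the server (Node), the first request gets fully-rendered HTML:
//   const html = `<ul id="comments">${renderComments(comments)}</ul>`;
//   response.end(`<!DOCTYPE html><html><body>${html}</body></html>`);

// In the browser, if it cuts the mustard, the same function renders updates:
//   document.querySelector('#comments').innerHTML = renderComments(newComments);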
Here’s the ideal situation:
- A browser requests a URL.
- The server renders the page and sends it to the browser: core functionality for everyone.
- If the browser doesn’t cut the mustard, subsequent requests are also rendered on the server.
- If the browser does cut the mustard, keep all the interaction in the client, just like a single page app.
So why aren’t we seeing more of these holy-grail apps that achieve progressive enhancement without code duplication?
Well, partly it’s back to that question of controlling the server environment.
In the meantime, building with progressive enhancement may have to involve a certain level of inconvenience and duplication of effort. It’s a price I’m willing to pay, but I wish I didn’t have to. And I totally understand that others aren’t willing to pay that price.
Three years ago, when I was trying to convince clients and fellow developers that responsive design was the way to go, it was a hard sell. It reminded me of trying to sell the benefits of using web standards instead of using tables for layout. Then, just as Doug’s redesign of Wired and Mike’s redesign of ESPN helped sell the idea of CSS for layout, the Filament Group’s work on the Boston Globe made it a lot easier to sell the idea of responsive design. Then Paravel designed a responsive Microsoft homepage and the floodgates opened.
Now …who wants to do the same thing for progressive enhancement?
Monday, March 5th, 2012
A great article from David with some concrete proposals for media companies.
By the way, how nice is David’s new responsive design? Very nice. Very nice indeed.