Monday, July 27th, 2015

On The Verge

Quite a few people have been linking to an article on The Verge with the inflammatory title The mobile web sucks. In it, Nilay Patel heaps blame upon mobile browsers, Safari in particular:

But man, the web browsers on phones are terrible. They are an abomination of bad user experience, poor performance, and overall disdain for the open web that kicked off the modern tech revolution.

Les Orchard says what we’re all thinking in his detailed response The Verge’s web sucks:

Calling out browser makers for the performance of sites like his? That’s a bit much.

Nilay does acknowledge that the Verge could do better:

Now, I happen to work at a media company, and I happen to run a website that can be bloated and slow. Some of this is our fault: The Verge is ultra-complicated, we have huge images, and we serve ads from our own direct sales and a variety of programmatic networks.

But still, it sounds like the buck is being passed along. The performance issues are being treated as Somebody Else’s Problem …ad networks, trackers, etc.

The developers at Vox Media take a different, and in my opinion more correct, view. They’re declaring performance bankruptcy:

I mean, let’s cut to the chase here… our sites are friggin’ slow, okay!

But I worry about how they can possibly reconcile their desire for a faster website with a culture that accepts enormously bloated ads and trackers as the inevitable price of doing business on the web.

I’m hearing an awful lot of false dichotomies here: either you can have a performant website or you can have a business model based on advertising. Here’s another false dichotomy:

If the message coming down from above is that performance concerns and business concerns are fundamentally at odds, then I just don’t know how the developers are ever going to create a culture of performance (which is a real shame, because they sound like a great bunch). It’s a particularly bizarre false dichotomy to be foisting when you consider that all the evidence points to performance as being a key differentiator when it comes to making moolah.

It’s funny, but I take almost the opposite view that Nilay puts forth in his original article. Instead of thinking “Oh, why won’t these awful browsers improve to be better at delivering our websites?”, I tend to think “Oh, why won’t these awful websites improve to be better at taking advantage of our browsers?” After all, it doesn’t seem like that long ago that web browsers on mobile really were awful; incapable of rendering the “real” web, instead only able to deal with WAP.

As Maciej says in his magnificent presentation Web Design: The First 100 Years:

As soon as a system shows signs of performance, developers will add enough abstraction to make it borderline unusable. Software forever remains at the limits of what people will put up with. Developers and designers together create overweight systems in hopes that the hardware will catch up in time and cover their mistakes.

We complained for years that browsers couldn’t do layout and javascript consistently. As soon as that got fixed, we got busy writing libraries that reimplemented the browser within itself, only slower.

I fear that if Nilay got his wish and mobile browsers made a quantum leap in performance tomorrow, the result would be even more bloated JavaScript for even more ads and trackers on websites like The Verge.

If anything, browser makers might have to take more drastic steps to route around the damage of bloated websites with invasive tracking.

We’ve been here before. When JavaScript first landed in web browsers, it was quickly adopted for three primary use cases:

  1. swapping out images when the user moused over a link,
  2. doing really bad client-side form validation, and
  3. spawning pop-up windows.

The first use case was so popular that it was moved from a procedural language (JavaScript) to a declarative language (CSS). The second use case is still with us today. The third use case was solved by browsers, which added a preference to block unwanted pop-ups.
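For instance, the image rollover that once needed onmouseover and onmouseout handlers swapping an img element’s src can now be written declaratively with the :hover pseudo-class. Here’s a minimal sketch (the class name and image URLs are made up purely for illustration):

    <style>
      /* The default state shows one image... */
      a.rollover {
        display: inline-block;
        background-image: url("button.png");
      }
      /* ...and hovering or focusing swaps in the other; no JavaScript required. */
      a.rollover:hover,
      a.rollover:focus {
        background-image: url("button-over.png");
      }
    </style>

    <a class="rollover" href="/about">About</a>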

Tracking and advertising scripts are today’s equivalent of pop-up windows. There are already plenty of tools out there to route around their damage: Ghostery, Adblock Plus, etc., along with tools like Instapaper, Readability, and Pocket.

I’m sure that business owners felt the same way about pop-up ads back in the late ’90s. Just the price of doing business. Shrug shoulders. Just the way things are. Nothing we can do to change that.

For such a young, supposedly innovative industry, I’m often amazed at what people choose to treat as immovable, unchangeable, carved-in-stone issues. Bloated, invasive ad tracking isn’t a law of nature. It’s a choice. We can choose to change.

Every bloated advertising and tracking script on a website was added by a person. What if that person refused? I guess that person would be fired and another person would be told to add the script. What if that person refused? What if we had a web developer picket line that we collectively refused to cross?

That’s an unrealistic, drastic suggestion. But the way that the web is being destroyed by our collective culpability calls for drastic measures.

By the way, the pop-up ad was first created by Ethan Zuckerman. He has since apologised. What will you be apologising for in decades to come?

Thursday, July 16th, 2015

Quakepunk

There’s an article in The New Yorker by Kathryn Schulz called The Really Big One. It’s been creating quite a buzz, and rightly so. It’s a detailed and evocative piece about the Cascadia fault:

When the next very big earthquake hits, the northwest edge of the continent, from California to Canada and the continental shelf to the Cascades, will drop by as much as six feet and rebound thirty to a hundred feet to the west—losing, within minutes, all the elevation and compression it has gained over centuries.

But there’s another hotspot on the other side of the country: the New Madrid fault line. There isn’t (yet) an article about it in The New Yorker. There’s something better. Two articles by Maciej:

  1. Confronting New Madrid and
  2. Confronting New Madrid (Part 2).

The New Madrid Seismic Zone earned its reputation on the strength of three massive earthquakes that struck in the winter of 1811-1812. The region was very sparsely settled at the time, and became more sparsely settled immediately afterwards, as anyone with legs made it their life’s mission to get out of southern Missouri.

The articles are fascinating and entertaining in equal measure. No surprise there. I’ve said it before and I’ll say it again: Maciej Cegłowski is the best writer on the web. Every so often I find myself revisiting Argentina On Two Steaks A Day or A Rocket To Nowhere just for the sheer pleasure of it.

I want to read more from Maciej, and there’s a way to make it happen. If we back him on Kickstarter, he’ll take a trip to the Antarctic and turn it into words:

Soliciting donations to take a 36-day voyage to the Ross Ice Shelf, Bay of Whales and subantarctic islands, and write it up real good.

Let’s make it happen. Let’s throw money at him like he’s a performing monkey. Dance, writer-boy, dance!

Tuesday, July 14th, 2015

Indie Web Camp Brighton 2015

Indie Web Camp Brighton 2015 is a wrap, and what a fun weekend it turned out to be.

I was really pleased with the turnout; not just the number of people who came along—many of them from very far afield—but also the range of skill levels and backgrounds represented. What a lovely bunch!

Indie Web Camp Brighton group photo

We kicked off the first day with a show’n’tell: people demoed their sites, showed their posting interfaces, and talked about what they’d like to improve. That sparked plenty of ideas for the afternoon discussions. But in between we had a nice long lunch break—it was a lovely sunny day in Brighton so we took full advantage of the sun, the street food, and the ice cream.

We wrapped up the first day around 5pm and I immediately dashed off to start loading in and sound checking for a Salter Cane gig that evening. That turned out to be a lot of fun—the audience were great—but I was completely knackered by the end of the day.

The weather on Sunday was far gloomier, but that was okay—we spent the whole day indoors anyway, coding and hacking away at stuff. Quite a few people were adding h-entry and h-card to their sites so I helped them out whenever I could. Meanwhile I was working on trying to get an SMS interface to my site working using the Twilio API.
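If you haven’t come across those microformats, adding h-entry and h-card is mostly a matter of sprinkling extra class names onto markup you already have. Here’s a rough sketch, with placeholder names, dates, and URLs:

    <!-- h-entry marks up the post itself; the nested h-card marks up its author -->
    <article class="h-entry">
      <h1 class="p-name">Indie Web Camp Brighton 2015</h1>
      <p class="p-author h-card">
        <a class="p-name u-url" href="https://example.com/">Jane Doe</a>
      </p>
      <time class="dt-published" datetime="2015-07-12">July 12th, 2015</time>
      <div class="e-content">
        <p>What a fun weekend it turned out to be.</p>
      </div>
    </article>

Microformats parsers can then pick out the post, its author, and its publication date straight from the existing HTML.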

The actual coding part of the Twilio interface went pretty quickly, but then I hit a wall. Whenever Twilio tried to reach a URL on my site, it would time out with a 504 error. I couldn’t figure out what was going on. On a hunch, I tried sending it to a subdomain that wasn’t being served over HTTPS. That worked fine. Now, I can’t imagine that Twilio is actually unable to work with secure endpoints, so it must be something to do with the way that I’ve enabled HTTPS on my domain. Anyway, the HTTP subdomain solution worked, and eleven minutes before demo time I finally had something to show.

We finished the day and the event with the quickfire demos. As always, there was some really impressive stuff—it’s quite amazing how much can get done in such a short space of time. Then we tidied up and headed across the street to the pub for a well-deserved pint.

All in all, a great weekend.

Friday, July 3rd, 2015

100 × 100

For 100 days I wrote and published a blog post that was 100 words long. This was all part of the 100 Days project running at Clearleft. It was by turns fun, annoying, rewarding, and tedious.

It feels nice to have 10,000 words written by the end of it even if many of those words were written in haste, without much originality and often without much enthusiasm. There were many evenings when I was already quite tired and then remembered that I had to bash out 100 words. On those occasions, it really felt like a chore, but then, that’s the whole point of the exercise—that you do it every day regardless of how motivated or not you feel on that day.

I missed the daily deadline once. I could make the excuse that it was a really late night of carousing, but I knew in advance that I was going to be out so I could’ve written my 100 words ahead of time—I didn’t.

My exercise of choice wasn’t too arduous. Some of the other Clearlefties picked far more ambitious tasks. Alas, many of them didn’t make it to the finish line, probably because they set their own bar so high. I knew that I wanted to do something that involved writing, and I picked the 100 words constraint simply because it sounded cute.

Lots of people reading my posts thought that 100 words was the upper limit in the same way that 140 characters is the upper limit on Twitter. But for me, the whole point of the exercise was that each post needed to be 100 words exactly. Now I kind of want to write a Twitter client that only lets you post tweets that are exactly 140 characters.

Writing a post that needed to be an exact number of words long was where the challenge lay, but it was also where the reward was found. It was frustrating to have to excise words or even whole sentences just to make the word count fit, but it was also very satisfying when the final post felt like a fully-formed thing.

I realised a few weeks into the project that the piece of software I was writing in (and relying on for an accurate word count) was counting hyphenated phrases as one word. So the phrase “dog-eat-dog world” was counted as two words, not four. I worried that maybe I had already published some posts that were over 100 words long. Later on, I tried to avoid hyphenating, or else I’d add in the hyphens after I had hit the 100 word point. In any case, there may be some discrepancy in the word count between the earlier posts and the later ones.

That’s the thing about an exercise that involves writing exactly 100 words; it leads to existential questions like “what is a word anyway?”

Some of the posts made heavy use of hyperlinks. I wondered whether this was cheating. But then I decided that, given the medium I was publishing on, it would be weird not to have any hyperlinks. And the pieces still stand on their own if you don’t follow any of the links.

Most of the posts used observations from that day for their subject matter—diary-like slices of life. But occasionally I’d put down some wider thought—like days 15, 73, 81, or 98. Still, I suspect it’s the slice-of-life daily updates that will be most interesting to read back on in years to come.

Thursday, July 2nd, 2015

Baseline

Jake gave a great talk at Responsive Day Out 3 all about nuanced progressive enhancement, with a look at service workers in particular (a technology designed with progressive enhancement at its heart).

To illustrate the performance gains, Jake used his SVGOMG site as an example—a really terrific resource for optimising SVGs.

SVGOMG requires JavaScript for its core functionality (optimising an SVG file). That was a deliberate choice. Jake could’ve made the barrier to entry as low as any browser that supports input type="file", but he decided that for this audience (developers) it was a safe assumption that JavaScript would be available.

Jake talked about this in an interview with Paul about the site:

I’m a strong believer in progressive enhancement, but also that each phase of the enhancement needs a user.

I agree completely with this approach. It makes sense to have a valid reason for adding any enhancement. But there’s something about this particular example that wasn’t sitting right with me. It took me a while to figure it out, but I now realise what it is.

Jake is talking about making it work on the server as an enhancement. But that’s not an enhancement, it’s a fallback.

Thinking in terms of fallbacks is more of a “graceful degradation” approach (i.e. for every “full” feature, thinking of a corresponding fallback). That’s not how I like to think of progressive enhancement. I like to think in terms of a baseline. And that baseline, in my mind, does not require a user to justify its existence. That’s because the baseline isn’t there to cover the use cases we can think of, it’s there to cover the use cases we can’t predict.

That might seem like a minor difference in wording from the graceful degradation approach, but I think it’s actually a fundamentally different way of approaching the situation.

When I was on the progressive enhancement panel at Edge Conf, Lyza asked how low the baseline should be. I said “as low as possible.” Some of my fellow panelists took issue with this, saying it varies from project to project, and that’s completely true, but I think I should’ve clarified that when I talk about a baseline, I’m not talking about browsers. I don’t think about a baseline in terms of “IE4 and above, Android 2.1 and above, etc.”—I think about a baseline in terms of “the minimum required technology to allow a user to accomplish the core task” (that qualification about core tasks is important—the baseline does not need to cover tasks that are nice-to-have; those can safely require more sophisticated technology).

That “minimum required technology” often turns out to be a combination of a web server, HTTP, and some HTML.

So to take SVGOMG as an example, I would begin with the baseline of “allowing a user to optimise an SVG file”. The minimum required technology is a web server running a programme that does the optimisation, and an HTML document that contains a form element with input type="file". Once that’s in place, then I can start applying Jake’s very sensible approach of thinking about enhancements in terms of specific user benefits. In this case, it’s pretty clear that 99.99% of the users would benefit from not having that round-trip to the server, and from having the SVG optimisation happen in the browser using JavaScript.
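To make that concrete, here’s roughly what such a baseline might look like (the /optimise endpoint is hypothetical; it’s not how SVGOMG actually works):

    <!-- Baseline: a plain form submission; the optimisation happens on the server. -->
    <form action="/optimise" method="post" enctype="multipart/form-data">
      <label for="svg">SVG file to optimise</label>
      <input type="file" id="svg" name="svg" accept="image/svg+xml">
      <button type="submit">Optimise</button>
    </form>

The in-browser JavaScript version then sits on top of that as the enhancement.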

There’s an enhancement provided for the use case that I can imagine. But—and this is the subtle but important distinction—there’s a baseline for all the use cases that I can’t think of. I need to recognise that I won’t be able to predict all the possible use cases, and that’s okay—as long as there’s a solid baseline in place, I’ve got an insurance policy for unforeseen circumstances. It’s still not perfect, but it lowers the risk somewhat by reducing the number of assumptions being built in at that baseline level.

Going back to Jake’s chat with Paul, he says:

I thought about making the site work without JS by doing the SVG work on the server, but this would be slow and a maintenance burden.

The maintenance burden is a very valid point. This is something that Stuart talked about a while back:

It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

Leaving aside the promise of isomorphic/universal/whatever JavaScript, this issue of developer convenience is a big issue. When I use the term “developer convenience” to label this problem, I am not belittling it in any way—developer convenience is incredibly important (hence the appeal of so many tools and frameworks that make life easier for developers). I still believe that developer convenience should be lower on the list of priorities than having a rock-solid baseline, but I can totally understand if someone doesn’t share that opinion. It’s a personal decision, and if the pain involved in making a more universal baseline is greater than the perceived—and, let’s face it, somewhat abstract—benefit, I can totally understand that.

Anyway, that’s my little brain dump about progressive enhancement and baseline experiences. Something about treating the baseline experience as an enhancement was itching at my brain and now that I’ve managed to scratch it, I can see what was troubling me: thinking about the baseline experience in the same way as thinking about enhancements doesn’t work for me.

Personally, I’m going to strive to keep the baseline as low as possible. I’m also going to strive to apply Jake’s maxim about every enhancement requiring a user.

Tuesday, June 30th, 2015

100 words 100

Edge words

I really enjoyed last year’s Edge conference so I made sure not to miss this year’s event, which took place last weekend.

The format was a little different this time ‘round. Last year the whole day was taken up with panels. Now, panels are often rambling, cringeworthy affairs, but Edge Conf is one of the few events that does panels well: they’re run on a tight schedule and put together with lots of work in advance. At this year’s Edge, the morning was taken up with these tightly-run panels as usual, but the afternoon consisted of more Barcamp-like breakout sessions.

I’ve got to be honest: I don’t think the new format worked that well. The breakout sessions didn’t have the true flexibility that you get with an unconference schedule, so there was no opportunity to merge similarly-themed sessions. There was, for example, a session on components at the same time as a session on accessibility in web components.

That highlights the other issue: FOMO. I’m really not a fan of multi-track events; there were so many sessions that sounded really interesting, but I couldn’t clone myself and go to all of them at once.

But, like I said, the first half of the day was taken up with four sequential (rather than parallel) panels and they were all excellent. All of the moderators did a fantastic job, and I was fortunate enough to sit in on the progressive enhancement panel expertly moderated by Lyza.

The event is called Edge for a reason. There is a rarefied atmosphere—and not just because of the broken-down air conditioning. This is a room full of developers on the cutting edge of web development technologies. Being at Edge Conf means being in a bubble. And being in a bubble is absolutely fine as long as you’re aware you’re in a bubble. It would be problematic if anyone were to mistake the audience and the discussions at Edge as being in any way representative of typical working web devs.

One of the most insightful comments of the day came from Christian who said, “Yes, but this is Edge Conf.” You’re going to need some context for that quote, so here it is…

On the web components panel that Christian was moderating, Alex was making a point about the ubiquity of tools—“Tooling will save you”, he said—and he asked for a show of hands from the audience on who was not using some particular tooling technology: transpilers, package managers, build tools; I can’t remember the specific question. Nobody put their hand up. “See?” asked Alex. “Yes”, said Christian, “but this is Edge Conf.”

Now, while I wasn’t keen on the format of the afternoon with its multiple simultaneous breakout sessions, that doesn’t mean I didn’t enjoy the ones I plumped for. Quite the opposite. The last breakout session of the day, again expertly moderated by Lyza, was particularly great.

The discussion was all about progressive enhancement. There seemed to be a general consensus that we’re all 100% committed to the results of progressive enhancement—greater availability, wider reach, and better performance—but that the term itself is widely misunderstood as “making all of your functionality work even with JavaScript switched off”. This misunderstanding couldn’t be further from the truth:

  1. It’s not about making all of your functionality available; it’s about making your core functionality available: everything else can be considered an enhancement, and it’s perfectly fine if not everyone gets that enhancement.
  2. This isn’t about switching JavaScript off; it’s about any particular technology not being available for reasons we can’t foresee (network issues, browser issues, whatever it may be).

And yet the misunderstanding persists. For that reason, most of the people in the discussion at Edge Conf were in favour of simply dropping the term progressive enhancement and instead focusing on terms like availability and access. Tim writes:

I’m not sure what we call it now. Maybe we do need another term to get people to move away from the “progressive enhancement = working without JS” baggage that distracts from the real goal.

And Stuart writes:

So I’m not going to be talking about progressive enhancement any more. I’m going to be talking about availability. About reach. About my web apps being for everyone even when the universe tries to get in the way.

But Jason writes:

I completely disagree that we should change nomenclature because there exists some small segment of Web designers unwilling to expand their development toolbox. I think progressive enhancement—the term—remains useful, descriptive, and appropriate.

I’m torn. On the one hand, I agree with Jason. The term “progressive enhancement” is a great descriptor. But on the other hand, I don’t want to end up like that guy who’s made it his life’s work to change every instance of the phrase “comprised of” to “comprises” (or “consists of”) on Wikipedia. Technically, he’s correct. But it doesn’t sound like a fun way to spend your days.

I guess my worry is, if I write an article or give a presentation, and I title it something to do with progressive enhancement, am I going to alienate and put off the very audience I’m trying to reach? But if I title it something else, am I tricking people?

Words are hard.

Monday, June 29th, 2015

100 words 099

This is the penultimate post in my 100 days project.

I’ve had quite a few people tell me how much they’re enjoying reading my hundred word posts. I thank them. Then I check: “You know they’re exactly 100 words long, right?”

“Really?” they respond. “I didn’t realise!”

“But that’s the whole point!” I say. The clue is in the name. It’s not around 100 words—it’s exactly 100 words every day for 100 days.

That’s the real challenge: not just the writing, but the editing, rearranging, and condensing.

After all, it’s not as if I can just stop in the

Sunday, June 28th, 2015

100 words 098

When I’m grilling outside, I cook on a gas barbecue. There are quite a few people who would take issue with this. Charcoal is clearly better, they claim. And they’re right. But the thing is, I can fire up my gas barbecue quickly and just get down to cooking.

When I’m programming on the server, I code in PHP. There are quite a few people who would take issue with this. Any other language is clearly better, they claim. And they’re right. But the thing is, I can fire up my text editor quickly and just get down to coding.

Saturday, June 27th, 2015

100 words 097

It’s the weekend …and I got up at the crack of dawn to head to London. Yes, on this beautiful sunny day, I elected to take the commuter train up to the big city to spend the day trapped inside a building where the air conditioning crapped out. Sweaty!

But it was worth it. I was at the Edge conference, which is always an intense dose of condensed nerdery. This year I participated in one of the panels: a discussion on progressive enhancement expertly moderated by Lyza. She also led a break-out session on the same topic later on.