Ends and means

The latest edition of the excellent History Of The Web newsletter is called The Day(s) The Web Fought Back. It recounts the first time that websites stood up against bad legislation in the form of the Communications Decency Act (CDA), and goes on to describe the even more effective use of blackout protests against SOPA and PIPA.

I remember feeling very heartened to see Wikipedia, Google and others take a stand on January 18th, 2012. But I also remember feeling uneasy. In this particular case, companies were lobbying for a cause I agreed with. But what if they were lobbying for a cause I didn’t agree with? Large corporations using their power to influence politics seems like a very bad idea. Isn’t it still a bad idea, even if I happen to agree with the cause?

Cloudflare quite rightly kicked The Daily Stormer off their roster of customers. Then the CEO of Cloudflare quite rightly wrote this in a company-wide memo:

Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

There’s an uncomfortable tension here. When do the ends justify the means? Isn’t the whole point of having principles that they hold true even in the direst circumstances? Why even claim that corporations shouldn’t influence politics if you’re going to make an exception for net neutrality? Why even claim that free speech is sacrosanct if you make an exception for nazi scum?

Those two examples are pretty extreme and I can easily justify the exceptions to myself. Net neutrality is too important. Stopping fascism is too important. But where do I draw the line? At what point does something become “too important?”

There are more subtle examples of corporations wielding their power. Google are constantly using their monopoly position in search and browser marketshare to exert influence over website-builders. In theory, that’s bad. But in practice, I find myself agreeing with specific instances. Prioritising mobile-friendly sites? Sounds good to me. Penalising intrusive ads? Again, that seems okey-dokey to me. But surely that’s not the point. So what if I happen to agree with the ends being pursued? The fact that a company the size and power of Google is using their monopoly for any influence is worrying, regardless of whether I agree with the specific instances. But I kept my mouth shut.

Now I see Google abusing their monopoly again, this time with AMP. They may call the preferential treatment of Google-hosted AMP-formatted pages a “carrot”, but let’s be honest, it’s an abuse of power, plain and simple.

By the way, I have no doubt that the engineers working on AMP have the best of intentions. We are all pursuing the same ends. We all want a faster web. But we disagree on the means. If Google search results gave preferential treatment to any fast web pages, that would be fine. But by only giving preferential treatment to pages written in a format that they created, and hosted on their own servers, they are effectively forcing everyone to use AMP. I know for a fact that there are plenty of publications who are producing AMP content, not because they are sold on the benefits of the technology, but because they feel strong-armed into doing it in order to compete.

If the ends justify the means, then it’s easy to write off Google’s abuse of power. Those well-intentioned AMP engineers honestly think that they have the best interests of the web at heart:

We were worried about the web not existing anymore due to native apps and walled gardens killing it off. We wanted to make the web competitive. We saw a sense of urgency and thus we decided to build on the extensible web to build AMP instead of waiting for standard and browsers and websites to catch up. I stand behind this process. I’m a practical guy.

There’s real hubris and audacity in thinking that one company should be able to tackle fixing the whole web. I think the AMP team are genuinely upset and hurt that people aren’t cheering them on. Perhaps they will dismiss the criticisms as outpourings of “Why wasn’t I consulted?” But that would be a mistake. The many thoughtful people who are extremely critical of AMP are on the same side as the AMP team when it comes to the end-goal of better, faster websites. But burning the web to save it? No thanks.

Ben Thompson goes into more detail on the tension between the ends and the means in The Aggregator Paradox:

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sounds pretty appealing!

From that perspective, there’s a moral argument to be made for wielding monopoly power like Google is doing. No doubt the AMP team feel it would be morally wrong for Google not to use its influence in search to give preferential treatment to AMP pages.

Going back to the opening examples of online blackouts, was it morally wrong for companies to use their power to influence politics? Or would it have been morally wrong for them not to have used their influence?

When do the ends justify the means?

Here’s a more subtle example than Google AMP, but one which has me just as worried for the future of the web. Mozilla announced that any new web features they add to their browser will require HTTPS.

The end-goal here is one I agree with: HTTPS everywhere. On the face of it, the means of reaching that goal seem reasonable. After all, we already require HTTPS for sensitive JavaScript APIs like geolocation or service workers. But the devil is in the details:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

Emphasis mine.

This is a step too far. Again, I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement. In some ways, I think it might actually be worse.
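
To make concrete what’s being proposed: a new CSS feature restricted to secure contexts would behave, on an HTTP page, exactly like an unsupported feature: parsed and silently dropped. Here’s a minimal sketch of the defensive layering authors would be forced into, using the real display: grid as a stand-in for some future HTTPS-gated property:

    .gallery {
        display: block; /* baseline that works everywhere */
    }

    /* Only true where the browser actually exposes the feature,
       which, under this policy, would exclude plain HTTP pages. */
    @supports (display: grid) {
        .gallery {
            display: grid;
        }
    }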

There’s an assumption in this decision that websites are being made by professionals who will know how to switch to HTTPS. But the web is for everyone. Not just for everyone to use. It’s for everyone to build.

One of my greatest fears for the web is that building it becomes the domain of a professional priesthood. Anything that raises the bar to writing some HTML or CSS makes me very worried. Usually it’s toolchains that make things more complex, but in this case the barrier to entry is being brought right into the browser itself.

I’m trying to imagine future Codebar evenings, helping people to make their first websites, but now having to tell them that some CSS will be off-limits until they meet the entry requirements of HTTPS …even though CSS and HTTPS have literally nothing to do with one another. (And yes, there will be an exception for localhost and I really hope there’ll be an exception for file: as well, but that’s simply postponing the disappointment.)

No doubt Mozilla (and the W3C Technical Architecture Group) believe that they are doing the right thing. Perhaps they think it would be morally wrong if browsers didn’t enforce HTTPS even for unrelated features like new CSS properties. They believe that, in this particular case, the ends justify the means.

I strongly disagree. If you also disagree, I encourage you to make your voice heard. Remember, this isn’t about whether you think that we should all switch to HTTPS—we’re all in agreement on that. This is about whether it’s okay to create collateral damage by deliberately denying people access to web features in order to further a completely separate agenda.

This isn’t about you or me. This is about all those people who could potentially become makers of the web. We should be welcoming them, not creating barriers for them to overcome.

Singapore

I was in Singapore last week. It was most relaxing. Sure, it’s Disneyland With The Death Penalty but the food is wonderful.

[Photos: chicken rice, fishball noodles, laksa, grilled pork.]

But I wasn’t just there to sample the delights of the hawker centres. I had been invited by Mozilla to join them on the opening leg of their Developer Roadshow. We assembled in the PayPal offices one evening for a rapid-fire round of talks on emerging technologies.

We got an introduction to Quantum, the new rendering engine in Firefox. It’s looking good. And fast. Oh, and we finally get support for input type="date".
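
If you want to use it today without leaving older browsers behind, the usual feature-detection dance still applies. A minimal sketch (initFallbackDatePicker is a hypothetical helper, not anything Mozilla ships):

    // Browsers without native support for type="date" silently
    // fall back to type="text", so we can detect support like this.
    var testInput = document.createElement('input');
    testInput.setAttribute('type', 'date');

    if (testInput.type !== 'date') {
        // No native date picker: wire up a script-based one instead.
        initFallbackDatePicker(); // hypothetical fallback
    }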

But this wasn’t a product pitch. Most of the talks were by non-Mozillians working on the cutting edge of technologies. I kicked things off with a slimmed-down version of my talk on evaluating technology. Then we heard from experts in everything from CSS to VR.

The highlight for me was meeting Hui Jing and watching her presentation on CSS layout. It was fantastic! Entertaining and informative, it was presented with gusto. I think it got everyone in the room very excited about CSS Grid.

The Singapore stop was the only one I was able to make, but Hui Jing has been chronicling the whole trip. Sounds like quite a whirlwind tour. I’m so glad I was able to join in even for a portion. Thanks to Sandra and Ali for inviting me along—much appreciated.

I’ll also be speaking at Mozilla’s View Source in London in a few weeks, where I’ll be talking about building blocks of the Indie Web:

In these times of centralised services like Facebook, Twitter, and Medium, having your own website is downright disruptive. If you care about the longevity of your online presence, independent publishing is the way to go. But how can you get all the benefits of those third-party services while still owning your own data? By using the building blocks of the Indie Web, that’s how!

‘Twould be lovely to see you there.

The Progressive Web App Dev Summit

I was in Amsterdam again at the start of last week for the Progressive Web App Dev Summit, organised by Google. Most of the talks were given by Google employees, but not all—this wasn’t just a European version of Google I/O. Representatives from Opera, Mozilla, Samsung, and Microsoft were also there, and there were quite a few case studies from independent companies. That was very gratifying to see.

Almost all the talks were related to progressive web apps. I say, “almost all” because there were occasional outliers. There was a talk on web components, which don’t have anything directly to do with progressive web apps (and I hope there won’t be any attempts to suggest otherwise), and another on rendering performance that had good advice for anyone building any kind of website. Most of the talks were about the building blocks of progressive web apps: HTTPS, Service Workers, push notifications, and all that jazz.

I was very pleased to see that there was a move away from the suggestion that single-page apps with the app-shell architecture were the only way of building progressive web apps.

There were lots of great examples of progressively enhancing existing sites into progressive web apps. Jeff Posnick’s talk was a step-by-step walkthrough of doing exactly that. Reading through the agenda, I was really happy to see this message repeated again and again:

In this session we’ll take an online-only site and turn it into a fully network-resilient, offline-first installable progressive web app. We’ll also break out of the app shell and look at approaches that better-suit traditional server-driven sites.

Progressive Web Apps should work everywhere for every user. But what happens when the technologies and APIs are not available in your users’ browsers? In this talk we will show you how you can think about and build sites that work everywhere.

Progressive Web Apps should load fast, work great offline, and progressively enhance to a better experience in modern browsers.

How do you put the “progressive” into your current web app?

You can (and should!) build for the latest and greatest browsers, but through a collection of fallbacks and progressive enhancements you can bring a lot of tomorrow’s web to yesterday’s browsers.

I think this is a really smart move. It’s a lot easier to sell people on incremental changes than it is to convince them to rip everything out and start from scratch (another reason why I’m dubious about any association between web components and progressive web apps—but I’ll save that for another post).

The other angle that I really liked was the emphasis on emerging markets, not just wealthy westerners. Tal Oppenheimer’s talk Building for Billions was superb, and Alex kicked the whole thing off with some great facts and figures on mobile usage.

In my mind, these two threads are very much related. Progressive enhancement allows us to have our progressive web app cake and eat it too: we can make websites that can be accessed on devices with limited storage and slow networks, while at the same time ensuring those same sites take advantage of all the newest features in the latest and greatest browsers. I talked to a lot of Google devs about ways to measure the quality of a progressive web app, and I’m coming to the conclusion that a truly high-quality site is one that can still be accessed by a proxy browser like Opera Mini, while providing a turbo-charged experience in the latest version of Chrome. If you think that sounds naive or unrealistic, then I think you might want to dive deeper into all the technologies that make progressive web apps so powerful—responsive design, Service Workers, a manifest file, HTTPS, push notifications; all of those features can and should be used in a layered fashion.

Speaking of Opera, Andreas kind of stole the show, demoing the latest interface experiments in Opera Mobile.

That ambient badging that Alex was talking about? Opera is doing it. The importance of being able to access URLs that I’ve been ranting about? Opera is doing it.

Then we had the idea to somehow connect it to the “pull-to-refresh” spinner, as a secondary gesture to the left or right.

Nice! I’m looking forward to seeing what the other browsers come up with. It’s genuinely exciting to see all these different browser makers in complete agreement on which standards they want to support, while at the same time differentiating their products by competing on user experience. Microsoft recently announced that progressive web apps will be indexed in their app store just like native apps—a really interesting move.

The Progressive Web App Dev Summit wrapped up with a closing panel, which I had the honour of hosting. I thought it was very brave of Paul to ask me to host this, considering my strident criticism of Google’s missteps.

Initially there were going to be six people on the panel. Then it became eight. Then I blinked and it suddenly became twelve. Less of a panel, more of a jury. Half the panelists were from Google and the other half were from Opera, Microsoft, Mozilla, and Samsung. Some of those representatives were a bit too media-trained for my liking: Ali from Microsoft tried to just give a spiel, and Alex Komoroske from Google wouldn’t give me a straight answer about whether he wants Android Instant apps to succeed—Jake was a bit more honest. I should have channelled my inner Paxman a bit more.

Needless to say, nobody from Apple was at the event. No surprise there. They’ve already promised to come to the next event. There won’t be an Apple representative on stage, obviously—that would be asking too much, wouldn’t it? But at least it looks like they’re finally making an effort to engage with the wider developer community.

All in all, the Progressive Web App Dev Summit was good fun. I found the event quite inspiring, although the sausage festiness of the attendees was depressing. It would be good if the marketing for these events reached a wider audience—I met a lot of developers who only found out about it a week or two before the event.

I really hope that people will come away with the message that they can get started with progressive web apps right now without having to re-architect their whole site. Right now the barrier to entry is having your site running on HTTPS. Once you’ve got that up and running, it’s pretty much a no-brainer to add a manifest file and a basic Service Worker—to boost performance if nothing else. From there, you’re in a great position to incrementally add more and more features—an offline-first approach with your Service Worker, perhaps? Or maybe start dabbling in push notifications. The great thing about all of these technologies (with the glaring exception of web components in their current state) is that you don’t need to bet the farm on any of them. Try them out. Use them as enhancements. You’ve literally got nothing to lose …and your users have everything to gain.
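
To give a sense of just how low that barrier is, here’s a minimal sketch of the registration and a basic cache-first worker. The file names and the asset list are placeholders, not anything shown at the summit:

    // In your page: feature-detected, so older browsers are unaffected.
    if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('/sw.js');
    }

    // In sw.js: pre-cache a few static assets, then serve them
    // cache-first with a network fallback.
    var CACHE = 'static-v1';

    self.addEventListener('install', function (event) {
        event.waitUntil(
            caches.open(CACHE).then(function (cache) {
                return cache.addAll(['/', '/styles.css', '/scripts.js']);
            })
        );
    });

    self.addEventListener('fetch', function (event) {
        event.respondWith(
            caches.match(event.request).then(function (cached) {
                return cached || fetch(event.request);
            })
        );
    });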

This is for everyone with a certificate

Mozilla—like Google before them—have announced their plans for deprecating HTTP in favour of HTTPS. I’m all in favour of moving to HTTPS. I’ve done it myself here on adactio.com, on thesession.org, and on huffduffer.com. I have some concerns about the potential linkrot involved in the move to TLS everywhere—as outlined by Tim Berners-Lee—but still, anything that makes the work of GCHQ and the NSA more difficult is alright by me.

But I have a big, big problem with Mozilla’s plan to “encourage” the move to HTTPS:

Gradually phasing out access to browser features.

Requiring HTTPS for certain browser features makes total sense, given the security implications. Service Workers, for example, are quite correctly only available over HTTPS. Any API that has access to a device sensor—or that could be used for fingerprinting in any way—should only be available over HTTPS. In retrospect, Geolocation should have been HTTPS-only from the beginning.
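
The pattern for these powerful APIs is already a familiar one: feature-detect, then degrade gracefully, because on an insecure origin the API is either absent or will refuse the request. A minimal sketch (showMap and handleError are hypothetical callbacks):

    // Service workers simply don't exist on insecure origins, so
    // plain feature detection covers both support and security.
    if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('/sw.js');
    }

    // Geolocation: detect it, and be ready for the request to be
    // denied, which is what browsers that restrict it to HTTPS do.
    if ('geolocation' in navigator) {
        navigator.geolocation.getCurrentPosition(showMap, handleError);
    }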

But to deny access to APIs where there are no security concerns, where it is merely a stick to beat people with …that’s just wrong.

This is for everyone. Not just those smart enough to figure out how to add HTTPS to their site. And yes, I know, the theory is that it’s going to get easier and easier, but so far the steps towards making HTTPS easier are just vapourware. That makes Mozilla’s plan look like something drafted by underwear gnomes.

The issue here is timing. Let’s make HTTPS easy first. Then we can start to talk about ways of encouraging adoption. Hopefully we can figure out a way that doesn’t require Mozilla or Google as gatekeepers.

Sven Slootweg outlines the problems with Mozilla’s forced SSL. I highly recommend reading Yoav’s post on deprecating HTTP too. Ben Klemens has written about HTTPS: the end of an era …that era being the one in which anyone could make a website without having to ask permission from an app store, a certificate authority, or a browser manufacturer.

On the other hand, Eric Mill wrote We’re Deprecating HTTP And It’s Going To Be Okay. It makes for an extremely infuriating read because it outlines all the ways in which HTTPS is a good thing (all of which I agree with) without once addressing the issue at hand—a browser that deliberately cripples its feature set for political reasons.

South by CSS

South by Southwest has become a vast, sprawling festival with a preponderance of panels pitched at marketers, start-ups and people that use the words “social media” in their job title without irony. But there were also some great design and development talks if you looked for them.

Samantha gave a presentation on style tiles, which I unfortunately missed but I’ll be eagerly awaiting the release of the audio. I also missed some good meaty JavaScript talks but I did manage to make it along to Jen’s excellent introduction to HTML5 APIs.

Andy’s talk on CSS best practices was one of the best presentations I’ve ever seen. He did a fantastic job of tackling some really important topics. It’s a presentation (and a presenter) that deserves a wider audience, so if you’re involved in putting together the line-up for any front-end conferences, I highly recommend that you nab him.

Divya put together an absolutely killer panel called CSS.next, all about how CSS gets specced and shipped, and what’s coming down the line. All of the panelists were smart, articulate, and well-informed. The panel was very enlightening, as well as being thoroughly enjoyable.

And then there was the Browser Wars panel.

This is something of a SXSW tradition. Arun assembles a line-up of representatives from browser makers—Mozilla, Google, Microsoft, and Opera—and then peppers them with some hardball questions. Apple is invited to send a representative every year, and every year, Apple declines.

There was no shortage of contentious topics this year. The subject of Google Dart was raised (“Good luck with that,” said Brendan). There was also plenty of discussion about the recent DRM proposal submitted to the HTML working group. There was a disturbing level of agreement amongst all the panelists that some form of DRM for video was needed because, hey, that’s just the way things go…

As an aside, I must say I found the lack of imagination on display to be pretty disheartening. Two years ago, Chris was on the Browser Wars panel representing Microsoft, defending the EOT format because, hey, that’s just the way things go. Without some form of DRM, he argued, we couldn’t have fonts on the web. Well, the web found a way. Now Chris is representing Google but the argument remains the same. DRM, so the argument goes, is the only way we’ll get video on the web because that’s what the “rights holders” demand. And yet, if you are a photographer, no such special consideration is afforded to you. The img element has no DRM and people are managing just fine, thank you. Video, apparently, is a special case …just like fonts. ahem

Anyway…

The subject of vendor prefixes also came up. Specifically, the looming prospect of non-webkit browsers parsing -webkit prefixed properties was raised.

I saw a pattern amongst all three subjects: the DRM proposal, Dart, and browsers implementing another browser’s vendor prefix. All three proposals were made to address a genuine problem. The proposals all suffer from varying degrees of batshit craziness but they certainly galvanised a lot of discussion.

For example, Brendan said that while Google Dart may not stand a hope in hell of supplanting JavaScript, some of the ideas it contains may well end up influencing the development of ECMAScript.

Similarly, Mozilla’s plan for vendor-prefixing certainly caused all parties to admit the problem: the W3C was moving too slow; Apple should have submitted proprietary properties for standardisation sooner; Mozilla, Microsoft, and Opera should have been innovating faster; and web developers should have been treating vendor-prefixed properties as experimental features, not stable parts of a spec.

So the proposal to do something batshit crazy and implement -webkit-prefixed CSS properties has actually had some very positive effects …but there’s no reason to actually go ahead and do it!

I tried to make this point during the audience participation part of the panel, but it was like banging my head against a brick wall. Chaals kept repeating the problem case, but I wasn’t disputing the problem; I was trying to point out that the proposed solution wouldn’t fix the problem.

It was a classic case of the same kind of thinking we saw in the SOPA proposal:

  1. Something must be done!
  2. Implementing -webkit prefixes is something.
  3. Something has been done.

The problem is that it won’t work. Adding “like Webkit” to the user-agent string will probably have much more of an effect and frankly, I don’t care if any of the browsers do that. At this point, a little bit more pissing into the bloated cesspool of user-agent strings is hardly going to matter. A browser’s user-agent string isn’t an identifier, it’s a reverse-chronological history of the web. Why not update the history booklet to include the current predilection amongst developers for Webkit browsers on mobile?

But implementing -webkit vendor prefixes? Pointless! If a developer is only building and testing their sites for one class of device or browser, simply implementing that browser’s prefixed CSS is just putting a band aid on a decapitation.

So I was kind of hoping that Mozilla would just come right out and say that maybe they wouldn’t actually go ahead and do this but hey, look at all the great discussion it generated (just like Dart, just like the DRM proposal). But sadly, no. Brendan categorically stated that the proposal was not presented in order to foment discussion. And in follow-up tweets, he wrote that he actually expected it to level the mobile browser playing field. That’s an admirably optimistic viewpoint but it’s sadly self-delusional.

And what will happen when implementing -webkit prefixes fails to level the playing field? We’ll be left with deliberately broken browsers.

Once something ships in a browser, it’s very, very hard to ever remove it. During the Dart discussion, Chris talked about the possibility of removing Dart from Chrome if developers don’t take to it. Turning to the Microsoft representative he asked rhetorically, “I mean, do you guys still ship VBScript?”

The answer?

“Yes.”

Prix Fixe

A year and a half ago, Eric wrote a great article in A List Apart called Prefix or Posthack. It’s a balanced look at vendor prefixes in CSS that concludes in their favour:

If the history of web standards has shown us anything, it’s that hacks will be necessary. By front-loading the hacks using vendor prefixes and enshrining them in the standards process, we can actually fix some of the potential problems with the process and possibly accelerate CSS development.

So the next time you find yourself grumbling about declaring the same thing four times, once for each browser, remember that the pain is temporary. It’s a little like a vaccine—the shot hurts now, true, but it’s really not that bad in comparison to the disease it prevents.

Henri disagrees. He wrote a post called Vendor Prefixes Are Hurting the Web:

In practice, vendor prefixes lead to a situation where Web authors have to say the same thing in a different way to each browser. That’s the antithesis of having Web standards. Standards should enable authors to write to a standard and have it work in implementations from multiple vendors.

Daniel Glazman wrote a point-by-point rebuttal to Henri’s post called CSS vendor prefixes, an answer to Henri Sivonen that’s well worth a read. Alex also wrote a counter-argument to Henri’s post called Vendor Prefixes Are A Rousing Success that echoes some of the points Eric made in his ALA article:

The standards process needs to lag implementations, which means that we need spaces for implementations to lead in. CSS vendor prefixes are one of the few shining examples of this working in practice.

Alex’s co-worker Paul disagrees. He recently wrote Vendor Prefixes Are Not Developer-friendly:

  1. Prefixes are not developer-friendly.
  2. Recent features would have been in a much better state without prefixes.
  3. Implementor maneuverability is not hampered without prefixes.

All of this would have remained a fairly academic discussion but for a bombshell dropped by Tantek at a face-to-face meeting of the CSS Working Group in Paris:

At this point we’re trying to figure out which and how many -webkit- prefix properties to actually implement support for in Mozilla.

The superficial issue is that web developers have been implementing -webkit- properties without then adding the non-prefixed standardised version (and without adding the corresponding prefixes of other vendors). The more fundamental problem is that while vendor prefixes were intended to introduce experimental features until those features became standardised, the reality is that the prefixed version ends up being supported in perpetuity. Nobody is happy about this situation but that’s the unfortunate reality.
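
To make the problem concrete, here’s a sketch of the two patterns in play, using transform as the example property (the selector is made up):

    /* What authors were supposed to write: each vendor's experimental
       prefix, with the standard property last so it wins once it ships. */
    .slide {
        -webkit-transform: translateX(2em);
        -moz-transform: translateX(2em);
        -ms-transform: translateX(2em);
        -o-transform: translateX(2em);
        transform: translateX(2em);
    }

    /* What too many authors actually wrote, locking the effect to one
       engine even after the property was standardised. */
    .slide {
        -webkit-transform: translateX(2em);
    }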

Plenty of those unhappy voices made themselves heard.

Once again, Eric sought to bring clarity to the situation in the form of an article on A List Apart, this time publishing an interview with Tantek. Alex also popped up again, writing a post called Misdirection which addresses what he feels are some fundamental assumptions being made in the interview.

Finally, Mozilla engineer Robert O’Callahan—who I chatted with briefly at Kiwi Foo Camp about the vendor prefix situation—wrote about Alternatives To Supporting -webkit Prefixes In Other Engines in which he makes clear that evangelism efforts like Christian’s, while entirely laudable, aren’t a realistic solution to the problem.

It’s all a bit of a mess really, with lots of angry finger-pointing: at Apple, at Mozilla, at web developers, at the W3C…

My own feelings match those of Eric, who wrote:

I’d love to be proven wrong, but I have to assume the vendors will push ahead with this regardless. … I don’t mean to denigrate or undermine any of the efforts I mentioned before—they’re absolutely worth doing even if every non-WebKit browser starts supporting -webkit- properties next week. If nothing else, it will serve as evidence of your commitment to professional craftsmanship.