Tags: proposal

Saturday, September 7th, 2019

samuelgoto/sms-receiver: phone number verification

An interesting proposal to allow websites to detect certain SMS messages. The UX implications are fascinating.

Monday, August 26th, 2019

Opening up the AMP cache

I have a proposal that I think might alleviate some of the animosity around Google AMP. You can jump straight to the proposal or get some of the back story first…

The AMP format

Google AMP is exactly the kind of framework I’d like to get behind. Unlike most front-end frameworks, its components take a declarative approach—no knowledge of JavaScript required. I think Lea’s excellent Mavo is the only other major framework that takes this inclusive approach. All the configuration happens in markup, and all the styling happens in CSS. Excellent!

But I cannot get behind AMP.

Instead of competing on its own merits, AMP is unfairly propped up by the search engine of its parent company, Google. That makes it very hard to evaluate whether AMP is being used on its own merits. Instead, the evidence suggests that most publishers of AMP pages are doing so because they feel they have to, rather than because they want to. That’s a real shame, because as a library of web components, AMP seems pretty good. But there’s just no way to evaluate AMP-the-format without taking into account AMP-the-ecosystem.

The AMP ecosystem

Google AMP ostensibly exists to make the web faster. Initially the focus was specifically on mobile performance, but that distinction has since fallen by the wayside. The idea is that by using AMP’s web components, your pages will be speedy. Though, as Andy Davies points out, this isn’t always the case:

This is where I get confused… https://independent.co.uk only have an AMP site yet its performance is awful from a user perspective - isn’t AMP supposed to prevent this?

See also: Google AMP lowered our page speed, and there’s no choice but to use it:

According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95.

Publishers who already have fast web pages—like The Guardian—are still compelled to make AMP versions of their stories because of the search benefits reserved for AMP. As Terence Eden reported from a meeting of the AMP advisory committee:

We heard, several times, that publishers don’t like AMP. They feel forced to use it because otherwise they don’t get into Google’s news carousel — right at the top of the search results.

Some people felt aggrieved that all the hard work they’d done to speed up their sites was for nothing.

The Google AMP team are at pains to point out that AMP is not a ranking factor in search. That’s true. But it is unfairly privileged in other ways. Only AMP pages can appear in the Top Stories carousel …which appears above any other search results. As I’ve said before:

Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

From A letter about Google AMP:

Content that “opts in” to AMP and the associated hosting within Google’s domain is granted preferential search promotion, including (for news articles) a position above all other results.

That’s not the only way that AMP pages get preferential treatment. It turns out that the secret to the speed of AMP pages isn’t the web components. It’s the prerendering.

The AMP cache

If you’ve ever seen an AMP page in a list of search results, you’ll have noticed the little lightning icon. If you’ve ever tapped on that search result, you’ll have noticed that the page loads blazingly fast!

That’s not down to AMP-the-format, alas. That’s down to the fact that the page has been prerendered by Google before you even went to it. If any page were prerendered that way, it would load blazingly fast. But currently, this privilege is reserved for AMP pages only.

If, after tapping through to that AMP page, you looked at the address bar of your browser, you might have noticed something odd. Even though you might have thought you were visiting The Washington Post, or The New York Times, the URL of the (blazingly fast) page you’re looking at is still under Google’s domain. That’s because Google hosts any AMP pages that it prerenders.

Google calls this “the AMP cache”, but it would be better described as “AMP hosting”. The web page sent down the wire is hosted on Google’s domain.

Here’s that AMP letter again:

When a user navigates from Google to a piece of content Google has recommended, they are, unwittingly, remaining within Google’s ecosystem.

Through gritted teeth, I will refer to this as “the AMP cache”, because that’s what everyone else calls it. But make no mistake, Google is hosting—not caching—these pages.

But why host the pages on a Google domain? Why not prerender the original URLs?

Prerendering and privacy

Scott summed up the situation with AMP nicely:

The pitch I think site owners are hearing is: let us host your pages on our domain and we’ll promote them in search results AND preload them so they feel “instant.” To opt-in, build pages using this component syntax.

But perhaps we could decouple the AMP format from the AMP cache.

That’s what Terence suggests:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

The AMP letter, too:

Instead of granting premium placement in search results only to AMP, provide the same perks to all pages that meet an objective, neutral performance criterion such as Speed Index.

Scott reiterates:

It’s been said before but it would be so good for the web if pages with a Lighthouse score over say, 90 could get into that top search result area, even if they’re not built using Google’s AMP framework. Feels wrong to have to rebuild/reproduce an already-fast site just for SEO.

This was also what I was calling for. But then Malte pointed out something that stumped me. Privacy.

Here’s the problem…

Let’s say Google do indeed prerender already-fast pages when they’re listed in search results. You, a search user, type something into Google. A list of results comes back. Google begins prerendering some of them. But you don’t end up clicking through to those pages. Nonetheless, the servers those pages are hosted on have received a GET request coming from a Google search. Those publishers now know that a particular (cookied?) user could have clicked through to their site. That’s very different from knowing when someone has actually arrived at a particular site.
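To make that leak concrete, here’s a rough sketch of what prerendering a result would have meant at the time, using the prerender resource hint (the publisher URL is hypothetical):

<link rel="prerender" href="https://publisher.example/fast-article/">

The moment the browser acts on that hint, publisher.example receives a request (cookies and all) for a visit that may never happen.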

And that’s why Google host all the AMP pages that they prerender. Given the privacy implications of prerendering non-Google URLs, I must admit that I see their point.

Still, it’s a real shame to miss out on the speed benefit of prerendering:

Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit.

A modest proposal

Why is Google’s AMP cache just for AMP pages? (Y’know, apart from the obvious answer that it’s in the name.)

What if Google were allowed to host non-AMP pages? Google search could then prerender those pages just like it currently does for AMP pages. There would be no privacy leaks; everything would happen on the same domain—google.com or ampproject.org or whatever—just as currently happens with AMP pages.

Don’t get me wrong: I’m not suggesting that Google should make a 1:1 model of the web just to prerender search results. I think that the implementation would need to have two important requirements:

  1. Hosting needs to be opt-in.
  2. Only fast pages should be prerendered.

Opting in

Currently, by publishing a page using the AMP format, publishers give implicit approval to Google to host that page on Google’s servers and serve up this Google-hosted version from search results. This has always struck me as being legally iffy. I’ve looked in the AMP documentation to try to find any explicit granting of hosting permission (e.g. “By linking to this JavaScript file, you hereby give Google the right to serve up our copies of your content.”), but no luck. So even with the current situation, I think a clear opt-in for hosting would be beneficial.

This could be a meta element. Maybe something like:

<meta name="caches-allowed" content="google">

This would have the nice benefit of allowing comma-separated values:

<meta name="caches-allowed" content="google, yandex">

(The name is just a strawman, by the way—I’m not suggesting that this is what the final implementation would actually look like.)

If not a meta element, then perhaps this could be part of robots.txt? Although my feeling is that this needs to happen on a document-by-document basis rather than site-wide.
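For illustration only, a site-wide version might look something like this, reusing the same strawman directive name as above:

# Hypothetical directive, not part of any robots.txt standard.
User-agent: *
Caches-allowed: google, yandex

A rule like that would be easy to deploy, but it couldn’t distinguish a fast page from a slow one on the same site.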

Many people will, quite rightly, never want Google—or anyone else—to host and serve up their content. That’s why it’s so important that this behaviour is opt-in. It’s kind of appalling that the current hosting of AMP pages is opt-in-by-proxy-sort-of.

Criteria for prerendering

Which pages should be blessed with hosting and prerendering? The fast ones. That’s sorta the whole point of AMP. But right now, there’s a lot of resentment from people with already-fast websites who quite rightly feel they shouldn’t have to use the AMP format to benefit from the AMP ecosystem.

Page speed is already a ranking factor. It doesn’t seem like too much of a stretch to extend its benefits to hosting and prerendering. As mentioned above, there are already a few possible metrics to use:

  • Speed Index
  • Lighthouse
  • WebPageTest

Ah, but what if a page has a good score when it’s indexed, but then gets worse afterwards? Not a problem! The version of the page that’s measured is the same version of the page that gets hosted and prerendered. Google can confidently say “This page is fast!” After all, they’re the ones serving up the page.

That does raise the question of how often Google should check back with the original URL to see if it has changed/worsened/improved. The answer to that question is however long it currently takes to check back in on AMP pages:

Each time a user accesses AMP content from the cache, the content is automatically updated, and the updated version is served to the next user once the content has been cached.
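That’s essentially the stale-while-revalidate pattern: serve the copy you’ve got, and fetch a fresh one in the background for whoever comes next. Here’s a minimal sketch of that pattern in TypeScript. To be clear, this illustrates the behaviour described in the quote above; it is not Google’s actual implementation:

const store = new Map<string, string>();

async function servePrerenderedPage(url: string): Promise<string> {
  const cached = store.get(url);
  // Kick off a refresh so that the *next* user gets the updated version…
  const refresh = fetch(url)
    .then((response) => response.text())
    .then((html) => { store.set(url, html); });
  if (cached !== undefined) {
    // …but serve the currently cached version to *this* user straight away.
    return cached;
  }
  // First access: nothing cached yet, so wait for the fetch to finish.
  await refresh;
  return store.get(url)!;
}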

Issues

This proposal does not solve the problem with the address bar. You’d still find yourself looking at a page from The Washington Post or The New York Times (or adactio.com) but seeing a completely different URL in your browser. That’s not good, for all the reasons outlined in the AMP letter.

In fact, this proposal could potentially make the situation worse. It would allow even more sites to be impersonated by Google’s URLs. Where currently only AMP pages are bad actors in terms of URL confusion, opening up the AMP cache would allow equal opportunity URL confusion.

What I’m suggesting is definitely not a long-term solution. The long-term solutions currently being investigated are technically tricky and will take quite a while to come to fruition—web packages and signed exchanges. In the meantime, what I’m proposing is a stopgap solution that’s technically a lot simpler. But it won’t solve all the problems with AMP.

This proposal solves one problem—AMP pages being unfairly privileged in search results—but does nothing to solve the other, perhaps more serious problem: the erosion of site identity.

Measuring

Currently, Google can assess whether a page should be hosted and prerendered by checking to see if it’s a valid AMP page. That test would need to be widened to include a different measurement of performance, but those measurements already exist.
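As a sketch, the widened test needn’t be any more complicated than this. Here, isValidAmp and performanceScore are hypothetical stand-ins for AMP validation and a Lighthouse-style audit, and the threshold of 90 echoes the Lighthouse score Scott suggested:

type Page = { url: string; html: string };

// Hypothetical helpers: stand-ins for AMP validation and a performance audit.
declare function isValidAmp(page: Page): boolean;
declare function performanceScore(page: Page): number; // 0–100

function qualifiesForHostingAndPrerendering(page: Page): boolean {
  // Fast non-AMP pages qualify on the same terms as valid AMP pages.
  return isValidAmp(page) || performanceScore(page) >= 90;
}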

I can see how this assessment might not be as quick as checking for AMP validity. That might affect whether non-AMP pages could be measured quickly enough to end up in the Top Stories carousel, which is, by its nature, time-sensitive. But search results are not necessarily as time-sensitive. Let’s start there.

Assets

Currently, AMP pages can be prerendered without fetching anything other than the markup of the AMP page itself. All the CSS is inline. There are no initial requests for other kinds of content like images. That’s because there are no img elements on the page: authors must use amp-img instead. The image itself isn’t loaded until the user is on the page.
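For example, where a regular page would use an img element, an AMP page uses the amp-img component, declaring explicit dimensions so that space can be reserved before any image data is fetched:

<amp-img src="/images/photo.jpg" width="600" height="400" layout="responsive" alt="A photo"></amp-img>

The component, not the browser’s preload scanner, decides when (and whether) to actually request /images/photo.jpg.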

If the AMP cache were to be opened up to non-AMP pages, then any content required for prerendering would also need to be hosted on that same domain. Otherwise, there’s privacy leakage.

This definitely introduces an extra level of complexity. Paths to assets within the markup might need to be rewritten to point to the Google-hosted equivalents. There would almost certainly need to be a limit on the number of assets allowed. Though, for performance, that’s no bad thing.
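Here’s a sketch of the kind of rewriting involved; the cache origin is purely illustrative:

function rewriteAssetUrl(assetUrl: string, cacheOrigin: string): string {
  // e.g. https://example.com/images/photo.jpg
  //  becomes https://cache.example.net/example.com/images/photo.jpg
  const original = new URL(assetUrl);
  return `${cacheOrigin}/${original.host}${original.pathname}${original.search}`;
}

That covers static references in the markup; scripts that make requests of their own are a much harder problem.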

Make no mistake, figuring out what to do about assets—style sheets, scripts, and images—is very challenging indeed. Luckily, there are very smart people on the Google AMP team. If that brainpower were to focus on this problem, I am confident they could solve it.

Summary

  1. Prerendering of non-Google URLs is problematic for privacy reasons, so Google needs to be able to host pages in order to prerender them.
  2. Currently, that’s only done for pages using the AMP format.
  3. The AMP cache—and with it, prerendering—should be decoupled from the AMP format, and opened up to other fast web pages.

There will be technical challenges, but hopefully nothing insurmountable.

I honestly can’t see what Google have to lose here. If their goal is genuinely to reward fast pages, then opening up their AMP cache to fast non-AMP pages will actively encourage people to make fast web pages (without having to switch over to the AMP format).

I’ve deliberately kept the details vague—what the opt-in should look like; what the speed measurement should be; how to handle assets—I’m sure smarter folks than me can figure that stuff out.

I would really like to know what other people think about this proposal. Obviously, I’d love to hear from members of the Google AMP team. But I’d also love to hear from publishers. And I’d very much like to know what people in the web performance community think about this. (Write a blog post and send me a webmention.)

What am I missing here? What haven’t I thought of? What are the potential pitfalls (and are they any worse than the current acrimonious situation with Google AMP)?

I would really love it if someone with a fast website were in a position to say, “Hey Google, I’m giving you permission to host this page so that it can be prerendered.”

I would really love it if someone with a slow website could say, “Oh, shit! We’d better make our existing website faster or Google won’t host our pages for prerendering.”

And I would dearly love to finally be able to embrace AMP-the-format with a clear conscience. But as long as prerendering is joined at the hip to the AMP format, the injustice of the situation only harms the AMP project.

Google, open up the AMP cache.

Friday, February 1st, 2019

Limiting JavaScript? - TimKadlec.com

Following on from that proposal for a browser feature that I linked to yesterday, Tim thinks through all the permutations and possibilities of user agents allowing users to throttle resources:

If a limit does get enforced (it’s important to remember this is still a big if right now), as long as it’s handled with care I can see it being an excellent thing for the web that prioritizes users, while still giving developers the ability to take control of the situation themselves.

Thursday, January 31st, 2019

194028 – Add limits to the amount of JavaScript that can be loaded by a website

Now this is a feature request I can get behind!

A user must provide permission to enable geolocation, or notifications, or camera access, so why not also require permission for megabytes of JavaScript that will block the main thread?

Without limits, there is no incentive for a JavaScript developer to keep their codebase small and dependencies minimal. It’s easy to add another framework, and that framework adds another framework, and the next thing you know you’re loading tens of megabytes of data just to display a couple hundred kilobytes of content.

I’m serious about this. It’s an excellent proposal for WebKit, similar to the never-slow mode proposed by Alex for Chromium.

Monday, January 7th, 2019

A declarative router for service workers - JakeArchibald.com

An interesting proposal from Jake on a different way of defining how service worker fetch events could be handled under various conditions. For now, I have no particular opinion on it. I’m going to let this stew in my mind for a while.

Friday, November 2nd, 2018

The International Flag of Planet Earth

A proposed flag for the planet.

Wednesday, July 25th, 2018

Badging API Explainer

Here’s an intriguing proposal that would allow web apps to indicate activity in an icon (like an unread count) in the same way that native apps can.

This is an interesting one because, in this case, it’s not just browsers that would have to implement it, but operating systems as well.

Saturday, June 16th, 2018

On Rejection | Zeldman on Web & Interaction Design

The focus of the A Book Apart series is what makes it great …and that means having to reject some proposals that don’t fit. Even though I’ve had the honour of being a twice-published A Book Apart author, I also have the honour of receiving a rejection, which Jeffrey mentions here:

In one case we even had to say no to a beautifully written, fully finished book.

That was Resilient Web Design.

So why did we turn down books we knew would sell? Because, again—they weren’t quite right for us.

It was the right decision. And this is the right advice:

If you’ve sent us a proposal that ultimately wasn’t for us, don’t be afraid to try again if you write something new—and most importantly, believe in yourself and keep writing.

Thursday, February 1st, 2018

Global Diversity CFP Day—Brighton edition

There are enough middle-aged straight white men like me speaking at conferences. That’s why the Global Diversity Call-For-Proposals Day is happening this Saturday, February 3rd.

The purpose is two-fold. One is to encourage a diverse range of people to submit talk proposals to conferences. The other is to help with the specifics—coming up with ideas, writing a good title and abstract, preparing the presentation, and all that.

Julie is organising the Brighton edition. Clearleft are providing the venue—68 Middle Street. I’ll be on hand to facilitate. Rosa and Dot will be doing the real work, mentoring the attendees.

If you’ve ever thought about submitting a talk proposal to a conference but just don’t know where to start, or if you’re just interested in the idea, please do come along on Saturday. It starts at 11am and will be all wrapped up by 3pm.

See you there!

Monday, September 25th, 2017

jakearchibald/navigation-transitions

I honestly think if browsers implemented this, 80% of client-rendered Single Page Apps could be done as regular good ol’-fashioned websites.

Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.

Saturday, September 23rd, 2017

[selectors] Functional pseudo-class like :matches() with 0 specificity · Issue #1170 · w3c/csswg-drafts

A really interesting proposal from Lea that would allow CSS authors to make full use of selectors but without increasing specificity. Great thoughts in the comments too.

Thursday, August 24th, 2017

Mapping in HTML – a proposal for a new element – Terence Eden’s Blog

I quite like this proposal for a geo element in HTML, especially that it has a fallback built in (like video). I’m guessing the next step is to file an issue and create a web component to demonstrate how this could work.

That brings up another question: what do you name a custom element that you’d like to eventually become part of the spec? You can’t simply name it geo because you have to include a hyphen. geo-polyfill? geo-proposal? or polyfill-geo? proposal-geo?

Monday, April 3rd, 2017

Cascading HTML Style Sheets — A Proposal

It’s fascinating to look back at this early proposal for CSS from 1994 and see what the syntax might have been:

A one-statement style sheet that sets the font size of the h1 element:

h1.font.size = 24pt 100%

The percentage at the end of the line indicates what degree of influence that is requested (here 100%).

Tuesday, October 4th, 2016

The Proposal | gablaxian.com

This is how Graham proposed to Linzi …and you can try it out for yourself.

(I must’ve got something in my eye when I got to the end of the level. Weird.)

Wednesday, September 21st, 2016

Proposal to CSSWG, Sept 2016 // Speaker Deck

Jen has some ideas for a new CSS Region spec to turbo-boost Grid. I’m still trying to wrap my head around it, but in the meantime, if you have feedback on this, please let her know.

Saturday, August 6th, 2016

Service worker meeting notes - JakeArchibald.com

Jake has written up the notes from the most recent gathering to discuss service workers. If you have any feedback on any of the proposed changes or additions to the spec, please add them. This proposal is the biggie:

We’re considering allowing the browser to run multiple concurrent instances of a service worker.

Sunday, September 27th, 2015

WorldWideWeb: Proposal for a HyperText Project

Sometimes it’s nice to step back and look at where all this came from. Here’s Tim Berners-Lee’s proposal from 1990.

The current incompatibilities of the platforms and tools make it impossible to access existing information through a common interface, leading to waste of time, frustration and obsolete answers to simple data lookup. There is a potential large benefit from the integration of a variety of systems in a way which allows a user to follow links pointing from one piece of information to another one.

Tuesday, July 16th, 2013

Installable Webapps: Extend the Sandbox by Boris Smus

This is a great proposal: well-researched and explained, it tackles the tricky subject of balancing security and access to native APIs.

Far too many ideas around installable websites focus on imitating native behaviour in a cargo-cult kind of way, whereas this acknowledges addressability (with URLs) as a killer feature of the web …a beautiful baby that we definitely don’t want to throw out with the bathwater.

Tuesday, May 8th, 2012

Proposition to change the prefixing policy from Florian Rivoal on 2012-05-04 (www-style@w3.org from May 2012)

This seems like a sensible way for browsers to approach implementing vendor-prefixed CSS properties.

Wednesday, August 4th, 2010

HTML Resource Packages

An interesting performance proposal from Mozilla that will degrade nicely in legacy browsers.