Oh, I like this! A leaderboard of news sites, ranked by performance.
I’d love to see something like this for just about every sector …including agency websites.
This is a really intriguing book that combines design theory and programming—learn about contrast, colour, and shapes, with each lesson supported by code examples.
It’s still a work in progress but the whole thing is online for free. Yay for web books!
This really resonates with me. Tim Bray duly notes that people are writing on Medium, and being shunted towards native apps, and that content is getting centralised at Facebook and other hubs, and then he declares:
But I don’t care.
Anyhow, I’m not going away.
I’m not going to be around for this, but I wish I could go. If you’re in Brighton, I highly recommend this one-day workshop from Matt. He’s been doing some internal workshops at Clearleft and he’s pretty great.
I’d love to see other publishers take a firm stand against the shoddy ad tech from data brokers slowing down their sites.
We go to our partners and say, ‘This is how fast things need to be executed; if you don’t hit this threshold, we can’t put you on the site.’
(I mean, I’d really like to see publishers take a stand against invasive tracking via ads, but taking a stand on speed is a good start.)
A great presentation from Laura on how tracking scripts are killing the web. We can point our fingers at advertising companies to blame for this, but it’s still developers like us who put those scripts onto websites.
We need to ask ourselves these questions about what we build. Because we are the gatekeepers of what we create. We don’t have to add tracking to everything; it’s already gotten out of our control.
I’m genuinely touched that my little web book could inspire someone like this. I absolutely love reading about what people thought of the book, especially when they post on their own site like this.
This book has inspired me to approach web site building in a new way. By focusing on the core functionality and expanding it based on available features, I’ll ensure the most accessible site I can. Resilient web sites can give a core experience that’s meaningful, but progressively enhance that experience based on technical capabilities.
A jolly nice review of Resilient Web Design.
After just a few pages in, I could see why so many have read Resilient Web Design all in one go. It lives up to all the excellent reviews.
A wide-ranging post from Andrew on the downsides of Google’s AMP solution.
I don’t agree with all the issues he has with the format itself (in my opinion, the fact that AMP pages can’t have script elements is a feature, not a bug), but I wholeheartedly concur with his concerns about the AMP cache:
It recklessly devalues the URL
Spot on! And as Andrew points out, in this age of fake news, devaluing the URL is a recipe for disaster.
It’s hard to avoid the idea that the primary objective of AMP is really about hosting publisher content inside the Google ecosystem (as is more obviously the objective of Facebook Instant Articles and Apple News).
Chapter 3 of Resilient Web Design, republished in Smashing Magazine:
In the world of web design, we tend to become preoccupied with the here and now. In “Resilient Web Design”, Jeremy Keith emphasizes the importance of learning from the past in order to better prepare ourselves for the future. So, perhaps we should stop and think more beyond our present moment? The following is an excerpt from Jeremy’s web book.
Here’s a crazy idea: threaded tweets, but logged together, on a single webpage. A ‘weblog’, if you will. — Paul Lloyd (@paulrobertlloyd) March 21, 2017
Some people have been putting Paul’s crazy idea into practice.
There are more people I could mention …but, to be honest, not that many more. Seems like most people are happy to only publish on Ev’s blog or not at all.
I know not everybody wants to write on the web, and that’s fine. But it makes me sad when people choose not to publish their thoughts because they think no-one will be interested, or that it’s all been said before. I understand where those worries come from, but I believe—no, I know—that they are unfounded.
It’s a world wide web out there. There’s plenty of room for everyone. And I, for one, love reading the words of others.
I got a nice email recently from Colin van Eenige. He wrote:
For my graduation project I’m researching the development of Progressive Web Apps and found your offline book called resilient web design. I was very impressed by the implementation of the website and it really was a nice experience.
I’m very interested in your vision on progressive web apps and what capabilities are waiting for us regarding offline content. Would it be fine if I’d send you some questions?
I said that would be fine, although I couldn’t promise a swift response. He sent me four questions. I finally got ‘round to sending my answers…
Well, given the subject matter, it felt right that the canonical version of the book should be not just online, but made with the building blocks of the web. The other formats are all nice to have, but the HTML version feels (to me) like the “real” book.
Interestingly, it wasn’t too much trouble for people to generate other formats from the HTML (ePub, MOBI, PDF), whereas I think trying to go in the other direction would be trickier.
As for the offline part, that felt like a natural fit. I had already done that with a previous book of mine, HTML5 For Web Designers, which I put online a year or two after its print publication. In that case, I used AppCache for the offline functionality. AppCache is horrible, but this use case might be one of the few where it works well: a static book that’s never going to change. Cache invalidation is one of the worst parts of using AppCache, so by not having any kind of update at all, I dodged that bullet.
But when it came time for Resilient Web Design, a service worker was definitely the right technology. Still, I’ve got AppCache in there as well for the browsers that don’t yet support service workers.
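To give a sense of just how little code that needs: here’s a minimal sketch of a service worker for a never-changing book, precaching everything at install time. The file names are placeholders, not the actual assets of the site.

```
// Precache every page and asset when the service worker installs.
// The book never changes, so there's no cache invalidation to do.
const cacheName = 'book-v1';
const assets = [
  '/',
  '/chapter1/',
  '/styles.css' // …and so on, for every page and asset
];

addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(cacheName).then((cache) => cache.addAll(assets))
  );
});

addEventListener('fetch', (event) => {
  // Serve from the cache first; fall back to the network.
  event.respondWith(
    caches.match(event.request)
      .then((cached) => cached || fetch(event.request))
  );
});
```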
The biggest effect that service workers could have is to change the expectations that people have about using the web, especially on mobile devices. Right now, people associate the web on mobile with long waits and horrible spammy overlays. Service workers can help solve that first part.
If people then start adding sites to their home screen, that will be a great sign that the web is really holding its own. But I don’t think we should get too optimistic about that: for a user, there’s no difference between a prompt on their screen saying “add to home screen” and a prompt on their screen saying “download our app”—they’re equally likely to be dismissed because we’ve trained people to dismiss anything that covers up the content they actually came for.
It’s entirely possible that websites could start taking over much of the functionality that previously was only possible in a native app. But I think that inertia and habit will keep people using native apps for quite some time.
The big exception is in markets where storage space on devices is in short supply. That’s where the decision to install a native app isn’t taken lightly (given the choice between your family photos and an app, most people will reject the app). The web can truly shine here if we build lightweight, performant services.
Even in that situation, I’m still not sure how many people will end up adding those sites to their home screen (it might feel so similar to installing a native app that there may be some residual worry about storage space) but I don’t think that’s too much of a problem: if people get to a site via search or typing, that’s fine.
I worry that the messaging around “progressive web apps” is perhaps over-fetishising the home screen. I don’t think that’s the real battleground. The real battleground is in people’s heads: how they perceive the web and how they perceive native.
After all, if the average number of native apps installed in a month is zero, then that’s not exactly a hard target to match. :-)
For me, progressive web apps don’t feel like a separate thing from making websites. I worry that the marketing of them might inflate expectations or confuse people. I like the idea that they’re simply websites that have taken their vitamins.
So my vision for progressive web apps is the same as my vision for the web: something that people use every day for all sorts of tasks.
I find it really discouraging that progressive web apps are becoming conflated with single page apps and the app shell model. Those architectural decisions have nothing to do with service workers, HTTPS, and manifest files. Yet I keep seeing the concepts used interchangeably. It would be a real shame if people chose not to use these great technologies just because they don’t classify what they’re building as an “app.”
If anything, it’s good ol’ fashioned content sites (newspapers, wikipedia, blogs, and yes, books) that can really benefit from the turbo boost of service worker+HTTPS+manifest.
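For what it’s worth, the manifest is the smallest part of that trio: one short JSON file. Here’s an illustrative sketch—all the values are made up:

```
{
  "name": "Resilient Web Design",
  "short_name": "Resilient",
  "start_url": "/",
  "display": "browser",
  "theme_color": "#ffffff",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icon.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```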
I was at a conference recently where someone was giving a talk encouraging people to build progressive web apps but discouraging people from doing it for their own personal sites. That’s a horrible, elitist attitude. I worry that this attitude is being codified in the term “progressive web app”.
Well, like I said, I think that some people are focusing a bit too much on the home screen and not enough on the benefits that service workers can provide to just about any website.
My biggest learning is that these technologies aren’t for a specific subset of services, but can benefit just about anything that’s on the web. I mean, just using a service worker to explicitly cache static assets like CSS, JS, and some images is a no-brainer for almost any project.
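To illustrate that no-brainer, here’s a sketch of caching static assets the first time they’re fetched—the file-extension test is just one crude way of picking them out, not the only way:

```
// Cache static assets on first use; serve from the cache after that.
addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (/\.(css|js|png|jpg|svg|woff2?)$/.test(url.pathname)) {
    event.respondWith(
      caches.open('static-v1').then(async (cache) => {
        const cached = await cache.match(event.request);
        if (cached) {
          return cached;
        }
        // Not cached yet: fetch it, store a copy, return the original.
        const response = await fetch(event.request);
        cache.put(event.request, response.clone());
        return response;
      })
    );
  }
});
```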
So there you go—I’m very excited about the capabilities of these technologies, but very worried about how they’re being “sold”. I’m particularly nervous that in the rush to emulate native apps, we end up losing the very thing that makes the web so powerful: URLs.
Except AMP isn’t really one technology, is it? And therein lies the confusion. This was at the heart of the panel I was on. When we talk about AMP, we could be talking about one of three things: the AMP format (a collection of web components—instead of using an img element on an AMP page, you use an amp-img element), the AMP rules (all your styles must go in a style element instead of an external file, and there’s a limit on what you can do with those styles), and the AMP cache.
The first piece of AMP—the format—is kind of like a collection of marginal gains. Where the img element might have some performance issues, the amp-img element optimises for perceived performance. But if you just used the AMP web components, it wouldn’t be enough to make your site blazingly fast.
The second part of AMP—the rules—is where the speed gains start to really show. You can’t have an external style sheet, and crucially, you can’t have any third-party scripts other than the AMP script itself. This is key to making AMP pages super fast. It’s not so much about what AMP does; it’s more about what it doesn’t allow. If you never used a single AMP component, but stuck to AMP’s rules disallowing external styles and scripts, you could easily make a page that’s even faster than what AMP can do.
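To make those rules concrete, here’s roughly what an AMP page’s skeleton looks like—the URLs are placeholders, and I’ve left out the required amp-boilerplate snippet for brevity:

```
<!doctype html>
<html ⚡>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <link rel="canonical" href="https://example.com/article/">
  <style amp-custom>
    /* All styles live inline here—no external style sheets allowed. */
    body { font-family: sans-serif; }
  </style>
  <script async src="https://cdn.ampproject.org/v0.js"></script>
</head>
<body>
  <!-- The one permitted script is AMP's own, loaded above. -->
  <amp-img src="photo.jpg" width="600" height="400" layout="responsive"></amp-img>
</body>
</html>
```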
At AMP Conf, Natalia pointed out that The Guardian’s non-AMP pages beat out the AMP pages for performance. So why even have AMP pages? Well, that’s down to the third, most contentious, part of the AMP puzzle.
The AMP cache turns the user experience of visiting an AMP page from fast to instant. While you’re still on the search results page, Google will pre-render an AMP page in the background. Not pre-fetch, pre-render. That’s why it opens so damn fast. It’s also what causes the most confusion for end users.
From my unscientific polling, the behaviour of AMP results confuses the hell out of people. The fact that the page opens instantly isn’t the problem—far from it. It’s the fact that you don’t actually go to another page. Technically, you’re still on Google. An analogous mental model would be an RSS reader, or an email client: you don’t go to an item or an email; you view it in situ.
Well, that mental model would be fine if it were consistent. But in Google search, only some results will behave that way (the AMP pages) and others will behave just like regular links to other websites. No wonder people are confused! Some search results take them away and some search results keep them on Google …even though the page looks like a different website.
The price that we pay for the instantly-opening AMP pages from the Google cache is the URL. Because we’re looking at Google’s pre-rendered copy instead of the original URL, the address bar is not pointing to the site the browser claims to be showing. Everything in the body of the browser looks like an article from The Guardian, but if I look at the URL (which is what security people have been telling us for years is important to avoid being phished), then I’ll see a domain that is not The Guardian’s.
But wait! Couldn’t Google pre-render the page at its original URL?
Yes, they could. But they won’t.
This was a point that Paul kept coming back to: trust. There’s no way that Google can trust that someone else’s URL will play by the AMP rules (no external scripts, only loading embedded content via web components, limited styles, etc.). They can only trust the copies that they themselves are serving up from their cache.
By the way, there was a joint AMP/search panel at AMP Conf with representatives from both teams. As you can imagine, there were many questions for the search team, most of which were Glomar’d. But one thing that the search people said time and again was that Google was not hosting our AMP pages. Now I don’t know if they were trying to make some fine-grained semantic distinction there, but that’s an outright falsehood. If I click on a link, and the URL I get taken to is a Google property, then I am looking at a page hosted by Google. Yes, it might be a copy of a document that started life somewhere else, but if Google are serving something from their cache, they are hosting it.
This is one of the reasons why AMP feels like such a bait’n’switch to me. When it first came along, it felt like a direct competitor to Facebook’s Instant Articles and Apple News. But the big difference, we were told, was that you get to host your own content. That appealed to me much more than having Facebook or Apple host the articles. But now it turns out that Google do host the articles.
This will be the point at which Googlers will say no, no, no, you can totally host your own AMP pages …but you won’t get the benefits of pre-rendering. But without the pre-rendering, what’s the point of even having AMP pages?
Alright, but what about The Guardian? They’ve already got fast pages, but they still have to create separate AMP pages if they want to get the pre-rendering benefits when they show up in Google search results. Sorry, says Google, but it’s the only way we can trust that the pre-rendered page will be truly fast.
So here’s the impasse we’re at. Google have provided a list of best practices for making fast web pages, but the only way they can truly verify that a page is sticking to those best practices is by hosting their own copy, URLs be damned.
This was the crux of Paul’s argument when he was on the Shop Talk Show podcast (it’s a really good episode—I was genuinely reassured to hear that Paul is not gung-ho about drinking the AMP Kool Aid; he has genuine concerns about the potential downsides for the web).
Initially, I accepted this argument that Google just can’t trust the rest of the web. But the more I talked to people at AMP Conf—and I had some really, really good discussions with people away from the stage—the more I began to question it.
Here’s the thing: the regular Google search can’t guarantee that any web page is actually 100% the right result to return for a search. Instead there’s a lot of fuzziness involved: based on the content, the markup, and the number of trusted sources linking to this, it looks like it should be a good result. In other words, Google search trusts websites to—by and large—do the right thing. Sometimes websites abuse that trust and try to game the system with sneaky tricks. Google responds with penalties when that happens.
Why can’t it be the same for AMP pages? Let me host my own AMP pages (maybe even host my own AMP script) and then when the Googlebot crawls those pages—the same as it crawls any other pages—that’s when it can verify that the AMP page is abiding by the rules. If I do something sneaky and trick Google into flagging a page as fast when it actually isn’t, then take my pre-rendering reward away from me.
To be fair, Google has very, very strict rules about what it pre-renders, and how, for the AMP results it’s caching. I can see how allowing even the potential for a false positive would have a negative impact on the user experience of Google search. But c’mon, there are already false positives in regular search results—fake news, spam blogs. Googlers are smart people. They can solve—or at least mitigate—these problems.
Google says it can’t trust our self-hosted AMP pages enough to pre-render them. But they ask for a lot of trust from us. We’re supposed to trust Google to cache and host copies of our pages. We’re supposed to trust Google to provide some mechanism to users to get at the original canonical URL. I’d like to see trust work both ways.
Thinking of writing a book? Here’s some excellent advice and insights from Yaili, who only went and wrote another one.
Let me say this first: writing a book is hard work. It eats up all of your free time and mental space. It makes you feel like you are forever procrastinating and producing very little. It makes you not enjoy any free time. It’s like having a dark cloud hanging over your head at all times. At. All. Times.
It strikes me that Garrett’s site has become a valuable record of the human condition with its mix of two personal stories—one relating to his business and the other relating to his health—both of them communicated clearly through great writing.
Have a read back through the archive and I think you’ll share my admiration.
A nice little use of print (and screen) styles from Bastian—compose letters in a web browser.
Instead of messing around in Word, Pages or even InDesign, you can write your letters in the browser, export them as HTML or PDF (via Apple Preview).
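The underlying trick is nothing more exotic than print and screen media queries—something along these lines (the selectors here are my own invention, not Bastian’s actual code):

```
/* On screen: a comfortable reading width, plus the editing chrome. */
@media screen {
  .letter { max-width: 40em; margin: 0 auto; }
}

/* In print: hide the chrome and let the letter stand alone. */
@media print {
  .toolbar { display: none; }
  .letter { font-size: 12pt; }
  @page { margin: 2cm; }
}
```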
I can relate to what Rachel describes here—I really like using my own website as a playground to try out new technologies. That’s half the fun of the indie web.
I had already decided to bring my content back home in 2017, but I’d also like to think about this idea of using my own site to better demonstrate and play with the new technologies I write about.
So if AMP is useful it’s because it raises the stakes. If we (news developers) don’t figure out faster ways to load our pages for readers, then we’re going to lose a lot of magic.
A number of developers answered questions on the potential effects of Google’s AMP project. This answer resonates a lot with my own feelings:
AMP is basically web performance best practices dressed up as a file format. That’s a very clever solution to what is, at heart, a cultural problem: when management (in one form or another) comes to the CMS team at a news organization and asks to add more junk to the site, saying “we can’t do that because AMP” is a much more powerful argument than trying to explain why a pop-over “Like us on Facebook!” modal is driving our readers to drink.
But the danger is that AMP turns into a long-term “solution” instead of a stop-gap:
So in a sense, the best possible outcome is that AMP is disruptive enough to shake the boardroom into understanding the importance of performance in platform decisions (and making the hard business decisions this demands), but that developers are allowed to implement those decisions in standard HTML instead of adding yet another delivery format to their export pipeline.
The ideal situation looks a lot more like Tim’s proposal: