Taking the indie web to the next level—self-hosting on your own hardware.
Tired of Big Tech monopolies, a community of hobbyists is taking their digital lives off the cloud and onto DIY hardware that they control.
Back in 2014 Vitaly asked me if I’d be the host for Smashing Conference in Freiburg. I jumped at the chance. I thought it would be an easy gig. All of the advantages of speaking at a conference without the troublesome need to actually give a talk.
As it turned out, it was quite a bit of work:
It wasn’t just a matter of introducing each speaker—there was also a little chat with each speaker after their talk, so I had to make sure I was paying close attention to each and every talk, thinking of potential questions and conversation points. After two days of that, I was a bit knackered.
Last month, I hosted another event, but this time it was online: UX Fest. Doing the post-talk interviews was definitely a little weirder online. It’s not quite the same as literally sitting down with someone. But the online nature of the event did provide one big advantage…
To minimise technical hitches on the day, and to ensure that the talks were properly captioned, all the speakers recorded their talks ahead of time. That meant I had an opportunity to get a sneak peek at the talks and prepare questions accordingly.
UX Fest had a day of talks every Thursday in June. There were four talks per Thursday. I started prepping on the Monday.
First of all, I just watched all the talks and let them wash over me. At this point, I’d often think “I’m not sure if I can come up with any questions for this one!” but I’d let the talks sit there in my subconscious for a while. This was also a time to let connections between talks bubble up.
Then on the Tuesday and Wednesday, I went through the talks more methodically, pausing the video every time I thought of a possible question. After a few rounds of this, I inevitably ended up with plenty of questions, some better than others. So I then re-ordered them in descending levels of quality. That way if I didn’t get to the questions at the bottom of the list, it was no great loss.
In theory, I might not get to any of my questions. That’s because attendees could also ask questions on the day via a chat window. I prioritised those questions over my own. Because it’s not about me.
On some days there was a good mix of audience questions and my own pre-prepared questions. On other days it was mostly my own questions.
Either way, it was important that I didn’t treat the interview like a laundry list of questions to get through. It was meant to be a conversation. So the answer to one question might touch on something that I had made a note of further down the list, in which case I’d run with that. Or the conversation might go in a really interesting direction completely unrelated to the questions or indeed the talk.
Above all, these segments needed to be engaging and entertaining in a personable way, more like a chat show than a post-game press conference. So even though I had done lots of prep for interviewing each speaker, I didn’t want to show my homework. I wanted each interview to feel like a natural flow.
To quote the old saw, this kind of spontaneity takes years of practice.
There was an added complication when two speakers shared an interview slot for a joint Q&A. Not only did I have to think of questions for each speaker, I also had to think of questions that would work for both speakers. And I had to keep track of how much time each person was speaking so that the chat wasn’t dominated by one person more than the other. This was very much like moderating a panel, something that I enjoy very much.
In the end, all of the prep paid off. The conversations flowed smoothly and I was happy with some of the more thought-provoking questions that I had researched ahead of time. The speakers seemed happy too.
Y’know, there are not many things I’m really good at. I’m a mediocre developer, and an even worse designer. I’m okay at writing. But I’m really good at public speaking. And I think I’m pretty darn good at this hosting lark too.
This is the month of UX Fest 2021—this year’s online version of UX London. The festival continues with masterclasses every Tuesday in June and a festival day of talks every Thursday (tickets for both are still available). But it all kicked off with the conference part last week: three back-to-back days of talks.
I have the great pleasure of hosting the event so not only do I get to see a whole lot of great talks, I also get to quiz the speakers afterwards.
Right from day one, a theme emerged that continued throughout the conference and I suspect will continue for the rest of the festival too. That topic was metrics. Kind of.
See, metrics come up when we’re talking about A/B testing, growth design, and all of the practices that help designers get their seat at the table (to use the well-worn cliché). But while metrics are very useful for measuring design’s benefit to the business, they’re not really cut out for measuring user experience.
People have tried to quantify user experience benefits using measurements like Net Promoter Score, which is about as useful as reading tea leaves or chicken entrails.
So we tend to equate user experience gains with business gains. That makes sense. Happy users should be good for business. That’s a reasonable hypothesis. But it gets tricky when you need to make the case for improving the user experience if you can’t tie it directly to some business metric. That’s when we run into the McNamara fallacy:
Making a decision based solely on quantitative observations (or metrics) and ignoring all others.
The way out of this quantitative blind spot is to use qualitative research. But another theme of UX Fest was just how woefully under-represented researchers are in most organisations. And even when you’ve gone and talked to users and you’ve got their stories, you still need to play that back in a way that makes sense to the business folks. These are stories. They don’t lend themselves to being converted into charts’n’graphs.
And so we tend to fall back on more traditional metrics, based on that assumption that what’s good for user experience is good for business. But it’s a short step from making that equivalency to flipping the equation: what’s good for the business must, by definition, be good user experience. That’s where things get dicey.
Broadly speaking, the talks at UX Fest could be put into two categories. You’ve got talks covering practical subjects like product design, content design, research, growth design, and so on. Then you’ve got the higher-level, almost philosophical talks looking at the big picture and questioning the industry’s direction of travel.
The tension between these two categories was the highlight of the conference for me. It worked particularly well when there were back-to-back talks (and joint Q&A) featuring a hands-on case study that successfully pushed the needle on business metrics followed by a more cautionary talk asking whether our priorities are out of whack.
For example, there was a case study on growth design, which emphasised the importance of A/B testing for validation, immediately followed by a talk on deceptive dark patterns. Now, I suspect that if you were to A/B test a deceptive dark pattern, the test would validate its use (at least in the short term). It’s no coincidence that a company like Booking.com, which lives by the A/B sword, is also one of the companies sued for using distressing design patterns.
Using A/B tests alone is like using a loaded weapon without supervision. They only tell you what people do. And again, the solution is to make sure you’re also doing qualitative research—that’s how you find out why people are doing what they do.
But as I’ve pondered the lessons from last week’s conference, I’ve come to realise that there’s also a danger of focusing purely on the user experience. Hear me out…
At one point, the question came up as to whether deceptive dark patterns were ever justified. What if it’s for a good cause? What if the deceptive dark pattern is being used by an organisation actively campaigning to do good in the world?
In my mind, there was no question. A deceptive dark pattern is wrong, no matter who’s doing it.
(There’s also the problem of organisations that think they’re doing good in the world: I’m sure that every talented engineer that worked on Google AMP honestly believed they were acting in the best interests of the open web even as they worked to destroy it.)
Where it gets interesting is when you flip the question around.
Suppose you’re a designer working at an organisation that is decidedly not a force for good in the world. Say you’re working at Facebook, a company that prioritises data-gathering and engagement so much that they’ll tolerate insurrectionists and even genocidal movements. Now let’s say there’s talk in your department of implementing a deceptive dark pattern that will drive user engagement. But you, being a good designer who fights for the user, take a stand against this and you successfully find a way to ensure that Facebook doesn’t deploy that deceptive dark pattern.
Does that count as being a good user experience designer? Yes, you’ve done good work at the coalface. But the overall business goal is like a deceptive dark pattern that’s so big you can’t take it in. Is it even possible to do “good” design when you’re inside the belly of that beast?
Facebook is a relatively straightforward case. Anyone who’s still working at Facebook can’t claim ignorance. They know full well where that company’s priorities lie. No doubt they sleep at night by convincing themselves they can accomplish more from the inside than without. But what about companies that exist in the grey area of being imperfect? Frankly, what about any company that relies on surveillance capitalism for its success? Is it still possible to do “good” design there?
There are no easy answers and that’s why it so often comes down to individual choice. I know many designers who wouldn’t work at certain companies …but they also wouldn’t judge anyone else who chooses to work at those companies.
At Clearleft, every staff member has two levels of veto on client work. You can say “I’m not comfortable working on this”, in which case, the work may still happen but we’ll make sure the resourcing works out so you don’t have anything to do with that project. Or you can say “I’m not comfortable with Clearleft working on this”, in which case the work won’t go ahead (this usually happens before we even get to the pitching stage although there have been one or two examples over the years where we’ve pulled out of the running for certain projects).
Going back to the question of whether it’s ever okay to use a deceptive dark pattern, here’s what I think…
It makes no difference whether it’s implemented by ProPublica or Breitbart; using a deceptive dark pattern is wrong.
But there is a world of difference in being a designer who works at ProPublica and being a designer who works at Breitbart.
That’s what I’m getting at when I say there’s a danger to focusing purely on user experience. That focus can be used as a way of avoiding responsibility for the larger business goals. Then designers are like the soldiers on the eve of battle in Henry V:
For we know enough, if we know we are the king’s subjects: if his cause be wrong, our obedience to the king wipes the crime of it out of us.
I quite enjoy interviewing people. I don’t mean job interviews. I mean, like, talk show interviews. I’ve had a lot of fun over the years moderating panel discussions: @media Ajax in 2007, SxSW in 2008, Mobilism in 2011, the Progressive Web App Dev Summit and EnhanceConf in 2016.
I’ve even got transcripts of some panels I’ve moderated:
I enjoyed each and every one. I also had the pleasure of interviewing the speakers at every Responsive Day Out. Hosting events like that is a blast, but what with The Situation and all, there hasn’t been much opportunity for hosting conferences.
Well, I’m going to be hosting an event next month: UX Fest. It’s this year’s online version of UX London.
An online celebration of digital design, taking place throughout June 2021.
I am simultaneously excited and nervous. I’m excited because I’ll have the chance to interview a whole bunch of really smart people. I’m nervous because it’s all happening online and that might feel quite different to an in-person discussion.
But I have an advantage. While the interviews will be live, the preceding talks will be pre-recorded. That means I have time to watch and rewatch each talk, spot connections between them, and think about thought-provoking questions for each speaker.
So that’s what I’m doing between now and the beginning of June. If you’d like to bear witness to the final results, I encourage you to get a ticket for UX Fest. You can come to the three-day conference in the first week of June, or you can get a ticket for the festival spread out over the following three Thursdays in June, or you can get a combo ticket for both and save some money.
There’s an inclusion programme for the conference and festival days:
Anyone from an underrepresented group is invited to apply. We especially invite and welcome Black, indigenous & people of colour, LGBTQIA+ people and people with disabilities.
There’ll also be a whole bunch of hands-on masterclasses throughout June that you can book individually. I won’t be hosting those though. I’ll have plenty to keep me occupied hosting the conference and the festival.
My work shouldn’t be presented in the Smithsonian behind glass or anything, I’m just pointing at this enormous flaw in the architecture of the web itself: you’re renting servers and renting URLs. Nothing is permanent because on the web we don’t really own any space, we’re just borrowing land temporarily.
If you’re using web fonts, there are good performance (and privacy) reasons for hosting your own font files. And fortunately, Google Fonts gives you that option. There’s a “Download family” button on every specimen page.
But if you go ahead and download a font family from Google Fonts, you’ll notice something a bit odd. The .zip file only contains .ttf files. You can serve those on the web, but it’s far from the best choice. Woff2 is far leaner in file size.
This means you need to manually convert the downloaded .ttf files into .woff or .woff2 files using something like Font Squirrel’s generator. That’s fine, but I’m curious as to why this step is necessary. Why doesn’t Google Fonts provide .woff or .woff2 files in the downloaded folder? After all, if you choose to use Google Fonts as a third-party hosting service for your fonts, it most definitely serves up the appropriate file formats.
I thought maybe it was something to do with the licensing. Maybe some licenses only allow for unmodified truetype files to be distributed? But I’ve looked at fonts with different licenses—some have Apache 2 licensing, some have Open Font licensing—and they’re all quite permissive and definitely allow for modification.
Maybe the thinking is that, if you’re hosting your own font files, then you know what you’re doing and you should be able to do your own file conversion and subsetting. But I’ve come across more than one website in the wild serving up .ttf files. And who can blame them? They want to host their own font files. They downloaded those files from Google Fonts. Why shouldn’t they assume that they’re good to go?
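For what it’s worth, here’s a rough sketch of the end goal once the files are converted: a self-hosted .woff2 file referenced from a plain @font-face rule. The family name and file path here are just placeholders, not anything that Google Fonts hands you:
@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans-regular.woff2") format("woff2");
  font-weight: 400;
  font-style: normal;
  font-display: swap; /* show fallback text while the font file loads */
}

body {
  font-family: "Example Sans", sans-serif;
}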
It’s all a bit strange. If anyone knows why Google Fonts only provides .ttf files for download, please let me know. In a pinch, I will also accept rampant speculation.
Trys also pointed out some weird default behaviour if you do let Google Fonts do the hosting for you. Specifically if it’s a variable font. Let’s say it’s a font with weight as a variable axis. You specify in advance which weights you’ll be using, and then it generates separate font files to serve for each different weight.
Doesn’t that defeat the whole point of using a variable font? I mean, I can see how it could result in smaller file sizes if you’re just using one or two weights, but isn’t half the fun of having a weight axis that you can go crazy with as many weights as you want and it’s all still one font file?
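For contrast, here’s a minimal sketch of what self-hosting a variable font looks like: one file, one rule, the whole weight axis. Again, the family name and file path are placeholders:
/* one variable font file covering the entire weight axis */
@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans-variable.woff2") format("woff2");
  font-weight: 100 900; /* a range, rather than a single fixed weight */
  font-display: swap;
}

strong {
  font-weight: 650; /* any value on the axis, no extra file to download */
}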
Like I said, it’s all very strange.
This is an interesting project to try to rank web hosts by performance:
Real-world server response (Time to First Byte) latencies, as experienced by real-world users navigating the web.
I have a proposal that I think might alleviate some of the animosity around Google AMP. You can jump straight to the proposal or get some of the back story first…
But I cannot get behind AMP.
Instead of competing on its own merits, AMP is unfairly propped up by the search engine of its parent company, Google. That makes it very hard to evaluate whether AMP is being used on its own merits. Instead, the evidence suggests that most publishers of AMP pages are doing so because they feel they have to, rather than because they want to. That’s a real shame, because as a library of web components, AMP seems pretty good. But there’s just no way to evaluate AMP-the-format without taking into account AMP-the-ecosystem.
Google AMP ostensibly exists to make the web faster. Initially the focus was specifically on mobile performance, but that distinction has since fallen by the wayside. The idea is that by using AMP’s web components, your pages will be speedy. Though, as Andy Davies points out, this isn’t always the case:
This is where I get confused… https://independent.co.uk only have an AMP site yet it’s performance is awful from a user perspective - isn’t AMP supposed to prevent this?
According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95.
Publishers who already have fast web pages—like The Guardian—are still compelled to make AMP versions of their stories because of the search benefits reserved for AMP. As Terence Eden reported from a meeting of the AMP advisory committee:
We heard, several times, that publishers don’t like AMP. They feel forced to use it because otherwise they don’t get into Google’s news carousel — right at the top of the search results.
Some people felt aggrieved that all the hard work they’d done to speed up their sites was for nothing.
The Google AMP team are at pains to point out that AMP is not a ranking factor in search. That’s true. But it is unfairly privileged in other ways. Only AMP pages can appear in the Top Stories carousel …which appears above any other search results. As I’ve said before:
Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.
Content that “opts in” to AMP and the associated hosting within Google’s domain is granted preferential search promotion, including (for news articles) a position above all other results.
That’s not the only way that AMP pages get preferential treatment. It turns out that the secret to the speed of AMP pages isn’t the web components. It’s the prerendering.
If you’ve ever seen an AMP page in a list of search results, you’ll have noticed the little lightning icon. If you’ve ever tapped on that search result, you’ll have noticed that the page loads blazingly fast!
That’s not down to AMP-the-format, alas. That’s down to the fact that the page has been prerendered by Google before you even went to it. If any page were prerendered that way, it would load blazingly fast. But currently, this privilege is reserved for AMP pages only.
If, after tapping through to that AMP page, you looked at the address bar of your browser, you might have noticed something odd. Even though you might have thought you were visiting The Washington Post, or The New York Times, the URL of the (blazingly fast) page you’re looking at is still under Google’s domain. That’s because Google hosts any AMP pages that it prerenders.
Google calls this “the AMP cache”, but it would be better described as “AMP hosting”. The web page sent down the wire is hosted on Google’s domain.
Here’s that AMP letter again:
When a user navigates from Google to a piece of content Google has recommended, they are, unwittingly, remaining within Google’s ecosystem.
Through gritted teeth, I will refer to this as “the AMP cache”, because that’s what everyone else calls it. But make no mistake, Google is hosting—not caching—these pages.
But why host the pages on a Google domain? Why not prerender the original URLs?
The pitch I think site owners are hearing is: let us host your pages on our domain and we’ll promote them in search results AND preload them so they feel “instant.” To opt-in, build pages using this component syntax.
But perhaps we could de-couple the AMP format from the AMP cache.
That’s what Terence suggests:
My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.
Instead of granting premium placement in search results only to AMP, provide the same perks to all pages that meet an objective, neutral performance criterion such as Speed Index.
It’s been said before but it would be so good for the web if pages with a Lighthouse score over say, 90 could get into that top search result area, even if they’re not built using Google’s AMP framework. Feels wrong to have to rebuild/reproduce an already-fast site just for SEO.
Here’s the problem…
Let’s say Google do indeed prerender already-fast pages when they’re listed in search results. You, a search user, type something into Google. A list of results comes back. Google begins pre-rendering some of them. But you don’t end up clicking through to those pages. Nonetheless, the servers those pages are hosted on have received a GET request coming from a Google search. Those publishers now know that a particular (cookied?) user could have clicked through to their site. That’s very different from knowing when someone has actually arrived at a particular site.
And that’s why Google host all the AMP pages that they prerender. Given the privacy implications of prerendering non-Google URLs, I must admit that I see their point.
Still, it’s a real shame to miss out on the speed benefit of prerendering:
Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit.
Why is Google’s AMP cache just for AMP pages? (Y’know, apart from the obvious answer that it’s in the name.)
What if Google were allowed to host non-AMP pages? Google search could then prerender those pages just like it currently does for AMP pages. There would be no privacy leaks; everything would happen on the same domain—google.com or ampproject.org or whatever—just as currently happens with AMP pages.
Don’t get me wrong: I’m not suggesting that Google should make a 1:1 model of the web just to prerender search results. I think that the implementation would need to have two important requirements: it would have to be opt-in on the part of the site owner, and it would have to be tied to a measurable performance threshold.
First, the opt-in. This could be a meta element. Maybe something like:
<meta name="caches-allowed" content="google">
This would have the nice benefit of allowing comma-separated values:
<meta name="caches-allowed" content="google, yandex">
(The name is just a strawman, by the way—I’m not suggesting that this is what the final implementation would actually look like.)
If not a meta element, then perhaps this could be part of robots.txt? Although my feeling is that this needs to happen on a document-by-document basis rather than site-wide.
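Just to make the comparison concrete, the robots.txt version would presumably be something site-wide along these lines. To be clear, this directive is entirely made up for illustration; it isn’t part of the real robots.txt vocabulary:
User-agent: *
# Caches-allowed is a hypothetical, invented directive, not a real one
Caches-allowed: google, yandex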
Many people will, quite rightly, never want Google—or anyone else—to host and serve up their content. That’s why it’s so important that this behaviour needs to be opt-in. It’s kind of appalling that the current hosting of AMP pages is opt-in-by-proxy-sort-of.
Which pages should be blessed with hosting and prerendering? The fast ones. That’s sorta the whole point of AMP. But right now, there’s a lot of resentment by people with already-fast websites who quite rightly feel they shouldn’t have to use the AMP format to benefit from the AMP ecosystem.
Page speed is already a ranking factor. It doesn’t seem like too much of a stretch to extend its benefits to hosting and prerendering. As mentioned above, there are already a few possible metrics to use: Speed Index, Lighthouse scores, or Google’s own Page Speed Insights audit.
Ah, but what if a page has a good score when it’s indexed, but then gets worse afterwards? Not a problem! The version of the page that’s measured is the same version of the page that gets hosted and prerendered. Google can confidently say “This page is fast!” After all, they’re the ones serving up the page.
That does raise the question of how often Google should check back with the original URL to see if it has changed/worsened/improved. The answer to that question is however long it currently takes to check back in on AMP pages:
Each time a user accesses AMP content from the cache, the content is automatically updated, and the updated version is served to the next user once the content has been cached.
This proposal does not solve the problem with the address bar. You’d still find yourself looking at a page from The Washington Post or The New York Times (or adactio.com) but seeing a completely different URL in your browser. That’s not good, for all the reasons outlined in the AMP letter.
In fact, this proposal could potentially make the situation worse. It would allow even more sites to be impersonated by Google’s URLs. Where currently only AMP pages are bad actors in terms of URL confusion, opening up the AMP cache would allow equal opportunity URL confusion.
What I’m suggesting is definitely not a long-term solution. The long-term solutions currently being investigated are technically tricky and will take quite a while to come to fruition—web packages and signed exchanges. In the meantime, what I’m proposing is a stopgap solution that’s technically a lot simpler. But it won’t solve all the problems with AMP.
This proposal solves one problem—AMP pages being unfairly privileged in search results—but does nothing to solve the other, perhaps more serious problem: the erosion of site identity.
Currently, Google can assess whether a page should be hosted and prerendered by checking to see if it’s a valid AMP page. That test would need to be widened to include a different measurement of performance, but those measurements already exist.
I can see how this assessment might not be as quick as checking for AMP validity. That might affect whether non-AMP pages could be measured quickly enough to end up in the Top Stories carousel, which is, by its nature, time-sensitive. But search results are not necessarily as time-sensitive. Let’s start there.
Currently, AMP pages can be prerendered without fetching anything other than the markup of the AMP page itself. All the CSS is inline. There are no initial requests for other kinds of content like images. That’s because there are no img elements on the page: authors must use amp-img instead. The image itself isn’t loaded until the user is on the page.
If the AMP cache were to be opened up to non-AMP pages, then any content required for prerendering would also need to be hosted on that same domain. Otherwise, there’s privacy leakage.
This definitely introduces an extra level of complexity. Paths to assets within the markup might need to be re-written to point to the Google-hosted equivalents. There would almost certainly need to be a limit on the number of assets allowed. Though, for performance, that’s no bad thing.
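To make that concrete, the rewriting might look something like this. The cached hostname below is purely illustrative; I’m not claiming this is how Google would actually structure those URLs:
<!-- original markup on the publisher's own domain -->
<img src="https://example.com/images/photo.jpg" alt="A photo">

<!-- hypothetical rewritten markup served from the prerendering host -->
<img src="https://example-com.cache.google.com/images/photo.jpg" alt="A photo">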
Make no mistake, figuring out what to do about assets—style sheets, scripts, and images—is very challenging indeed. Luckily, there are very smart people on the Google AMP team. If that brainpower were to focus on this problem, I am confident they could solve it.
There will be technical challenges, but hopefully nothing insurmountable.
I honestly can’t see what Google have to lose here. If their goal is genuinely to reward fast pages, then opening up their AMP cache to fast non-AMP pages will actively encourage people to make fast web pages (without having to switch over to the AMP format).
I’ve deliberately kept the details vague—what the opt-in should look like; what the speed measurement should be; how to handle assets—I’m sure smarter folks than me can figure that stuff out.
I would really like to know what other people think about this proposal. Obviously, I’d love to hear from members of the Google AMP team. But I’d also love to hear from publishers. And I’d very much like to know what people in the web performance community think about this. (Write a blog post and send me a webmention.)
What am I missing here? What haven’t I thought of? What are the potential pitfalls (and are they any worse than the current acrimonious situation with Google AMP)?
I would really love it if someone with a fast website were in a position to say, “Hey Google, I’m giving you permission to host this page so that it can be prerendered.”
I would really love it if someone with a slow website could say, “Oh, shit! We’d better make our existing website faster or Google won’t host our pages for prerendering.”
And I would dearly love to finally be able to embrace AMP-the-format with a clear conscience. But as long as prerendering is joined at the hip to the AMP format, the injustice of the situation only harms the AMP project.
Google, open up the AMP cache.
Chris makes the very good point that the J in JAMstack isn’t nearly as important as the static hosting part.
This is my maj.
Trust no one! Harry enumerates the reason why you should be self-hosting your assets (and busts some myths along the way).
There really is very little reason to leave your static assets on anyone else’s infrastructure. The perceived benefits are often a myth, and even if they weren’t, the trade-offs simply aren’t worth it. Loading assets from multiple origins is demonstrably slower.
Anchor seems to be going for the YouTube model. They want a huge number of people to use their platform. But the concentration of so much media in one place is one of the problems with today’s web. Massive social networks like Facebook, Instagram, and YouTube have too much power over writers, photographers, and video creators. We do not want that for podcasts.
But when I hear AMP described as an open, community-led project, it strikes me as incredibly problematic, and more than a little troubling. AMP is, I think, best described as nominally open-source. It’s a corporate-led product initiative built with, and distributed on, open web technologies.
But so what, right? Tom-ay-to, tom-a-to. Well, here’s a pernicious example of where it matters: in a recent announcement of their intent to ship a new addition to HTML, the Google Chrome team cited the mood of the web development community thusly:
Web developers: Positive (AMP team indicated desire to start using the attribute)
If AMP were actually the product of working web developers, this justification would make sense. As it is, we’ve got one team at Google citing the preference of another team at Google but representing it as the will of the people.
This is just one example of AMP’s sneaky marketing where some finely-shaved semantics allows them to appear far more reasonable than they actually are.
At AMP Conf, the Google Search team were at pains to repeat over and over that AMP pages wouldn’t get any preferential treatment in search results …but they appear in a carousel above the search results. Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.
The same semantic nit-picking can be found in their defence of caching. See, they’ve even got me calling it caching! It’s hosting. If I click on a search result, and I am taken to a page that has a URL beginning with https://www.google.com/amp/s/... then that page is being hosted on the domain google.com. That is literally what hosting means. Now, you might argue that the original version was hosted on a different domain, but the version that the user gets sent to is the Google copy. You can call it caching if you like, but you can’t tell me that Google aren’t hosting AMP pages.
That’s a particularly low blow, because it’s such a bait’n’switch. One of the reasons why AMP first appeared to be different to Facebook Instant Articles or Apple News was the promise that you could host your AMP pages yourself. That’s the very reason I first got interested in AMP. But if you actually want the benefits of AMP—appearing in the not-search-results carousel, pre-rendered performance, etc.—then your pages must be hosted by Google.
So, to summarise, here are three statements that Google’s AMP team are currently peddling as being true:
AMP is a community project.
AMP pages don’t receive preferential treatment in search results.
AMP pages are hosted on your own domain.
I don’t think those statements are even truthy, much less true. In fact, if I were looking for the right term to semantically describe any one of those statements, the closest in meaning would be this:
A statement used intentionally for the purpose of deception.
That is the dictionary definition of a lie.
Update: That last part was a bit much. Sorry about that. I know it’s a bit much because The Register got all gloaty about it.
I don’t think the developers working on the AMP format are intentionally deceptive (although they are engaging in some impressive cognitive gymnastics). The AMP ecosystem, on the other hand, that’s another story—the preferential treatment of Google-hosted AMP pages in the carousel and in search results; that’s messed up.
Still, I would do well to remember that there are well-meaning people working on even the fishiest of projects.
Except for the people working at the shitrag that is The Register.
(The other strong signal that I overstepped the bounds of decency was that this post attracted the pond scum of Hacker News. That’s another place where the “well-meaning people work on even the fishiest of projects” rule definitely doesn’t apply.)
Hadley points to the serious security concerns with AMP:
Fundamentally, we think that it’s crucial to the web ecosystem for you to understand where content comes from and for the browser to protect you from harm. We are seriously concerned about publication strategies that undermine them.
The anchor element is designed to allow one website to refer visitors to content on another website, whilst retaining all the features of the web platform. We encourage distribution platforms to use this mechanism where appropriate. We encourage the loading of pages from original source origins, rather than re-hosted, non-canonical locations.
That last sentence there? That’s what I’m talking about!
It’s all very admirable, but it also feels a little bit 927.
If you’re planning the move to TLS and your server is on Digital Ocean running Nginx, Graham’s here to run you through the (surprisingly simple) process.
Sorting out hosting is a big stumbling block for people who want to go down the Indie Web route. Frankly it’s much easier to just use a third-party silo like Facebook or Twitter. I’ve been saying for a while now that I’d really like to see “concierge” services for hosting—”here, you take care of all this hassle!”
Well, this initiative looks like exactly that.
Aaron raises a point that I’ve discussed before in regards to the indie web (and indeed, the web in general): we don’t buy domain names; we rent them.
It strikes me that all the good things about the web are decentralised (one-way linking, no central authority required to add a node), but all the sticking points are centralised: ICANN, DNS.
Aaron also points out that we are beholden to our hosting companies, although—having moved hosts a number of times myself—that’s an issue that DNS (and URLs in general) helps alleviate. And there’s now some interesting work going on in literally owning your own website: a web server in the home.
Y’know, I’m worried about what will happen to my own photos when Flickr inevitably goes down the tubes (there are still some good people there fighting the good fight, but they’re in the minority and they’re battling against the douchiest of Silicon Valley managerial types who have been brought in to increase “engagement” by stripping away everything that makes Flickr special) …but what really worries me is what’s going to happen to Flickr Commons. It’s an unbelievably important and valuable resource.
As of today, we have left Flickr (including The Commons).
Unfortunately, they didn’t just leave their Flickr collection; they razed it to the ground. All those links, all those comments, and all those annotations have been wiped out.
They’ve moved their images over to Wikimedia Commons …for now. It turns out that they have a very cavalier attitude towards online storage (a worrying trait for a museum). They’re jumping out of the frying pan of Flickr and into the fire of Tumblr:
In the past few months, we’ve been testing Tumblr and it’s been a much better channel for this type of content.
Audio and video is being moved around to where the eyeballs and earholes currently are:
We have left iTunesU in favor of sharing content via YouTube and SoundCloud.
I find this quite disturbing. A museum should be exactly the kind of institution that should be taking a thoughtful, considered approach to how it stores content online. Digital preservation should be at the heart of its activities. Instead, it takes a back seat to chasing the fleeting thrill of “engagement.”
Leaving Flickr Commons could have been the perfect opportunity to invest in long-term self-hosting. Instead they’re abandoning the Titanic by hitching a ride on the Hindenburg.
A rallying cry for the Indie Web.
Let’s build this.