Splitting the Web
This rings true to me.
On day one of a behaviour change class in a science course, you learn that behaviour change is not a simple matter of information in, behaviour out. Human behaviour, and changing it, is big and complex.
Meanwhile, in marketing courses, which I have had the misfortune to attend, the model of changing behaviour is pretty much this: information in, behaviour out.
AMP succeeded spectacularly. Then it failed. And to anyone looking for a reason not to trust the biggest company on the internet, AMP’s story contains all the evidence you’ll ever need.
This is a really good oral history of how AMP soured Google’s reputation.
Full disclosure: I’m briefly cited:
“When it suited them, it was open-source,” says Jeremy Keith, a web developer and a former member of AMP’s advisory council. “But whenever there were any questions about direction and control… it was Google’s.”
As an aside, this article contains a perfect description of the company cultures of Facebook, Apple, and Google:
“You meet with a Facebook person and you see in their eyes they’re psychotic,” says one media executive who’s dealt with all the major platforms. “The Apple person kind of listens but then does what it wants to do. The Google person honestly thinks what they’re doing is the best thing.”
Spot. On.
There are some tasty designs in this archive from Sainsbury’s.
Targeted advertising based on online behavior doesn’t just hurt privacy. It also contributes to a range of other harms.
I very much agree with this call to action from the EFF.
Maybe we can finally get away from the ludicrous idea that behavioural advertising is the only possible form of effective advertising. It’s simply not true.
I really hope that Betteridge’s Law doesn’t apply to this headline.
Google Topics is the successor to Google FLoC. It seems to require collusion from your “user agent”:
I can’t see why any other browser would consider supporting Topics. Google wants to keep tracking users across the entire web in a world where users realize they don’t want to be tracked. Why help Google?
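The collusion is baked into the API shape. Going by the Topics explainer, any site (or embedded third-party script) can simply ask the browser for the user’s inferred interests. Here’s a sketch, assuming the proposed document.browsingTopics() method; the exact surface may change:

async function readTopics(): Promise<unknown[]> {
  // The browser computes interest topics from the user's browsing
  // history and hands them to whichever script asks.
  // Cast needed because the method isn't in standard DOM typings.
  return await (document as any).browsingTopics();
}

readTopics().then((topics) => console.log(topics));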
Google sees Chrome as a way to embed the entire web into an iframe on Google.com.
While the dream of “personalized” ads has turned out to be mostly a nightmare, adtech has built some of the wealthiest companies in the world based on tracking us. It’s no surprise to me that as Members of the European Parliament contemplate tackling these many harms, Big Tech is throwing millions of Euros behind a “necessary evil” PR defense for its business model.
But tracking is an unnecessary evil.
Even in today’s tracking-obsessed digital ecosystem it’s perfectly possible to target ads successfully without placing people under surveillance. In fact right now, some of the most effective and highly valued online advertising is contextual — based on search terms, other non-tracking based data, and the context of websites rather than intrusive, dangerous surveillance.
Let’s be clear. Advertising is essential for small and medium size businesses, but tracking is not.
Rather than creating advertising that is more relevant, more timely and more likable, we are creating advertising that is more annoying, more disliked, and more avoided.
I promise you, the minute tracking is outlawed, Facebook, Google and the rest of the adtech giants will claim that their new targeting mechanisms (whatever they turn out to be) are superior to tracking.
Behavioral ads are only more profitable than context ads if all the costs of surveillance – the emotional burden of being watched; the risk of breach, identity-theft and fraud; the potential for government seizure of surveillance data – are pushed onto internet users. If companies have to bear those costs, behavioral ads are a total failure, because no one in the history of the human race would actually grant consent to all the things that get done with our data.
Google and the entire tracking industry relies on IAB Europe’s consent system, which has now been found to be illegal.
Chrome Dev Summit kicked off yesterday. The opening keynote had its usual share of announcements.
There was quite a bit of talk about privacy, which sounds good in theory, but then we were told that Google would be partnering with “industry stakeholders.” That’s probably code for the kind of ad-tech sharks that have been making a concerted effort to infest W3C groups. Beware.
But once Una was on-screen, the topics shifted to the kind of design and development updates that don’t have sinister overtones.
My favourite moment was when Una said:
We’re also partnering with Jeremy Keith of Clearleft to launch Learn Responsive Design on web.dev. This is a free online course with everything you need to know about designing for the new responsive web of today.
This is what’s been keeping me busy for the past few months (and for the next month or so too). I’ve been writing fifteen pieces—or “modules”—on modern responsive web design. One third of them are available now at web.dev/learn/design.
The rest are on their way: typography, responsive images, theming, UI patterns, and more.
I’ve been enjoying this process. It’s hard work that requires me to dive deep into the nitty-gritty details of lots of different techniques and technologies, but that can be quite rewarding. As is often said, if you truly want to understand something, teach it.
Oh, and I made one more appearance at the Chrome Dev Summit. During the “Ask Me Anything” section, quizmaster Una asked the panelists a question from me:
Given the court proceedings against AMP, why should anyone trust FLOC or any other Google initiatives ostensibly focused on privacy?
(Thanks to Jake for helping craft the question into a form that could make it past the legal department but still retain its spiciness.)
The question got a response. I wouldn’t say it got an answer. My verdict remains:
I’m not sure that Google Chrome can be considered a user agent.
The fundamental issue is that you’ve got a single company that’s the market leader in web search, the market leader in web advertising, and the market leader in web browsers. I honestly believe all three would function better—and more honestly—if they were separate entities.
Monopolies aren’t just damaging for customers. They’re damaging for the monopoly too. I’d love to see Google Chrome compete on being a great web browser without having to also balance the needs of surveillance-based advertising.
I subscribe to Peter Gasston’s newsletter, The Tech Landscape. It’s good. Peter’s a smart guy with his finger on the pulse of many technologies that are beyond my ken. I recommend subscribing.
But I was very taken aback by what he wrote in issue 202. It was to do with algorithmic recommendation engines.
This week I want to take a little dump on a tweet I read. I’m not going to link to it (I’m not that person), but it basically said something like: “I’m afraid to Google something because I don’t want the algorithm to think I like it, and I’m afraid to click a link because I don’t want the algorithm to show me more like it… what a cage.”
I saw the same tweet. It resonated with me. I had responded with a link to a post I wrote a while back called Get safe. That post made two points.
But Peter describes ubiquitous surveillance as a feature, not a bug:
It’s observing what someone likes or does, then trying to make recommendations for more things like it—whether that’s books, TV shows, clothes, advertising, or whatever. It works on probability, so it’s going to make better guesses the more it knows about you; if you like ten things of type A, then liking one thing of type B shouldn’t be enough to completely change its recommendations. The problem is, we don’t like “the algorithm” if it doesn’t work, and we don’t like it if it works too well (“creepy!”). But it’s not sinister, and it’s not a cage.
He would be correct if the balance of power were tipped towards the person actively looking for recommendations. As I said in my earlier post:
Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies?
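To make that distinction concrete, here’s a minimal sketch assuming a Node server with Express; the route names and in-memory store are invented for illustration:

import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Hypothetical in-memory store: item id -> bookmark count.
const bookmarks = new Map<string, number>();

// The user explicitly asks the server to record something:
// an update initiated by the user, hence a POST request.
app.post("/bookmark", (req, res) => {
  const itemId = String(req.body.item);
  bookmarks.set(itemId, (bookmarks.get(itemId) ?? 0) + 1);
  res.redirect(303, "/");
});

// Merely reading a page is a GET: it shouldn't be silently
// harvested as a signal for profiling.
app.get("/item/:id", (req, res) => {
  res.send(`Item ${req.params.id}`);
});

app.listen(3000);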
When Peter says “it’s not sinister, and it’s not a cage” that may be true for him, but that is not a shared feeling, as the original tweet demonstrates. I don’t think it’s fair to dismiss someone else’s psychological pain because you don’t think they “get it”. I’m pretty sure everyone “gets” how recommendation engines are supposed to work. That’s not the issue. Trying to provide relevant content isn’t the problem. It’s the unbelievably heavy-handed methods that make it feel like a cage.
Peter uses the metaphor of a record shop:
“The algorithm” is the best way to navigate a world of infinite choice; imagine you went to a record shop (remember them?) which had every recording ever released; how would you find new music? You’d either buy music by bands you know you already liked, or you’d take a pure gamble on something—which most of the time would be a miss. So you’d ask a store worker, and they’d recommend the music they liked—but that’s no guarantee you’d like it. A good worker would ask what type of music you like, and recommend music based on that—you might not like all the recommendations, but there’s more of a chance you’d like some. That’s just what “the algorithm” does.
But that’s not true. You don’t ask “the algorithm” for a recommendation—it foists them on you whether you want them or not. A more apt metaphor would be that you walked by a record shop once and the store worker came out and followed you down the street, into your home, and watched your every move for the rest of your life.
What Peter describes sounds great—a helpful knowledgable software agent that you ask for recommendations. But that’s not what “the algorithm” is. And that’s why it feels like a cage. That’s why it is a cage.
The original tweet was an open, honest, and vulnerable insight into what online recommendation engines feel like. That’s a valuable insight that should be taken on board, not dismissed.
And what a lack of imagination to look at an existing broken system—that doesn’t even provide good recommendations while making people afraid to click on links—and shrug and say that this is the best we can do. If this really is “the best way to navigate a world of infinite choice” then it’s no wonder that people feel like they need to go on a digital detox and get away from their devices in order to feel normal. It’s like saying that decapitation is the best way of solving headaches.
Imagine living in a surveillance state like East Germany, and saying “Well, how else is the government supposed to make informed decisions without constantly monitoring its citizens?” I think it’s more likely that you’d feel like you’re in a cage.
Apples to oranges? Kind of. But whether it’s surveillance communism or surveillance capitalism, there’s a shared methodology at work. They’re both systems that disempower people for the supposedly greater good of amassing data. Both are built on the false premise that problems can be solved by getting more and more data. If that results in collateral damage to people’s privacy and mental health, well …it’s all for the greater good, right?
It’s fucking bullshit. I don’t want to live in that cage and I don’t want anyone else to have to live in it either. I’m going to do everything I can to tear it down.
The way most of the internet works today would be considered intolerable if translated into comprehensible real world analogs, but it endures because it is invisible.
You can try to use Facebook’s own tools to make the invisible visible but that kind of transparency isn’t allowed.
I’ve always liked the way that web browsers are called “user agents” in the world of web standards. It’s such a succinct summation of what browsers are for, or more accurately who browsers are for. Users.
The term makes sense when you consider that the internet is for end users. That’s not to be taken for granted. This assertion is now enshrined in the Internet Engineering Task Force’s RFC 8890—like Magna Carta for the network age. It’s also a great example of prioritisation in a design principle:
When there is a conflict between the interests of end users of the Internet and other parties, IETF decisions should favor end users.
So when a web browser—ostensibly an agent for the user—prioritises user-hostile third parties, we get upset.
Google Chrome—ostensibly an agent for the user—is running an origin trial for Federated Learning of Cohorts (FLoC). This is not a technology that serves the end user. It is a technology that serves third parties who want to target end users. The most common use case is behavioural advertising, but targeting could be applied for more nefarious purposes.
The Electronic Frontier Foundation wrote an explainer last month: Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know.
Let’s back up a minute and look at why this is happening. End users are routinely targeted today (for behavioural advertising and other use cases) through third-party cookies. Some user agents like Apple’s Safari and Mozilla’s Firefox are stamping down on this, disabling third party cookies by default.
Seeing which way the wind is blowing, Google’s Chrome browser will also disable third-party cookies at some time in the future (they’re waiting to shut that barn door until the fire is good’n’raging). But Google isn’t just in the browser business. Google is also in the ad tech business. So they still want advertisers to be able to target end users.
Yes, this is quite the cognitive dissonance: one part of the business is building a user agent while a different part of the company is working on ways of tracking end users. It’s almost as if one company shouldn’t simultaneously be the market leader in three separate industries: search, advertising, and web browsing. (Seriously though, I honestly think Google’s search engine would get better if it were split off from the parent company, and I think that Google’s web browser would also get better if it were a separate enterprise.)
Anyway, one possible way of tracking users without technically tracking individual users is to assign them to buckets, or cohorts of interest based on their browsing habits. Does that make you feel safer? Me neither.
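To give a feel for how that bucketing works, here’s a toy SimHash-style sketch. To be clear, this is purely illustrative and not Google’s actual algorithm, though the FLoC proposal did use a locality-sensitive hash of browsing history along these lines:

import { createHash } from "node:crypto";

// Toy SimHash-style bucketing: NOT Google's actual algorithm.
function cohortId(visitedSites: string[], bits = 8): number {
  // One running tally per output bit; each site's hash "votes".
  const tally = new Array<number>(bits).fill(0);
  for (const site of visitedSites) {
    const digest = createHash("sha256").update(site).digest();
    for (let i = 0; i < bits; i++) {
      tally[i] += ((digest[i >> 3] >> (i & 7)) & 1) === 1 ? 1 : -1;
    }
  }
  // Majority vote per bit: overlapping histories produce similar
  // ids, so "similar" users end up in the same bucket.
  return tally.reduce((id, t, i) => (t > 0 ? id | (1 << i) : id), 0);
}

console.log(cohortId(["knitting.example", "wool.example", "yarn.example"]));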
That’s what Google is testing with the origin trial of FLoC.
If you, as an end user, don’t wish to be experimented on like this, there are a few things you can do:
That last decision is interesting. On the one hand, the origin trial is supposed to be on a small scale, hence the lack of European countries. On the other hand, the origin trial is “opt out” instead of “opt in” so that they can gather a big enough data set. Weird.
The plan is that if and when FLoC launches, websites would have to opt in to it. And when I say “plan”, I mean “best guess.”
I, for one, am filled with confidence that Google would never pull a bait-and-switch with their technologies.
In the meantime, if you’re a website owner, you have to opt your website out of the origin trial. You can do this by sending a server header. A meta element won’t do the trick, I’m afraid.
I’ve done it for my sites, which are served using Apache. I’ve got this in my .conf file:
<IfModule mod_headers.c>
  # Opt this site out of the FLoC origin trial
  Header always set Permissions-Policy "interest-cohort=()"
</IfModule>
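If your server isn’t Apache, the same header can be sent from whatever you do control. Here’s a sketch for a Node server, assuming an Express app:

import express from "express";

const app = express();

// Send the same opt-out header as the Apache snippet above
// on every response.
app.use((req, res, next) => {
  res.setHeader("Permissions-Policy", "interest-cohort=()");
  next();
});

app.get("/", (req, res) => res.send("FLoC-free zone"));
app.listen(3000);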
If you don’t have access to your server, tough luck. But if your site runs on WordPress, there’s a proposal to opt out of FLoC by default.
Interestingly, none of the Chrome devs that I follow are saying anything about FLoC. They’re usually quite chatty about proposals for potential standards, but I suspect that this one might be embarrassing for them. It was a similar situation with AMP. In that case, Google abused its monopoly position in search to blackmail publishers into using Google’s format. Now Google’s monopoly in advertising is compromising the integrity of its browser. In both cases, it makes it hard for Chrome devs to claim that they have the web’s best interests at heart.
But one of the advantages of having a huge share of the browser market is that Chrome can just plough ahead and unilaterally implement whatever it wants even if there’s no consensus from other browser makers. So that’s what Google is doing with FLoC. But their justification for doing this doesn’t really work unless other browsers play along.
The problem is with the final step of that justification. The theory is that if FLoC gives third parties what they need, then they won’t reach for fingerprinting. Even if there were any validity to that hypothesis, the only chance it has of working is if every browser joins in with FLoC. Otherwise ad tech companies are leaving money on the table. Can you seriously imagine third parties deciding that they just won’t target iPhone or iPad users any more? Remember that Safari is the only real browser on iOS so unless FLoC is implemented by Apple, third parties can’t reach those people …unless those third parties use fingerprinting instead.
Google have set up a situation where it looks like FLoC is going head-to-head with fingerprinting. But if FLoC becomes a reality, it won’t be instead of fingerprinting, it will be in addition to fingerprinting.
Google is quite right to point out that fingerprinting is A Very Bad Thing. But their concerns about fingerprinting sound very hollow when you see that Chrome is pushing ahead and implementing a raft of browser APIs that other browser makers quite rightly point out enable more fingerprinting: Battery Status, Proximity Sensor, Ambient Light Sensor and so on.
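To see why those APIs worry other browser makers, here’s a toy sketch of fingerprinting; purely illustrative, as real scripts harvest far more signals. Every extra readable value, like battery status, adds entropy that helps distinguish one browser from another:

// Toy fingerprint: combine a few readable browser values into a
// stable hash. Purely illustrative, not a real fingerprinting script.
async function toyFingerprint(): Promise<string> {
  // Battery Status API, where the browser exposes it.
  const battery = await (navigator as any).getBattery?.();
  const signals = [
    navigator.userAgent,
    `${screen.width}x${screen.height}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    battery ? `${battery.level}:${battery.charging}` : "no-battery",
  ].join("|");
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest), (b) =>
    b.toString(16).padStart(2, "0")
  ).join("");
}

toyFingerprint().then(console.log);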
When it comes to those APIs, the message from Google is that fingerprinting is a solvable problem.
But when it comes to third party tracking, the message from Google is that fingerprinting is inevitable and so we must provide an alternative.
Which one is it?
Google’s flimsy logic for why FLoC is supposedly good for end users just doesn’t hold up. If they were honest and said that it’s to maintain the status quo of the ad tech industry, it would make much more sense.
The flaw in Google’s reasoning is the fundamental idea that tracking is necessary for advertising. That’s simply not true. Sacrificing user privacy is fundamental to behavioural advertising …but behavioural advertising is not the only kind of advertising. It isn’t even a very good kind of advertising.
FLoC seems to be Google’s way of saving a dying business. They are trying to keep targeted ads going by making them more “privacy-friendly” and “anonymous”. But behavioral profiling and targeted advertisement is not compatible with a privacy-respecting web.
What’s striking is that the very monopolies that make Google and Facebook the leaders in behavioural advertising would also make them the leaders in contextual advertising. Almost everyone uses Google’s search engine. Almost everyone uses Facebook’s social network. An advertising model based on what you’re currently looking at would keep Google and Facebook in their dominant positions.
Google made their first many billions exclusively on contextual advertising. Google now prefers to push the message that behavioral advertising based on personal data collection is superior but there is simply no trustworthy evidence to that.
I sincerely hope that Chrome will align with Safari, Firefox, Vivaldi, Brave, Edge and every other web browser. Everyone already agrees that fingerprinting is the real enemy. Imagine the combined brainpower that could be brought to bear on that problem if all browsers made user privacy a priority.
Until that day, I’m not sure that Google Chrome can be considered a user agent.
Following on from the piece they ran called Google’s FLoC Is a Terrible Idea, the EFF now have the details of the origin trial and it’s even worse than what was originally planned.
I strongly encourage you to use a privacy-preserving browser like Firefox or Safari.
Privacy-invasive user tracking is to Google and Facebook what carbon emissions are to fossil fuel companies — a form of highly profitable pollution that for a very long time few people in the mainstream cared about, but now, seemingly suddenly, very many care about quite a bit.
Trying to predict the future is a discouraging and hazardous occupation because the prophet invariably falls between two stools. If his predictions sounded at all reasonable, you can be quite sure that in 20 or at most 50 years, the progress of science and technology has made him seem ridiculously conservative. On the other hand, if by some miracle a prophet could describe the future exactly as it was going to take place, his predictions would sound so absurd, so far-fetched, that everybody would laugh him to scorn.
But I couldn’t resist responding to a recent request for augury. Eric asked An Event Apart speakers for their predictions for the coming year. The responses have been gathered together and published, although it’s in the form of a PDF for some reason.
Here’s what I wrote:
This is probably more of a hope than a prediction, but 2021 could be the year that the Ponzi scheme of online tracking and surveillance begins to crumble. People are beginning to realize that it’s far too intrusive, that it just doesn’t work most of the time, and that good ol’-fashioned contextual advertising would be better. Right now, it feels similar to the moment before the sub-prime mortgage bubble collapsed (a comparison made in Tim Hwang’s recent book, Subprime Attention Crisis). Back then people thought “Well, these big banks must know what they’re doing,” just as people have thought, “Well, Facebook and Google must know what they’re doing”…but that confidence is crumbling, exposing the shaky stack of cards that props up behavioral advertising. This doesn’t mean that online advertising is coming to an end—far from it. I think we might see a golden age of relevant, content-driven advertising. Laws like Europe’s GDPR will play a part. Apple’s recent changes to highlight privacy-violating apps will play a part. Most of all, I think that people will play a part. They will be increasingly aware that there’s nothing inevitable about tracking and surveillance and that the web works better when it respects people’s right to privacy. The sea change might not happen in 2021 but it feels like the water is beginning to swell.
Still, predicting the future is a mug’s game with as much scientific rigour as astrology, reading tea leaves, or haruspicy.
Heydon keeps on producing more caustically funny videos that are made for me. After the last one about progressive enhancement, this one is about the indie web.
This is the story of the birth of the web, its loss of innocence, its decline, and what we can do to make it a bit less gross.
If behavioural ads aren’t more effective than contextual ads, what is all of that data collected for?
If websites opted for contextual ads and a privacy-focused analytics approach, cookie banners could become obsolete…
See, that’s what I’m talking about:
Levy deftly conflates “advertising” and “personalized advertising”, as if there are no ways to target people planning a wedding without surveilling their web browsing behaviour. Facebook’s campaign casually ignores decades of advertising targeted based on the current webpage or video instead of who those people are because it would impact Facebook’s primary business. Most people who are reading an article about great wedding venues are probably planning a wedding, but you don’t need quite as much of the ad tech stack to make that work.
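That’s the whole trick. Here’s a toy sketch of contextual targeting, with the ad slots and inventory invented for illustration: the ad is picked from what’s on the page, with no profile of the reader involved.

type Ad = { slot: string; keywords: string[] };

// Invented inventory, purely for illustration.
const inventory: Ad[] = [
  { slot: "wedding-venues", keywords: ["wedding", "venue", "reception"] },
  { slot: "hiking-boots", keywords: ["trail", "hiking", "boots"] },
];

// Pick the ad whose keywords best match the page itself.
// No cookie, no profile, no browsing history required.
function pickAd(pageText: string): Ad | undefined {
  const words = new Set(pageText.toLowerCase().match(/[a-z]+/g) ?? []);
  let best: Ad | undefined;
  let bestScore = 0;
  for (const ad of inventory) {
    const score = ad.keywords.filter((k) => words.has(k)).length;
    if (score > bestScore) {
      best = ad;
      bestScore = score;
    }
  }
  return best;
}

console.log(pickAd("Ten great wedding venues for your reception"));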