Sunday, June 6th, 2021

Ancestors and Descendants – Eric’s Archived Thoughts

Eric looks back on 25 years of CSS and remarks on how our hacks and workarounds have fallen away over time, thank goodness.

But this isn’t just a message of nostalgia about how much harder things were back in my day. Eric also shows how CSS very nearly didn’t make it. I’m not exaggerating when I say that Todd Fahrner and Tantek Çelik saved the day. If Tantek hadn’t implemented doctype switching, there’s no way that CSS would’ve been viable.

Monday, May 24th, 2021

Doc Searls Weblog · How the cookie poisoned the Web

Lou’s idea was just for a server to remember the last state of a browser’s interaction with it. But that one move—a server putting a cookie inside every visiting browser—crossed a privacy threshold: a personal boundary that should have been clear from the start but was not.

Once that boundary was crossed, and the number and variety of cookies increased, a snowball started rolling, and whatever chance we had to protect our privacy behind that boundary, was lost.

The Doctor is incensed.

At this stage of the Web’s moral devolution, it is nearly impossible to think outside the cookie-based fecosystem.

Saturday, May 22nd, 2021

Should DevTools teach the CSS cascade?

In a break with Betteridge’s law, I think the answer here is “yes.”

Wednesday, May 12th, 2021

Add support for defining a theme color for both light & dark modes (prefers color scheme)

There’s a good discussion here (kicked off by Jen) about providing different theme-color values in a web app manifest to match prefers-color-scheme in media queries.
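
To give a flavour of the kind of thing under discussion, here’s a sketch of how paired declarations might look as meta elements (the exact syntax, and the manifest equivalent, were still being thrashed out at the time):

<meta name="theme-color" media="(prefers-color-scheme: light)" content="#ffffff">
<meta name="theme-color" media="(prefers-color-scheme: dark)" content="#1a1a1a">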

Saturday, April 24th, 2021

Still Hoping for Better Native Page Transitions | CSS-Tricks

It would be nice to be able to animate the transition between pages if we want to on the web without resorting to hacks or full-blown architecture choices to achieve it.

Amen, Chris, amen!

The danger here is that you might pick a single-page app just for this ability, which is what I mean by having to buy into a site architecture just to achieve this.

Wednesday, April 21st, 2021

Get the FLoC out

I’ve always liked the way that web browsers are called “user agents” in the world of web standards. It’s such a succinct summation of what browsers are for, or more accurately who browsers are for. Users.

The term makes sense when you consider that the internet is for end users. That’s not to be taken for granted. This assertion is now enshrined in the Internet Engineering Task Force’s RFC 8890—like Magna Carta for the network age. It’s also a great example of prioritisation in a design principle:

When there is a conflict between the interests of end users of the Internet and other parties, IETF decisions should favor end users.

So when a web browser—ostensibly an agent for the user—prioritises user-hostile third parties, we get upset.

Google Chrome—ostensibly an agent for the user—is running an origin trial for Federated Learning of Cohorts (FLoC). This is not a technology that serves the end user. It is a technology that serves third parties who want to target end users. The most common use case is behavioural advertising, but targeting could be applied for more nefarious purposes.

The Electronic Frontier Foundation wrote an explainer last month: Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know.

Let’s back up a minute and look at why this is happening. End users are routinely targeted today (for behavioural advertising and other use cases) through third-party cookies. Some user agents like Apple’s Safari and Mozilla’s Firefox are stamping down on this, disabling third-party cookies by default.

Seeing which way the wind is blowing, Google’s Chrome browser will also disable third-party cookies at some time in the future (they’re waiting to shut that barn door until the fire is good’n’raging). But Google isn’t just in the browser business. Google is also in the ad tech business. So they still want advertisers to be able to target end users.

Yes, this is quite the cognitive dissonance: one part of the business is building a user agent while a different part of the company is working on ways of tracking end users. It’s almost as if one company shouldn’t simultaneously be the market leader in three separate industries: search, advertising, and web browsing. (Seriously though, I honestly think Google’s search engine would get better if it were split off from the parent company, and I think that Google’s web browser would also get better if it were a separate enterprise.)

Anyway, one possible way of tracking users without technically tracking individual users is to assign them to buckets, or “cohorts of interest”, based on their browsing habits. Does that make you feel safer? Me neither.

That’s what Google is testing with the origin trial of FLoC.

If you, as an end user, don’t wish to be experimented on like this, there are a few things you can do:

  • Don’t use Chrome. No other web browser is participating in this experiment. I recommend Firefox.
  • If you want to continue to use Chrome, install the DuckDuckGo Chrome extension.
  • Alternatively, if you manually disable third-party cookies, your Chrome browser won’t be included in the experiment.
  • Or you could move to Europe. The origin trial won’t be enabled for users in the European Union, which is coincidentally where GDPR applies.

That last decision is interesting. On the one hand, the origin trial is supposed to be on a small scale, hence the lack of European countries. On the other hand, the origin trial is “opt out” instead of “opt in” so that they can gather a big enough data set. Weird.

The plan is that if and when FLoC launches, websites would have to opt in to it. And when I say “plan”, I mean “best guess.”

I, for one, am filled with confidence that Google would never pull a bait-and-switch with their technologies.

In the meantime, if you’re a website owner, you have to opt your website out of the origin trial. You can do this by sending a server header. A meta element won’t do the trick, I’m afraid.

I’ve done it for my sites, which are served using Apache. I’ve got this in my .conf file:

<IfModule mod_headers.c>
Header always set Permissions-Policy "interest-cohort=()"
</IfModule>
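
If your server is nginx rather than Apache, the equivalent one-liner in your server block should do the same job:

add_header Permissions-Policy "interest-cohort=()" always;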

If you don’t have access to your server, tough luck. But if your site runs on WordPress, there’s a proposal to opt out of FLoC by default.

Interestingly, none of the Chrome devs that I follow are saying anything about FLoC. They’re usually quite chatty about proposals for potential standards, but I suspect that this one might be embarrassing for them. It was a similar situation with AMP. In that case, Google abused its monopoly position in search to blackmail publishers into using Google’s format. Now Google’s monopoly in advertising is compromising the integrity of its browser. In both cases, it makes it hard for Chrome devs to claim to have the web’s best interests at heart.

But one of the advantages of having a huge share of the browser market is that Chrome can just plough ahead and unilaterally implement whatever it wants even if there’s no consensus from other browser makers. So that’s what Google is doing with FLoC. But their justification for doing this doesn’t really work unless other browsers play along.

Here’s Google’s logic:

  1. Third-party cookies are on their way out so advertisers will no longer be able to use that technology to target users.
  2. If we don’t provide an alternative, advertisers and other third parties will use fingerprinting, which we all agree is very bad.
  3. So let’s implement Federated Learning of Cohorts so that advertisers won’t use fingerprinting.

The problem is with step three. The theory is that if FLoC gives third parties what they need, then they won’t reach for fingerprinting. Even if there were any validity to that hypothesis, the only chance it has of working is if every browser joins in with FLoC. Otherwise ad tech companies are leaving money on the table. Can you seriously imagine third parties deciding that they just won’t target iPhone or iPad users any more? Remember that Safari is the only real browser on iOS so unless FLoC is implemented by Apple, third parties can’t reach those people …unless those third parties use fingerprinting instead.

Google have set up a situation where it looks like FLoC is going head-to-head with fingerprinting. But if FLoC becomes a reality, it won’t be instead of fingerprinting, it will be in addition to fingerprinting.

Google is quite right to point out that fingerprinting is A Very Bad Thing. But their concerns about fingerprinting sound very hollow when you see that Chrome is pushing ahead and implementing a raft of browser APIs that other browser makers quite rightly point out enable more fingerprinting: Battery Status, Proximity Sensor, Ambient Light Sensor and so on.

When it comes to those APIs, the message from Google is that fingerprinting is a solvable problem.

But when it comes to third party tracking, the message from Google is that fingerprinting is inevitable and so we must provide an alternative.

Which one is it?

Google’s flimsy logic for why FLoC is supposedly good for end users just doesn’t hold up. If they were honest and said that it’s to maintain the status quo of the ad tech industry, it would make much more sense.

The flaw in Google’s reasoning is the fundamental idea that tracking is necessary for advertising. That’s simply not true. Sacrificing user privacy is fundamental to behavioural advertising …but behavioural advertising is not the only kind of advertising. It isn’t even a very good kind of advertising.

Marko Saric sums it up:

FLoC seems to be Google’s way of saving a dying business. They are trying to keep targeted ads going by making them more “privacy-friendly” and “anonymous”. But behavioral profiling and targeted advertisement is not compatible with a privacy-respecting web.

What’s striking is that the very monopolies that make Google and Facebook the leaders in behavioural advertising would also make them the leaders in contextual advertising. Almost everyone uses Google’s search engine. Almost everyone uses Facebook’s social network. An advertising model based on what you’re currently looking at would keep Google and Facebook in their dominant positions.

Google made their first many billions exclusively on contextual advertising. Google now prefers to push the message that behavioral advertising based on personal data collection is superior but there is simply no trustworthy evidence to that.

I sincerely hope that Chrome will align with Safari, Firefox, Vivaldi, Brave, Edge and every other web browser. Everyone already agrees that fingerprinting is the real enemy. Imagine the combined brainpower that could be brought to bear on that problem if all browsers made user privacy a priority.

Until that day, I’m not sure that Google Chrome can be considered a user agent.

Tuesday, April 20th, 2021

Numbers

Core web vitals from Google are the ingredients for an alphabet soup of exclusionary initialisms. But once you get past the unnecessary jargon, there’s a sensible approach underpinning the measurements.

From May—no, June—these measurements will be a ranking signal for Google search so performance will become more of an SEO issue. This is good news. This is what Google should’ve done years ago instead of pissing up the wall with their dreadful and damaging AMP project that blackmailed publishers into using a proprietary format in exchange for preferential search treatment. It was all done supposedly in the name of performance, but in reality all it did was antagonise users and publishers alike.

Core web vitals are an attempt to put numbers on user experience. This is always a tricky balancing act. You’ve got to watch out for the McNamara fallacy. Harry has already started noticing this:

A new and unusual phenomenon: clients reluctant (even refusing) to fix performance issues unless they directly improve Vitals.

Once you put a measurement on something, there’s a danger of focusing too much on the measurement. Chris is worried that we’re going to see tips’n’tricks for gaming core web vitals:

This feels like the start of a weird new era of web performance where the metrics of web performance have shifted to user-centric measurements, but people are implementing tricky strategies to game those numbers with methods that, if anything, slightly harm user experience.

The map is not the territory. The numbers are a proxy for user experience, but it’s notoriously difficult to measure intangible ideas like pain and frustration. As Laurie says:

This is 100% the downside of automatic tools that give you a “score”. It’s like gameification. It’s about hitting that perfect score instead of the holistic experience.

And Ethan has written about the power imbalance that exists when Google holds all the cards, whether it’s AMP or core web vitals:

Google used its dominant position in the marketplace to force widespread adoption of a largely proprietary technology for creating websites. By switching to Core Web Vitals, those power dynamics haven’t materially changed.

We would do well to remember:

When you measure, include the measurer.

But if we’re going to put numbers to user experience, the core web vitals are a pretty good spread of measurements: largest contentful paint, cumulative layout shift, and first input delay.
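
If you want to gather those numbers from real users, Google publishes a small JavaScript library for exactly that. A sketch, using the web-vitals API as it stood at the time of writing:

import {getCLS, getFID, getLCP} from 'web-vitals';

// Each function takes a callback that receives the metric as it's measured
getCLS(console.log);
getFID(console.log);
getLCP(console.log);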

(If you prefer using initialisms, remember that LCP is Link Control Protocol, CLS is Community Legal Services, and FID is Flame Ionization Detector. Together they form CWV, Catholic War Veterans.)

Tuesday, April 6th, 2021

Web Browser Engineering

It’s heavy on computer science, but this is a fascinating endeavour. It’s a work-in-progress book that not only describes how browsers work, but invites you to code along too. At the end, you get a minimum viable web browser (and more knowledge than you ever wanted about how browsers work).

As a black box, the browser is either magical or frustrating (depending on whether it is working correctly or not!). But that also makes a browser a pretty unusual piece of software, with unique challenges, interesting algorithms, and clever optimizations. Browsers are worth studying for the pure pleasure of it.

See how the sausage is made and make your own sausage!

This book explains, building a basic but complete web browser, from networking to JavaScript, in a thousand lines of Python.

Monday, April 5th, 2021

Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know. | Electronic Frontier Foundation

Following on from the piece they ran called Google’s FLoC Is a Terrible Idea, the EFF now have the details of the origin trial and it’s even worse than what was originally planned.

I strongly encourage you to use a privacy-preserving browser like Firefox or Safari.

Monday, March 29th, 2021

Compat2021: Eliminating five top compatibility pain points on the web

Good to see Google, Mozilla, and Apple collaborating on fixing cross-browser CSS compatibility issues:

  1. flexbox
  2. grid
  3. position: sticky
  4. aspect-ratio
  5. transforms

You can track progress here.

Chromium Blog: A safer default for navigation: HTTPS

Just over a year ago, I pondered some default browser behaviours and how they might be updated.

The first one is happening: Chrome is going to assume https before checking for http.

Now what about the other default behaviour, one that’s almost 15 years old? When might a viewport width value of device-width become the default?
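
For reference, this is the line of boilerplate we’ve all been copying and pasting into every document for well over a decade:

<meta name="viewport" content="width=device-width, initial-scale=1">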

Wednesday, March 24th, 2021

prefers-reduced-motion: Taking a no-motion-first approach to animations

Given the widespread browser support for prefers-reduced-motion now, this approach makes a lot of sense.
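
The gist of the no-motion-first approach: ship no animations by default, then layer motion on for users who haven’t asked for it to be reduced. A minimal sketch (the .toggle selector is just for illustration):

/* No motion by default */
.toggle {
  transition: none;
}

/* Motion only for users who haven't asked for it to be reduced */
@media (prefers-reduced-motion: no-preference) {
  .toggle {
    transition: transform 0.3s ease-out;
  }
}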

Tuesday, March 23rd, 2021

Service worker weirdness in Chrome

I think I’ve found some more strange service worker behaviour in Chrome.

It all started when I was checking out the very nice new redesign of WebPageTest. I figured while I was there, I’d run some of my sites through it. I passed in a URL from The Session. When the test finished, I noticed that the “screenshot” tab said that something was being logged to the console. That’s odd! And the file doing the logging was the service worker script.

I fired up Chrome (which isn’t my usual browser), and started navigating around The Session with dev tools open to see what appeared in the console. Sure enough, there was a failed fetch attempt being logged. The only time my service worker script logs anything is in the catch clause of fetching pages from the network. So Chrome was trying to fetch a web page, failing, and logging this error:

The service worker navigation preload request failed with a network error.

But all my pages were loading just fine. So where was the error coming from?

After a lot of spelunking and debugging, I think I’ve figured out what’s happening…

First of all, I’m making use of navigation preloads in my service worker. That’s all fine.
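
(For context, enabling navigation preloads is a one-off bit of code in the service worker’s activate event. A minimal sketch, feature-detected so it does nothing in browsers without support:)

self.addEventListener('activate', (event) => {
  event.waitUntil((async () => {
    // Turn on navigation preloads, if the browser supports them
    if (self.registration.navigationPreload) {
      await self.registration.navigationPreload.enable();
    }
  })());
});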

Secondly, the website is a progressive web app. It has a manifest file that specifies some metadata, including start_url. If someone adds the site to their home screen, this is the URL that will open.

Thirdly, Google recently announced that they’re tightening up the criteria for displaying install prompts for progressive web apps. If there’s no network connection, the site still needs to return a 200 OK response: either a cached copy of the URL or a custom offline page.

So here’s what I think is happening. When I navigate to a page on the site in Chrome, the service worker handles the navigation just fine. It also parses the manifest file I’ve linked to and checks to see if that start URL would load if there were no network connection. And that’s when the error gets logged.

I only noticed this behaviour because I had specified a query string on my start URL in the manifest file. Instead of a start_url value of /, I’ve set a start_url value of /?homescreen. And when the error shows up in the console, the URL being fetched is /?homescreen.
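
In other words, the relevant member of my manifest file looks like this (everything else elided):

{
  "start_url": "/?homescreen"
}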

Crucially, I’m not seeing a warning in the console saying “Site cannot be installed: Page does not work offline.” So I think this is all fine. If I were actually offline, there would indeed be an error logged to the console and that start_url request would respond with my custom offline page. It’s just a bit confusing that the error is being logged when I’m online.

I thought I’d share this just in case anyone else is logging errors to the console in the catch clause of fetches and is seeing an error even when everything appears to be working fine. I think there’s nothing to worry about.

Update: Jake confirmed my diagnosis and agreed that the error is a bit confusing. The good news is that it’s changing. In Chrome Canary the error message has already been updated to:

DOMException: The service worker navigation preload request failed due to a network error. This may have been an actual network error, or caused by the browser simulating offline to see if the page works offline: see https://w3c.github.io/manifest/#installability-signals

Much better!

Saturday, March 20th, 2021

Dropping Support For IE11 Is Progressive Enhancement · The Ethically-Trained Programmer

Any time or effort spent getting your JavaScript working in IE11 is wasted time that could be better spent making a better experience for users without JavaScript.

I agree with this approach.

With a few minor omissions and links, you can create a site that works great in modern browsers with ES6+ and acceptably in browsers without JavaScript. This approach is more sustainable for teams without the resources for extensive QA, and more beneficial to users of nonstandard browsers. Trying to recreate functionality that already works in modern browsers in IE11 is thankless work that is doomed to neglect.

Monday, March 15th, 2021

A Live Interview With Rachel Andrew and Jeremy Keith on Vimeo

I really enjoyed this 20 minute chat with Eric and Rachel all about web standards, browsers, HTML and CSS.

Friday, March 12th, 2021

The Performance Inequality Gap, 2021 - Infrequently Noted

Developers, particularly in Silicon Valley firms, are definitionally wealthy and enfranchised by world-historical standards. Like upper classes of yore, comfort (“DX”) comes with courtiers happy to declare how important comfort must surely be. It’s bunk, or at least most of it is.

As frontenders, our task is to make services that work well for all, not just the wealthy. If improvements in our tools or our comfort actually deliver improvements in that direction, so much the better. But we must never forget that measurable improvement for users is the yardstick.

Thursday, March 11th, 2021

When service workers met framesets

Oh boy, do I have some obscure browser behaviour for you!

To set the scene…

I’ve been writing here in my online journal for almost twenty years. The official anniversary will be on September 30th. But this website has been online even longer than that, just in a very different form.

Here’s the first version of adactio.com.

Like a tour guide taking you around the ruins of some lost ancient civilisation, let me point out some interesting features:

  • Observe the .shtml file extension. That means it was once using Apache’s server-side includes, a simple way of repeating chunks of markup across pages. Scientists have been trying to reproduce the wisdom of the ancients using modern technology ever since.
  • See how the layout is 100vw and 100vh? Well, this was long before viewport units existed. In fact there is no CSS at all on that page. It’s one big table element with 100% width and 100% height.
  • So if there’s no CSS, where is the border-radius coming from? Let me introduce you to an old friend—the non-animated GIF. It’s got just enough transparency (though not proper alpha transparency) to fake rounded corners between two solid colours.
  • The management takes no responsibility for any trauma that might befall you if you view source. There you will uncover JavaScript from the dawn of time; ancient runic writing like if (navigator.appName == "Netscape")

Now if your constitution was able to withstand that, brace yourself for what happens when you click on either of the two links, deutsch or english.

You find yourself inside a frameset. You may also experience some disorienting “DHTML”—the marketing term given to any combination of JavaScript and positioning in the late ’90s.

Note that these are not iframes, they are frames. Different thing. You could create single page apps long before Ajax was a twinkle in Jesse James Garrett’s eye.

If you view source, you’ll see a React-like component system. Each frameset component contains frame components that are isolated from one another. They’re like web components. Each frame has its own (non-shadow) DOM. That’s because each frame is actually a separate web page. If you right-click on any of the frames, your browser should give the option to view the framed document in its own tab or window.
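
For anyone who never wrote one, a frameset document replaces the body element entirely. A hypothetical sketch (the file names are made up):

<frameset cols="200,*">
  <frame src="nav.html" name="nav">
  <frame src="main.html" name="main">
</frameset>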

Now for the part where modern and ancient technologies collide…

If you’re looking at the frameset URL in Firefox or Safari, everything displays as it should in all its ancient glory. But if you’re looking in Google Chrome and you’ve visited adactio.com before, something very odd happens.

Each frame of the frameset displays my custom offline page. The only way that could be served up is through my service worker script. You can verify this by opening the frameset URL in an incognito window—everything works fine when no service worker has been registered.

I have no idea why this is happening. My service worker logic is saying “if there’s a request for a web page, try fetching it from the network, otherwise look in the cache, otherwise show an offline page.” But if those page requests are initiated by a frame element, it goes straight to showing the offline page.

Is this a bug? Or perhaps this is the correct behaviour for some security reason? I have no idea.

I wonder if anyone has ever come across this before. It’s a very strange combination of factors:

  • a domain served over HTTPS,
  • that registers a service worker,
  • but also uses framesets and frames.

I could submit a bug report about this but I fear I would be laughed out of the bug tracker.

Still …the World Wide Web is remarkable for its backward compatibility. This behaviour is unusual because browser makers are at pains to support existing content and never break the web.

Technically a modern website (one that registers a service worker) shouldn’t be using deprecated technology like frames. But browsers still need to be able to support those old technologies in order to render old websites.

This situation has only arisen because the same domain—adactio.com—is host to a modern website and a really old one.

Maybe Chrome is behaving strangely because I’ve built my online home on ancient burial ground.

Update: Both Remy and Jake did some debugging and found the issue…

It’s all to do with navigation preloads and the value of event.preloadResponse, which I believe is only supported in Chrome; that would explain the differences between browsers.

According to this post by Jake:

event.preloadResponse is a promise that resolves with a response, if:

  • Navigation preload is enabled.
  • The request is a GET request.
  • The request is a navigation request (which browsers generate when they’re loading pages, including iframes).

Otherwise event.preloadResponse is still there, but it resolves with undefined.

Notice that iframes are mentioned, but not frames.

My code was assuming that if event.preloadResponse exists in my block of code for responding to page requests, then there’d be a response. But if the request was initiated from a frameset, it is a request for a page and event.preloadResponse does exist …but it’s undefined.

I’ve updated my code now to check this assumption (and fall back to fetch).
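
If you’ve made the same assumption in your own service worker, the defensive version looks something like this. It’s a sketch rather than my exact code, and /offline stands in for whatever custom offline page you’ve cached:

self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith((async () => {
      try {
        // The preload response can resolve with undefined (e.g. for frames)
        const preloadResponse = await event.preloadResponse;
        if (preloadResponse) {
          return preloadResponse;
        }
        // No preload response: fall back to a regular network fetch
        return await fetch(event.request);
      } catch (error) {
        // Network failure: try the cache, then the offline page
        const cachedResponse = await caches.match(event.request);
        return cachedResponse || caches.match('/offline');
      }
    })());
  }
});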

This may technically still be a bug though. Shouldn’t a page loaded from a frameset count as a navigation request?

Monday, March 8th, 2021

Progressive enhancement and accessibility redux - QuirksBlog

This is a really interesting take on the intersection between accessibility and progressive enhancement (which I always felt was there, but this expresses it well):

Accessibility aims to optimize an experience across a spectrum of user capabilities. Progressive enhancement aims to optimize an experience across a spectrum of user agent capabilities.

Indeed, if you broaden the definition of “user agent” to include a user’s physiology, I think the concepts become nearly identical.